In the last year or so, forms and uses of artificial intelligence have proliferated across the Internet. Users now have easy access to technology previously seen only in science fiction: one can ask a “chatbot” to draft emails, request bespoke artwork generated from a text prompt, and get concise, direct answers at the top of search results via AI-generated summaries. This new technological field provides users with more convenience and options than ever before.
Should we encourage our clients to use it? Artificial intelligence has its perks, but it has drawbacks too, and they are severe enough that advising your clients against its use is well worth considering.
Consider, for example, those convenient search result summaries. How often have you Googled something for a specific result and been blindsided by nonsensical AI summaries? Recently, while searching for a specific statute on a novel legal issue, I was served a top-of-page summary packed with inaccurate and internally contradictory answers. In the software’s efforts to provide a direct answer, it failed to address the nuances and delicate complexities of the issue.
It would be impossible, in fact, for the AI software to give an appropriate answer. An appropriate answer hinges on the reader’s ability to interpret a lengthy and complicated passage of legal text and then apply the principles therein to a client’s specific set of facts. For all the ways in which AI can produce a facsimile of analysis, these are not programmable skills. They are extremely advanced skills, honed by rigorous training in both legal knowledge and the type of acumen needed to engage with it.
Simply put, AI cannot truly replicate the results of an actual professional, especially when the purpose of that professional is to guide their client through specialized issues. AI cannot account for a client’s specific circumstances nor provide for the many different factors at play in any given case. It has no mechanism for synthesizing the intersections of different jurisdictions and procedural considerations. All it can do is review previously written material and cobble together summaries of tenuous legitimacy.
Still, clients are sometimes inclined to give credence to the summaries they see at the top of their search results. It would be difficult to fault them—there is no way to turn off AI-generated summaries. Oftentimes, clients are encouraged to see if there is an accessible answer to their question or even a “starting point” before bringing it to their attorney. A reasonable person may certainly assume those plain-language paragraphs are advice as good as their attorney’s. But, of course, they would be wrong.
It is, therefore, worth advising clients against taking AI answers at face value. Without that direction, clients are liable to rely on inaccurate information: they may decline to raise a legitimate concern with their attorney, or even act against their own best interests, on the strength of those summaries. Relying on AI is a surefire way for a client to complicate their issue when, instead, they could speak directly with their attorney and receive personally tailored answers.
If an appeal to a client’s personal interest is not sufficient, there are several other arguments worth raising against AI use. First, remind them that all AI systems, even familiar products like ChatGPT, Grok, and Midjourney, continue to struggle with machine learning bias. IBM has defined machine learning bias, also referred to as algorithm bias or simply AI bias, as “the occurrence of biased results due to human biases that skew the original training data or AI algorithm.” (IBM, What is AI bias?, 12/22/23) Machine learning bias leads to inaccurate and even discriminatory results, which is particularly concerning in interpersonal legal matters.
Additionally, AI is increasingly responsible for job displacement worldwide. Recent projections anticipate that the equivalent of some 300 million full-time jobs in the United States and Europe could be exposed to AI-related automation. (University of St. Thomas Newsroom, Generative AI’s Real-World Impact on Job Markets, 5/28/24) Some work will never be fully automated: attorneys, for example, and other highly specialized professionals cannot have their work replicated to any meaningful effect. But every engagement with AI contributes to this broader displacement.
Finally, AI carries serious environmental costs. The UN Environment Programme has found that AI deployment through data centers contributes to tremendous electronic waste, water consumption, and fossil fuel use. (UNEP, AI has an environmental problem. Here’s what the world can do about that., 9/21/24)
As AI use becomes more popular, consider cautioning clients against relying on AI for answers to legal questions. The convenience it offers is outweighed by its substandard performance on personalized legal matters and by broader humanitarian concerns. If clients must use these tools, encourage them to check in with you afterward, and always emphasize that you are the best resource for these questions.