A cautionary tale is unfolding in courtrooms across the nation as artificial intelligence (AI) begins to weave its way into legal proceedings. Morgan & Morgan, a prominent U.S. personal injury law firm, recently issued an urgent warning to its extensive network of over 1,000 attorneys: using AI to generate fake case law could result in termination.
This announcement follows an incident in which a federal judge in Wyoming threatened to sanction two of the firm's lawyers for including fabricated case citations in a lawsuit against Walmart. One of the attorneys involved admitted to using an AI program that "hallucinated" the cases and said the error was inadvertent.
This isn’t an isolated incident. Reuters has uncovered at least seven cases over the past two years where courts have questioned or disciplined lawyers for incorporating AI-generated legal fiction into their filings. This emerging trend presents a significant challenge for both litigants and judges, especially as AI tools like ChatGPT become increasingly prevalent.
The Walmart case stands out because it involves a well-known law firm and a major corporate defendant, underscoring that even well-resourced legal teams are exposed to the risks of AI-generated errors.
While generative AI offers the potential to expedite legal research and drafting, legal experts are urging caution. AI models generate responses based on statistical patterns learned from vast datasets, rather than by verifying the accuracy of the information. This can lead to the creation of false information, or “hallucinations,” as they’re known in the industry.
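To make that failure mode concrete, here is a minimal sketch of the kind of safeguard experts describe: independently verifying every citation in an AI-drafted document before it reaches a court. The citation pattern, the verified-citation store, and the example entries below are all hypothetical placeholders; in practice the lookup would go against an authoritative legal-research service or court records, never against the model's own output.

```python
import re

# Matches simple reporter-style citations such as "575 U.S. 320" or
# "999 F.3d 123". Real citation formats are far more varied; this
# pattern is a hypothetical placeholder for illustration only.
CITATION_PATTERN = re.compile(
    r"\b\d{1,4}\s+(?:U\.S\.|F\.\d[a-z]*|S\. Ct\.)\s+\d{1,4}\b"
)

# Hypothetical stand-in for a trusted case-law index. In practice this
# would be a query to an authoritative research service or the court's
# own records -- never the model that produced the draft.
VERIFIED_CITATIONS = {
    "575 U.S. 320",  # placeholder entry, not a real lookup result
}

def audit_draft(draft_text: str) -> list[str]:
    """Return every citation in an AI-drafted text that could not be verified.

    A language model generates statistically plausible text, so a citation
    can look perfectly real while referring to a case that does not exist.
    Each one must therefore be checked against an independent source.
    """
    found = CITATION_PATTERN.findall(draft_text)
    return [c for c in found if c not in VERIFIED_CITATIONS]

if __name__ == "__main__":
    draft = "As held in 575 U.S. 320 and reaffirmed in 999 F.3d 123, ..."
    for citation in audit_draft(draft):
        print(f"UNVERIFIED -- requires human review: {citation}")
```

The particulars matter less than the direction of trust: the model's draft is treated as unverified input, and authority flows only from an independent source.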
Andrew Perlman, dean of Suffolk University’s law school and an advocate for using AI to enhance legal work, emphasizes that attorney ethics rules require lawyers to thoroughly vet and stand by their court filings. Failure to do so, even if the misstatement is unintentional and produced by AI, can result in disciplinary action.
Several instances of AI-related legal blunders have already made headlines:
In June 2023, a federal judge in Manhattan fined two New York lawyers $5,000 for citing AI-generated cases in a personal injury case.
A New York federal judge considered sanctions against Michael Cohen, former lawyer for Donald Trump, after he mistakenly provided his attorney with fake case citations generated by Google’s AI chatbot Bard.
In November 2024, a Texas federal judge ordered a lawyer who had cited nonexistent cases in a wrongful termination lawsuit to pay a $2,000 penalty and attend a course on generative AI in the legal field.
A federal judge in Minnesota recently discredited a misinformation expert after he admitted to unintentionally citing fake, AI-generated citations in a case involving a “deepfake” parody of Vice President Kamala Harris.
Harry Surden, a law professor at the University of Colorado, recommends that lawyers invest time in understanding the strengths and weaknesses of AI tools. He believes these incidents reflect a "lack of AI literacy" within the legal profession, but stresses that the technology itself is not the root of the problem.