In a lawsuit against Avianca Airlines, the plaintiff’s lawyer used the artificial intelligence chatbot ChatGPT for his legal research, producing a brief that cited nonexistent court decisions.
The case started when a man named Roberto Mata sued the airline Avianca after he suffered injuries when a metal serving cart struck his knee during a flight to Kennedy International Airport in New York.
When Avianca asked the court to dismiss the case, Roberto Mata’s lawyers submitted a brief citing more than half a dozen relevant court decisions. However, no one, including the judge, could find the judgments or the quotations cited in the brief.
It turned out that ChatGPT had generated everything, including bogus judicial decisions, bogus quotes, and bogus internal citations, according to a report by The New York Times.
Steven Schwartz, the lawyer who prepared the brief, recently submitted an affidavit stating that he had used the AI-powered program for his legal research and that it had fabricated the information.
Schwartz, who has been a practicing lawyer in New York for three decades, told Judge Kevin Castel that he had no intention of deceiving the court or the airline.
This lawyer is in a lot of trouble for using ChatGPT. #artificialintelligence #chatgpt #ai #openai pic.twitter.com/JkImAAlLyZ
— Edward Builds (@showprogress) May 28, 2023
Concerns Around Use of AI in Legal Profession
As AI becomes more prevalent in the legal field, many are worried about its impact on the profession and the ethical concerns surrounding its use.
Stephen Gillers, a legal ethics professor at New York University School of Law, stated that the issue was particularly acute among lawyers, who have been debating the value and the dangers of AI.
He added that lawyers needed to verify whatever information AI provides, and they couldn’t “just take the output and cut and paste it into your court filings.”
While the case of Roberto Mata v. Avianca may have been an isolated incident, it highlights the potential dangers associated with the use of generative AI in the legal profession (and in general), particularly when lawyers don’t take the time to verify the information.
How ChatGPT’s Bogus Citations Were Uncovered
The case started when Mata sued Avianca because an airline employee had struck him with a serving cart aboard Flight 670 from El Salvador to New York on Aug. 27, 2019.
Avianca subsequently asked the court to dismiss the case because the statute of limitations had expired.
In response, Roberto Mata’s lawyers filed a brief in March arguing that the lawsuit should continue, citing references and quotes from various court decisions that were later shown to be fabricated.
Avianca’s lawyers informed Judge Castel’s court that they were unable to find the cases cited in the brief, and when it came to Varghese v. China Southern Airlines, they hadn’t located it by caption or citation, nor had they found cases bearing any resemblance to it.
The judge ordered Mr. Mata’s attorneys to provide copies of the opinions referred to in their brief, which they did.
Mata’s lawyers submitted a compendium of eight judgments, and in most cases, they listed the court and judges who had issued them, the docket numbers, and the dates.
However, Avianca’s lawyers told the judge they could not find these opinions on court dockets or in legal databases, because ChatGPT had invented everything.
Some lawyers are rejoicing in the failure of ChatGPT, hoping that it means generative AI won’t be able to take over their profession soon.
A lawyer used ChatGPT to help draft a filing, which led to many imagined cases being cited.
The result? An embarrassing disaster.
But don’t worry, AI is ready to take over our jobs.
Source: Reason pic.twitter.com/ufSxZKJOQ5
— Markets & Mayhem (@Mayhem4Markets) May 28, 2023
While ChatGPT certainly can’t perform all the vital duties lawyers carry out daily, as this very case shows, it can still make lawyers more efficient, as long as its output is verified carefully.