Lawyer Discovers ChatGPT Created Fake Cases For His Legal Research

While ChatGPT, the cutting-edge artificial intelligence (AI) chatbot, has been widely praised for uses ranging from quick, efficient customer service to personalized learning experiences, it has also drawn criticism for enabling fraud, spreading misinformation, and producing inaccurate information.

In one such case, a New York-based lawyer at the firm Levidow, Levidow & Oberman is now facing a court hearing after it emerged that some of the legal cases cited in a brief researched with ChatGPT did not exist.

What Is The Case About?

The original case involved a man named Roberto Mata, who sued the Colombian airline Avianca, claiming he was injured when a serving cart struck his knee on Avianca Flight 670 from El Salvador to New York on August 27, 2019.

When Avianca recently asked a Manhattan federal judge to dismiss the case, Mata’s legal team filed a 10-page brief citing several previous court decisions to argue that the suit should proceed based on precedent.

The brief cited more than half a dozen court decisions, including “Varghese v. China Southern Airlines,” “Zicherman v. Korean Air Lines,” “Martinez v. Delta Airlines,” “Miller v. United Airlines,” and a few others.

Upon investigation, the airline’s lawyers discovered that some of the court decisions referenced in the brief did not exist and immediately notified the presiding judge in a letter.

Judge P. Kevin Castel of the Southern District of New York, who is presiding over the case, subsequently contacted Mata’s legal team. Expressing surprise at the incident, he said the court was faced with an “unprecedented circumstance.”

“Six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations,” Castel wrote in an order demanding an explanation from the plaintiff’s legal team.

Over the course of several filings, it emerged that the research in question had been done not by Peter LoDuca, the plaintiff’s attorney of record, but by one of his colleagues at the same law firm, Steven A. Schwartz.

Schwartz, who has practiced law in New York for more than 30 years, used ChatGPT to look for previous cases resembling Mata’s.

In an affidavit filed on Thursday, Schwartz said he had used the AI tool to “supplement” his research for the case and “greatly regrets” using it. He also added that he “had never previously used AI for legal research and was unaware that its content could be false.”

Schwartz even shared with the judge screenshots of his conversations with ChatGPT, in which he asked the chatbot whether a specific case, Varghese v. China Southern Airlines Co Ltd, was real; the AI tool responded that it was.

When Schwartz followed up by asking, “What is your source?”, ChatGPT replied that, “upon double-checking,” the case was real and could be found in “reputable legal databases,” including Westlaw and LexisNexis. It also claimed that the other cases it had provided to Schwartz were real.

Schwartz accepted responsibility for the mistake and promised never to use AI to “supplement” his legal research in the future “without absolute verification of its authenticity.”

In his written statement, Schwartz also said that LoDuca had played no role in the research of the relevant cases and was unaware of how it had been carried out.

Judge Castel has ordered the plaintiff’s lawyers to explain at a hearing on June 8 why they should not be sanctioned.

Source: NYT
