The problem with the ‘Bogus’ ChatGPT legal brief? It’s not the tech

ChatGPT is in the headlines yet again—and not for a good reason.

While many were unwinding over the Memorial Day weekend, the legal tech world was stirred up by news that first spread via The New York Times and The Verge: a lawyer had used ChatGPT to write a legal brief and filed it in New York federal court. The problem? It was full of fake case citations, and the lawyer never bothered to check them before filing.

Not surprisingly, the story garnered a wide range of reactions. Too many of them took the incident as a sign that generative AI is not ready for use in legal work.

Those takes miss the point. As has been the case since the dawn of the profession, lawyers are responsible for the work they provide to clients and courts. No technology, no matter how impressive or magic-seeming, absolves lawyers of their professional or ethical duties.

The ChatGPT Brief and Ensuing Debacle

The unfortunate brief was filed in the case of Mata v. Avianca in the U.S. District Court for the Southern District of New York. The case centres on alleged injuries suffered by the plaintiff when he was struck by a beverage cart during an Avianca flight.

How we got here was a domino effect of bad decisions by Mata’s lawyer, Steven A. Schwartz of Levidow, Levidow & Oberman, after Avianca moved to dismiss the case. Because Schwartz is not admitted to the Southern District of New York, he is joined in the case by co-counsel Peter LoDuca. The highlights of the timeline are:

● LoDuca filed an opposition to the motion that included the case citations in question.

● Defense counsel filed a reply memorandum on the same motion, stating that it could not find many of the cases cited by Mata’s attorneys and suggesting that they might not exist.

● The court ordered Mata’s attorneys to provide copies of the cases cited.

● LoDuca responded and provided decisions for the cited cases, which he noted “may not be inclusive of the entire opinions but only what is made available by online database”.

● Defense counsel submitted a letter to the court asserting that many of the cases were fabricated and nonexistent.

● The court issued an order to show cause as to why LoDuca shouldn’t be sanctioned, citing “an unprecedented circumstance” involving a court filing “replete with citations to non-existent cases” and the submission of “bogus judicial decisions with bogus quotes and bogus internal citations.”

● LoDuca submitted an affidavit saying that Schwartz was entirely responsible for handling the case, including the research in question.

● Schwartz submitted an affidavit saying he was entirely responsible for the case, admitting to using ChatGPT to generate the brief and the cases (more on that below), and stating that LoDuca had no responsibility for the research in question.

Now, both attorneys have been ordered to appear at a sanctions hearing on June 8. Schwartz has retained attorneys from Frankfurt Kurnit Klein & Selz to represent him at the hearing.

The ‘Bogus’ Brief Is a Lawyer Problem, Not a Tech Problem

While there’s no dispute that ChatGPT generated the fake case citations and decisions at issue, we likely never would have heard about it if Schwartz hadn’t made serious mistakes at multiple stages in the litigation.

Even setting aside the technology (which, admittedly, is difficult to do), Schwartz’s failure to check the veracity of what he submitted to the court was his first mistake and a professional failing in its own right. No partner or senior associate would submit a junior associate’s work without checking it. Work generated by technology is no different.

“This person did not need to know anything about AI at all to know what his obligations were as a litigator,” explained Laura Safdie, COO and general counsel of Casetext. “None of this has to do with the technology itself. It has to do with a lawyer who didn’t live up to their obligations as a litigator, which is very clear to all of us, regardless of what technology you’re working with.”

The tendency to think of AI as magic and the impressiveness of its outputs do not absolve lawyers of their obligations. “Generative AI is a tool. It’s an exceptionally powerful tool, but it’s a tool,” Safdie said. “It doesn’t replace the lawyer. It doesn’t replace the lawyers’ ethical obligation to their clients. It just changes the way that the lawyer lives out that responsibility. So for instance, if I was doing traditional legal research on Westlaw or LexisNexis, in no world am I just taking the output of my search and copying and pasting it into a brief. Similarly, if you’re doing legal research powered by generative AI, you also should not copy and paste the output and put it into a brief.”

This leads to the central issue in this case: technical incompetence. Schwartz admitted that “the citations and opinions in question were provided by Chat GPT [sic] which also provided its legal source and assured the reliability of its content.” He further called ChatGPT “a source that has revealed itself to be unreliable.”

However, it was widely known that ChatGPT was unreliable and prone to hallucinations well before Schwartz relied on it in March 2023. And even if Schwartz had somehow missed those warnings, he should have had an inkling that there might be an issue when his opponent flagged the case citations to the court and he was asked to produce the decisions.

Rather than stop to question it, however, Schwartz doubled down on ChatGPT, using it to generate the nonexistent “decisions.” When it came time to verify that the decisions were accurate, who did he ask? You guessed it—ChatGPT.

For many, the repeated reliance on the tool without stopping to consider its accuracy or check its outputs is what makes this incident such an egregious cautionary tale. “The initial use of ChatGPT is unfortunate but comprehensible. The failure to perform even the most rudimentary spot-checking is inexcusable but not that surprising. The subsequent fabrication of cases is something else entirely. That is willful misconduct,” said D. Casey Flaherty, co-founder and chief strategy officer at LexFusion.

While Schwartz currently faces sanctions under the Federal Rules of Civil Procedure, he is also likely to face disciplinary action. Rule 1.1 of the American Bar Association’s Model Rules of Professional Conduct imposes a duty of competence on attorneys, and a majority of states have adopted a similar rule.

Comment 8 to Rule 1.1 makes it clear that competence includes technology competence:

“To maintain the requisite knowledge and skill, a lawyer should keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology, engage in continuing study and education and comply with all continuing legal education requirements to which the lawyer is subject.”

Understanding technology means not only understanding how it works but also what its shortcomings are.

If there were any doubt that the duty of technology competence includes advances in AI, the ABA adopted Resolution 112 in 2019, long before ChatGPT was released:

“RESOLVED, That the American Bar Association urges courts and lawyers to address the emerging ethical and legal issues related to the usage of artificial intelligence (“AI”) in the practice of law including (1) bias, explainability, and transparency of automated decisions made by AI; (2) ethical and beneficial usage of AI; and (3) controls and oversight of AI and the vendors that provide AI.”

In Schwartz’s case, such controls and oversight were as nonexistent as the cases he cited.

ChatGPT and Generative AI Are Not Synonymous, and Lawyers Need to Understand the Difference

Following the news of the ChatGPT brief and the upcoming sanctions hearing, social media was flooded with reactions ranging from hot takes to thoughtful discussions.

One of the recurring hot takes has been that this incident is a signal that lawyers should not be using AI. That interpretation of events betrays a fundamental misunderstanding of generative AI and how it works.

In too many discussions, “ChatGPT” has come to be used as shorthand for all generative AI, almost as if it were the Band-Aid or Kleenex of AI. It’s critical to understand, however, that this is inaccurate.

ChatGPT is a public chatbot. The potential pitfalls of using it in legal work, beyond hallucinations, have been well documented, including possible ethics, privilege and confidentiality concerns.

To counteract those concerns, legal technology providers have been creating generative AI-powered tools specifically designed for use in legal practice. The industry is also focused on fine-tuning publicly available AI models to meet the needs of the legal profession and appropriately power legal work.

Even with those legal-specific tools, however, blind reliance on AI of the kind Schwartz exhibited would still be inexcusable. “No matter how powerful an AI tool is, no matter how tailored it is for the practice of law—which ChatGPT is not, by the way—in no circumstances is anyone who supports AI or who builds these tools indicating that you should just be copying and pasting,” Safdie said.

“While this may be an unfortunate incident, it should not be construed as an argument against the use of generative AI in the legal field,” agreed Brandi Pack, legal tech analyst and AI consultant at UpLevel Ops. “Instead, it underscores the importance of legal organizations implementing appropriate AI tools as well as providing proper training for their staff. Legal tech vendors are acutely aware of potential risks related to data privacy and system hallucinations and understand how to mitigate them.”

For people like Safdie who work with AI-powered legal tools on a daily basis, it is “disappointing, but not surprising” that some have taken the ChatGPT brief incident as a reason to argue against the use of AI in the legal industry. “It mirrors the kind of fearmongering that you often see in response to new technologies generally,” she said. “Anytime something is new, we get scared and we say—see it’s broken, and no one should touch it.”

In reality, however, “you need to use your own professional common sense, and the ethical obligations that you’re used to maintaining, irrespective of what technology you’re working with,” Safdie concluded.
