The 86-year-old head of the Catholic Church, Pope Francis, was not, in fact, spotted wrapped in a white Balenciaga puffer jacket, a silver crucifix hanging from his neck.
Yet, when the Midjourney-generated image circulated across the internet in March, it was met with instant glee, showing up on thousands of Instagram stories and Twitter timelines within minutes.
The image was a “deepfake,” a type of manufactured media created with deep learning artificial intelligence technology.
Outside of popular culture, deepfakes, which have become more sophisticated and easier to create with the democratization of generative AI tools like Midjourney and DALL-E, are poised to permeate the legal process.
Within the last year, at least two separate trials have included claims from opposing parties that the evidence presented was a deepfake—in a Tesla lawsuit involving Elon Musk, and in a case related to the Jan. 6 riots involving former President Donald Trump. While both judges determined that the evidence was not manufactured, attorneys and AI experts believe these instances are likely the prologue to a much longer problem.
To be sure, more AI-generated images are likely to come into court as evidence. While some deepfakes will be caught by the first line of defense against inauthenticity—e-discovery professionals well-versed in the rules of evidence—others may be stopped by a tech-savvy trial judge acting as gatekeeper. Some deepfakes, however, will end up causing protracted “battles of experts” or leaving unwanted impressions on a jury.
With their growing prominence, deepfakes have the potential to throw a wrench in the traditional understanding of evidence in courts. But whether the judicial system is prepared for them is an open question.
The Education Defense
Lee Tiedrich, a professor of ethical technology at Duke Law and a former partner at Covington & Burling, told Legaltech News that the quality of audio and visual deepfakes is only improving, and the judicial authentication process isn’t necessarily equipped to cope just yet.
“I think there are two sets of issues that need to be addressed,” Tiedrich said. “One is, the judge needs to decide, ‘Am I going to admit this evidence or not?’ and now that we have a lot more deepfakes in society, it puts a lot of pressure on the judges on how [to] authenticate the evidence.”
Of course, the obvious move for judges is to have a skilled and trusted technical expert on hand, someone who can authenticate the evidence and keep abreast of the latest technology behind these AI-generated visuals and recordings.
Tiedrich pointed out that deepfake evidence could come into court in two ways: first, at the center of a defamation case—similar to a case in Philadelphia about a woman allegedly harassing students using deepfake content; second, as fake digital evidence in any civil or criminal proceeding.
In either case, judges need to be educated on the language and basic concepts behind generative AI, and the tools available for authentication. For Tiedrich, who holds workshops educating trial judges on ethics and technology, the judge is a key player who might set the tone not only for the education of counsel in the room but also for the education of juries, who are likely to have to make determinations on deepfakes.
“Sensitizing the judges to the fact that deepfakes are really prevalent, and preparing them to at least be able to ask the right questions around authentication could address some issues,” she noted. “But the other thing is educating the bar. I think most lawyers want to do a responsible job and do not want to fool the court with false evidence. But sometimes they may not know the best techniques to spot a deepfake.”
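To make one such technique concrete: a genuine photograph usually carries camera EXIF metadata (make, model, timestamp), while output from generators such as Midjourney typically carries none. Below is a minimal, purely illustrative Python sketch using the Pillow imaging library; the exhibit file name is hypothetical, and an empty result is a flag for deeper forensic review, not proof of fakery.

```python
# First-pass screen for a suspect image: list its EXIF metadata.
# Generator output often carries none; absence is suggestive, never conclusive.
# Requires: pip install Pillow
from PIL import Image
from PIL.ExifTags import TAGS

def summarize_exif(path: str) -> dict:
    """Return a readable dict of EXIF tags, empty if none are present."""
    with Image.open(path) as img:
        exif = img.getexif()
        return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

tags = summarize_exif("exhibit_17.jpg")  # hypothetical exhibit file
if not tags:
    print("No EXIF metadata found - flag for deeper forensic review.")
else:
    for name, value in tags.items():
        print(f"{name}: {value}")
```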
‘A Widget Is a Widget’
Though deepfakes pose new problems for the court, that doesn’t mean judges and lawyers don’t have tools to tackle the nascent technology—such as the rules of evidence and technical experts.
Ron Hedges, a former magistrate judge for the District of New Jersey and the principal at Ronald J. Hedges, told Legaltech News that he believes the main issue around deepfakes in court is going to end up being about authentication, and then about admissibility.
The “gatekeeper,” so to speak, when it comes to authentication would be the e-discovery teams, he said, whereas when it comes to admissibility, it would have to be the judge.
“Number one: we’ve got existing rules that courts are going to have to use, because I don’t see any new rules coming down,” Hedges said, referring specifically to Federal Rule of Evidence 901. “That’s a whole series of rules about authentication.”
For now, Hedges believes the current rules, including FRE 901 on authentication and the FRE 902(13) and 902(14) provisions on self-authenticating electronic records and data copied from electronic devices, may be enough to guide judges dealing with potential deepfake evidence.
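In practice, the FRE 902(14) route turns on “digital identification,” typically a cryptographic hash: if the hash of a produced copy matches the hash of the data on the source device, the copy is bit-for-bit identical. A minimal sketch of that comparison using Python’s standard library, with hypothetical file paths:

```python
# FRE 902(14)-style check: certify data copied from an electronic device
# by comparing cryptographic hashes of the original and the produced copy.
import hashlib

def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

original = sha256_of("seized_device/video.mp4")  # hypothetical source file
produced = sha256_of("production/video.mp4")     # hypothetical produced copy
print("Match" if original == produced else "MISMATCH - copy is not authentic")
```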
The problem is likely going to be about long, drawn-out e-discovery battles, especially among technical experts, he said.
“I can easily see arguments being made about [not sharing] proprietary source code, trade secrets, protective orders and the like,” he noted.
For Heidi Saas, a data privacy and technology attorney at H. T. Saas, deepfakes and evolving generative AI technology do pose new challenges, but for courts that have adapted to a whirlwind of tech evolution, “a widget is a widget,” she said.
Like Hedges, she noted that FRE 902 and similar rules are in place in state courts for a reason, and she finds it unlikely that judges would be fazed by deepfakes.
Jury Impressions
While deepfake evidence could likely strain e-discovery teams and judges, its impact on juries is an open question.
Some believe it won’t be much of a problem. Saas said that juries deserve credit for being able to understand the complications of the new technology, and even if they are presented with deepfake evidence that is later thrown out, they are more discerning than many believe.
Hedges agreed, noting that the jury is part of the process and has to be trusted to be appropriately educated by the court and counsel. Indeed, FRE 403, the rule allowing the bench to exclude unfairly prejudicial evidence, exists for just this scenario: a judge wouldn’t knowingly admit a deepfake if they believed its impact would be detrimental.
Tiedrich, however, believes the issue is more complicated. “At the end of the day, jurors are humans,” she said. And “first impressions” are hard to shake, similar to the impact of the Pope in the puffer jacket, or fake pictures of former President Trump being arrested, she said.
“Not only do I worry that a jury won’t be able to unwind” the emotions they may feel seeing a fake image of someone attacking someone else, or a recording saying something threatening, but “if this gets to the point where we don’t have ways to quickly authenticate [evidence], I worry about access to justice issues, prolonged trials and ultimately, you end up with an expensive ‘Battle of Experts.’”