MELBOURNE, VIC — A prominent Australian lawyer apologized to a judge after filing submissions in a murder trial that contained fabricated quotes and invented case judgments generated by artificial intelligence.
The blunder in the Supreme Court of Victoria state is another in a litany of mishaps AI has caused in justice systems around the world.
Rishi Nathwani, a defense lawyer who holds the prestigious title of King’s Counsel, took full responsibility for submitting the erroneous information in the case of a teenager charged with murder, according to court documents reviewed by The Associated Press on Friday.
“We are deeply sorry and embarrassed for what occurred,” Nathwani told Justice James Elliott on Wednesday, on behalf of the defense team.
The AI-generated errors caused a 24-hour delay in resolving a case that Elliott had hoped to conclude on Wednesday. On Thursday, Elliott ruled that Nathwani’s client, who cannot be identified because of their age, was not guilty of murder by reason of mental impairment.
“At the risk of understatement, the manner in which these events have unfolded is unsatisfactory,” Elliott told lawyers on Thursday.
“The court’s trust in the accuracy of counsel submissions is crucial for the proper functioning of justice,” Elliott emphasized.
The erroneous submissions included invented quotes from a supposed legislative speech and nonexistent case references allegedly from the Supreme Court.
Elliott’s associates discovered the errors when they could not find the cited cases and asked the defense lawyers to provide copies.
The lawyers admitted the citations “do not exist” and that the submission contained “fictitious quotes,” court documents say.
The lawyers explained they checked that the initial citations were accurate and wrongly assumed the others would also be correct.
The submissions were also sent to prosecutor Daniel Porceddu, who didn’t check their accuracy.
The judge noted that the Supreme Court released guidelines last year for how lawyers use AI.
“It is not acceptable for artificial intelligence to be used unless the product of that use is independently and thoroughly verified,” Elliott said.
The court documents do not identify the generative artificial intelligence system used by the lawyers.
Similarly, in the United States in 2023, a federal judge levied $5,000 penalties against two lawyers and their law firm when ChatGPT was identified as the source of fictitious legal documentation in an aviation injury lawsuit.
Judge P. Kevin Castel said they acted in bad faith. But he credited their apologies and remedial steps taken in explaining why harsher sanctions were not necessary to ensure they or others won’t again let artificial intelligence tools prompt them to produce fake legal history in their arguments.
Later that year, more fictitious court rulings invented by AI were cited in legal papers filed by lawyers for Michael Cohen, a former personal lawyer for U.S. President Donald Trump. Cohen took the blame, saying he didn’t realize that the Google tool he was using for legal research was also capable of so-called AI hallucinations.
British High Court Justice Victoria Sharp warned in June that providing false material as if it were genuine could be considered contempt of court or, in the “most egregious cases,” perverting the course of justice, which carries a maximum sentence of life in prison.
Copyright 2025 Associated Press. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.