In today’s discussion, I delve into a provocative question that is gaining traction: Are lawyers excessively attributing their mistakes to AI, especially when it comes to fabricated citations and quotes in legal documents?
The central inquiry here is:

- Might some of these instances of errors and sloppiness in legal filings be due to human lawyers being lax, either by design or by accident, crafting unsubstantiated quotes and citations by their own human hand, hoping they won't get caught, and, if they do get caught, connivingly saying that AI did it?

This is not to imply that these legal professionals merely neglected to verify AI-generated content. The conjecture is that some attorneys, under the pressure of tight deadlines, might craft quotes and citations themselves, assuming they are sufficiently accurate, without involving AI at all.
Such actions would undoubtedly breach ethical standards. It’s hard to imagine a lawyer risking their reputation in this manner (perhaps “inconceivable” as per The Princess Bride). Yet, one could argue that they might take this gamble if they believe they can easily shift blame onto AI.
AI becomes an all-too-convenient scapegoat. With widespread awareness of AI’s propensity for errors, highlighted in media reports, it becomes easy to attribute inaccuracies to technology. These machines are notorious for unpredictable behavior.
This situation paints a picture where blaming AI for errors has become somewhat normalized. It creates an ideal landscape in which human missteps can be recast as technological glitches. AI serves as the perfect smokescreen. Invoke the specter of AI, take your mild lumps, garner sympathy that it could happen to anyone, and move on with your day.
An outrageous proposition, or does it have a potential tinge of truth and reality?
Let’s talk about it.
This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).
AI And The Law
As a quick background, I’ve been extensively covering and analyzing a myriad of facets regarding the intersection of AI and the law for many years. You can find my writings not only in my Forbes column but also as posted in Bloomberg Law, ABA Law Journal, The National Jurist, The Global Legal Post, Lawyer Monthly, The Legal Technologist, MIT Computational Law Journal, and so on.
There are two major perspectives on the mixture of AI and law:
- (1) Law & AI. The application of laws to the governance and regulation of AI.
- (2) AI & Law. The application of AI to perform legal reasoning.
Thus, you can apply the law to AI, and conversely, you can apply AI to the law. For my big picture overview of both of these exciting and rapidly evolving realms, see my discussion at the link here and the link here.
When it comes to applying the law to AI, the aim is to establish suitable regulations and provide appropriate governance on how AI should be devised and implemented. There are longstanding concerns that AI makers aren’t giving due attention to the ethical ramifications of their wares. Ethical issues are construed as “soft laws” and aren’t as formidable as legally enacted laws, known as “hard laws”. To level the playing field and keep AI makers on the up-and-up, some believe that we need more AI laws.
On the other side of the coin is the application of AI to the law. This consists of using AI to aid legal activities. Lawyers tap into the latest AI to devise legal strategies, brainstorm to find creative legal arguments, draft court filings, and prepare for cases by having the AI pretend to be an able adversary. For my extensive coverage on AI for legal reasoning (AILR), see the link here.
Setting The Stage
Here’s the latest brouhaha about AI usage by lawyers.
Daily bulletins in the legal community keep highlighting lawyers who have used AI to prepare their court filings and ended up with fake legal citations and false quotations in their documents. I recently conducted a statistical analysis of the frequency of how often this is happening, see my discussion at the link here. For my overall coverage of how AI is being used by lawyers, see my comprehensive review at the link here.
These errors in legal briefings can potentially occur due to AI hallucinations. An AI hallucination is when generative AI or large language models (LLMs) such as ChatGPT, Claude, Grok, Gemini, Copilot, Llama, and other AIs veer into generating fictitious confabulations. For my in-depth coverage of AI hallucinations, see the link here and the link here.
I had long ago predicted that attorneys using AI might get careless in their legal efforts, allowing the AI to produce bogus content and failing to double-check by hand what the AI has generated for them (see my prediction in 2023, at the link here). This is an easy trap for attorneys to fall into. It goes like this. You use AI frequently, it seems to do a bang-up job, and you become lulled into thinking it is perfect in every way.
The problem for attorneys is that when they submit formal court filings, they are supposed to be responsible for the contents of the filings; thus, if AI has slipped in faked citations or false quotations, the lawyer is likely to be held accountable since they didn’t catch the erroneous content. I say “likely” accountable because judges and courts have been quite lenient so far, overall, and allowed excuses such as “the computer did it”. Attorneys often incur nothing more than a minor hand slap or mild rebuke, asserting that AI is new to the legal beagles and they were caught unawares. Sometimes, the reprimands are accompanied by a modest financial sanction or penalty, which is gradually ratcheting up as these instances continue to climb.
Judges and courts are beginning to lessen their patience and sense of charity in giving lawyers the benefit of the doubt. For the time being, the matter is still generally being treated with kid gloves. The expectation is that lawyers are facing a learning curve, and a bit more time is required before they will be able to properly handle the use of AI.
The Biggest Plot Twist Of All Time
Into this milieu comes a new and highly controversial proposition.
Suppose that some of these instances are not due to AI at all. Maybe some lawyers are wise to the game that you can blame errors on AI. A lawyer might have been in a hurry to get a legal document finished and didn’t have time to source actual citations and ferret out true quotations. Thus, they faked it. By human hand, they concocted citations and quotations that sounded good enough that hopefully the contents would pass the smell test.
No use of AI was involved. AI wasn’t in the picture. The lawyer didn’t know how to use AI or assumed it would take longer to use AI than to just craft the content out of their own noggin. Perhaps they were even fearful of trying to use AI. So, make the stuff up and make it look realistic.
Your first reaction might be that this is simply impossible. No self-respecting lawyer would do this. They know it is wrong. They know they can possibly be disbarred. Their legal career might end up shredded. It just wouldn’t be sensible to take such a wild chance.
The Mind Works In Mysterious Ways
Whoa, comes the retort, you’ve got to realize that lawyers are smart and logical, which includes having this mental model in their heads:
- Calculated risk = p(detection) × penalty × (1 – p(AI excuse accepted))
It is a mental calculation of rational corner-cutting under time pressure. The fictitious content might slip under the radar. The opposing side might not catch it. No catch, no harm, no foul. The probability of detection is perceived as currently being quite low.
The penalty if you are caught is relatively minimal if an AI excuse is at the ready. Just hang your head in shame and say that you have been bamboozled by AI. You are just like all those other unsuspecting lawyers who got snagged by the tricky, sneaky, atrocious, underhanded acts of AI. You plead that you, regrettably, overly relied on AI.
The calculated total risk seems to be pretty small.
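To make that back-of-the-envelope calculus concrete, here is a minimal Python sketch of the expected-cost model. Every probability and dollar figure below is hypothetical, invented purely for illustration; none of it reflects actual sanction data.

```python
def expected_cost(p_detection: float, penalty: float, p_excuse_accepted: float) -> float:
    """Expected cost of filing fabricated content under the rough model above.

    The penalty only lands when the fabrication is detected AND the
    AI excuse fails to persuade the court. All inputs are hypothetical.
    """
    return p_detection * penalty * (1 - p_excuse_accepted)

# Illustrative (invented) numbers for today's lenient climate: a 10% chance
# of detection, a $5,000 sanction, and the AI excuse accepted 80% of the time.
lenient = expected_cost(0.10, 5_000, 0.80)   # roughly $100

# Same conduct if courts verify AI usage and raise sanctions: 50% detection,
# a $25,000 sanction, and the excuse accepted only 10% of the time.
strict = expected_cost(0.50, 25_000, 0.10)   # roughly $11,250

print(f"lenient: ${lenient:,.0f}  strict: ${strict:,.0f}")
```

Under these made-up inputs, the expected cost of the gamble jumps by two orders of magnitude once detection, penalties, and excuse-verification all tighten, which is exactly the lever the reforms discussed below would pull.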
Before AI Hallucinations
This behavior wouldn’t likely fly very well before the emergence of AI hallucinations. In the past, if false citations and fake quotations were discovered, the finger would be unabashedly pointed at the lawyer. The lawyer might try dodging accountability by saying that a paralegal did it or that some other clerk messed up. This wouldn’t particularly work to get the lawyer off the hook. The viewpoint was that the lawyer is where the buck stops. Period, end of story.
Though that is supposed to still be true, the gradual popularity of AI hallucinations has dramatically softened that perspective. You can’t hold a lawyer fully accountable for some nutty thing that AI did. Sure, the lawyer should have double-checked. But, hey, this AI stuff is new, and no one really understands yet how to wrestle with it.
Into this accountability gap come some lawyers who see an ROI. If judges and courts are willing to give leeway for AI's misdeeds, it can seem rational to take chances that can be counterbalanced by blaming AI.
Lack of Proof That AI Did It
Another vital factor is that few judges press to get the specifics of how AI was used, including when or how the AI hallucinations occurred. This is sometimes asked for, but rarely is it provided. Instead, the lawyer offers some vague indications. This sorrowful handwaving is taken as a sufficient explanation. It's a dodge masquerading as an answer.
No need to provide detailed logs.
No need to prove that the errors were due to AI hallucinations.
Judges might assume they wouldn't know how to verify it anyway. Plus, it would consume more of the court's time to assess the AI evidence provided. Just take the word of the lawyer and presume that if they aren't being truthful, it will vaguely somehow someday catch up with them. They are to be given the benefit of the doubt on their lawyerly oaths.
Lawyers know this. And, as part of the risk calculation, this factors into the probability assessment. The irony is that the risk is perceived as quite low thanks to the handy-dandy AI excuse, while without that emergent excuse, the risk would be a lot higher. Human heads would normally roll. Now it's AI that takes the pressure. Thank goodness for AI.
Putting A Stop To This
If indeed any of these human lawyering shenanigans are taking place, which we don't know to be the case, the lawyers pursuing such a gambit are on a razor's edge. Any judge who discovers that the wool has been pulled over their eyes, having been told that AI was to blame when the error was straight-out human fabrication and had nothing to do with AI, can be expected to respond with fierce and unrelenting wrath.
All told, if there is a keen interest in putting a stop to any variation of this phenomenon, whether performed by AI or by human hand, the key involves:
- Significantly increase the probability of detection (heightens risk for lawyers who allow this to happen, fail to prevent it, or are playing footsies).
- Raise the stakes by significantly increasing penalties (again, amplifies risk).
- Force verification that these were indeed AI hallucinations (distinguish human error from AI errors).
- Recast AI hallucinations as a form of human error (i.e., the lawyer did not catch the AI hallucinations and therefore they committed a human error).
- No longer treat even first offenses as generally tolerable (bring down the hammer at the get-go, sending a chilling signal to all).
- Reinforce that lawyers are responsible for their filings (an already existing rule that shouldn’t be abridged due to the advent of AI).
- Establish mandatory certification requirements (must accompany filings and must provide third-party certification regarding all citations and quotations).
We come back to where the legal field needs to focus — Did the attorney fulfill their duty of competence and reasonable inquiry? Under rules like ABA Model Rule 1.1 (competence) and Rule 3.3 (candor toward the tribunal), the answer must be yes regardless of what tools were used.
Final Thoughts For Now
Some would fervently argue that such arduous tightening would be exceedingly harsh and unfair to lawyers. You are making a mountain out of a molehill. Give lawyers a break. Others would respond that leniency has already partially opened Pandora's box. Close it before things get utterly out of hand. Tough love is needed. Allowing leeway has been compassionate, but the result is an encouragement of unsavory behavior.
As the adage goes, spare the rod, spoil the lawyer.
Famed motivational speaker Jim Rohn made this notable remark about excuses: “Excuses are the nails used to build a house of failure.” Using AI as an excuse might seem reasonable, but it is a house of cards. As Rohn also noted: “If you really want to do something, you’ll find a way. If you don’t, you’ll find an excuse.”