
1,227 Fabricated Citations and Counting: Inside the AI Hallucination Crisis Reshaping Legal Practice

April 2, 2026

From a DOJ attorney fired for filing AI-fabricated quotes to the Sixth Circuit imposing $30,000 in sanctions, courts are confronting an epidemic of AI-generated fake legal authorities. With over 1,200 documented cases worldwide, here's what every litigation team needs to know -- and do -- right now.

By Sid Newby | April 2026

In more than twenty years of building litigation technology, I've seen every manner of filing error: transposed exhibit numbers, wrong court rules cited, outdated case law submitted as controlling authority. These mistakes are as old as legal practice itself, and the profession has always had mechanisms -- from Rule 11 sanctions to bar discipline -- to address them. But what we're witnessing right now is something categorically different. We're watching an entirely new class of filing error emerge at a scale and velocity the courts have never encountered: AI-generated legal authorities that look perfect on the page and do not exist anywhere in the law. And the crisis is accelerating far faster than most practitioners realize.


The scale of the problem: 1,227 cases and counting

The most comprehensive effort to track AI hallucinations in court filings is the database maintained by Damien Charlotin, a research fellow at HEC Paris's Smart Law Hub. As of early 2026, his database has cataloged 1,227 cases globally in which generative AI produced hallucinated content that was submitted to courts.[1]

The numbers are staggering in their specificity; Table 1 breaks them down.

And these are only the cases that were caught. The actual incidence is almost certainly much higher -- courts don't routinely verify every citation in every filing, and many fabricated authorities may pass undetected, particularly in cases that settle or where the opposing party lacks the resources to check.

The trajectory is what should concern every litigation professional. In January 2026, the Social Science Space reported that the database had reached 719 incidents.[2] In the roughly three months since, that number has grown by more than 500 cases -- an acceleration rate of approximately five to six new documented cases per day.

Metric                              Count
Total global cases                  1,227
United States                       811
Fabricated case citations           1,022
False quotes from real cases        323
Misrepresented holdings             492
Lawyers implicated                  470
Pro se litigants implicated         725
Largest single monetary penalty     $109,700

Table 1: AI hallucination cases by the numbers. Source: Damien Charlotin's AI Hallucination Cases Database.[1]


Figure 1: The accelerating growth of documented AI hallucination cases in court filings worldwide.


Anatomy of a hallucination: what goes wrong

To understand why this crisis is so insidious, you need to understand the taxonomy of what AI gets wrong in legal filings. The Charlotin database classifies incidents into several distinct categories, and the distinctions matter because they require different detection strategies.[1]

Fabricated citations

The most common and most dangerous category. The AI generates a case name, reporter citation, and year that look authentic but correspond to no actual case. The format is perfect -- "Smith v. Jones, 547 F.3d 892 (7th Cir. 2008)" -- complete with a plausible court, volume, and page number. But the case simply does not exist. This accounts for 1,022 of the 1,227 documented incidents.

False quotes from real cases

The second category is more subtle and arguably more insidious: the AI identifies a real case but fabricates quotations or holdings that the case never contained. An attorney checking the citation might verify that the case exists and move on, never realizing that the specific language attributed to it was invented. The database documents 323 instances of this type.

Misrepresented holdings

In 492 cases, the AI cited real cases for propositions they do not actually support. The case is real. The quote may even be real. But the legal principle the AI claims the case stands for is wrong. This is the category most likely to evade detection, because it requires not just citation checking but substantive legal analysis of the cited authority.
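Fabricated citations are syntactically perfect, which means the first line of defense is mechanical: pull every citation-shaped string out of a draft and verify each one against a primary source. A minimal sketch of that extraction step (the regex covers only a narrow slice of federal reporter formats, and the function names are illustrative, not any product's API):

```python
import re

# One uppercase-led word, optionally followed by more ("Smith", "Van Irion").
# Deliberately narrow: multi-word parties like "City of Athens" won't match.
PARTY = r"[A-Z][A-Za-z.'&\-]*(?:\s+[A-Z][A-Za-z.'&\-]*)*"

# Matches citations shaped like "Smith v. Jones, 547 F.3d 892 (7th Cir. 2008)".
CITATION_RE = re.compile(
    rf"(?P<case>{PARTY}\s+v\.\s+{PARTY}),\s+"
    r"(?P<volume>\d+)\s+(?P<reporter>F\.(?:2d|3d|4th)|U\.S\.|S\.\s?Ct\.)\s+"
    r"(?P<page>\d+)\s+\((?P<court>[^)]*?)\s*(?P<year>\d{4})\)"
)

def extract_citations(text: str) -> list[dict]:
    """Pull citation-shaped strings out of a draft for verification.

    A match proves only that a citation is well-formed. Every hit still
    has to be located in a primary-source database, because fabricated
    citations are syntactically perfect by design.
    """
    return [m.groupdict() for m in CITATION_RE.finditer(text)]

draft = "As held in Smith v. Jones, 547 F.3d 892 (7th Cir. 2008), the claim fails."
for c in extract_citations(draft):
    print(c["case"], "->", f'{c["volume"]} {c["reporter"]} {c["page"]} ({c["year"]})')
```

The point of the docstring bears repeating: pattern matching can surface candidates, but it can never clear them, since hallucinated citations follow the learned format exactly.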

Why do large language models hallucinate?

Large language models generate text by predicting the most statistically likely next token in a sequence. They have no concept of "truth" or "accuracy" -- they're producing patterns that look like legal citations because they were trained on millions of documents containing legal citations. When the model encounters a gap in its training data or a prompt that pushes it beyond its reliable knowledge, it doesn't say "I don't know." It generates a plausible-looking citation that follows the syntactic patterns it has learned. The result is a fabrication indistinguishable from genuine legal authority to anyone who doesn't independently verify it.


The cases that changed everything

While the Charlotin database documents over a thousand incidents, several cases in late 2025 and early 2026 have crystallized the crisis in ways that are reshaping court rules, bar ethics guidance, and litigation practice.

The Sixth Circuit's $30,000 message

In March 2026, the Sixth Circuit Court of Appeals issued what may be the most significant sanctions order yet in an AI hallucination case. In Whiting v. City of Athens, Tennessee, a three-judge panel -- Judges Jane B. Stranch, John K. Bush, and Eric E. Murphy -- sanctioned Tennessee attorneys Van R. Irion and Russ Egli for submitting briefs containing more than two dozen fake or misrepresented citations across three consolidated appeals.[3]

The sanctions were the stiffest the court could impose: payment of the opposing parties' attorney fees, double costs, and a $15,000 fine for each attorney.[5]

The court's language was unambiguous: "A fake opinion is not existing law, and citation to a fake opinion does not provide a non-frivolous ground for extending, modifying, or reversing existing law."[3]

What makes this case particularly significant is the court's explicit acknowledgment that prior, lighter sanctions in other cases had failed to deter the behavior. The Sixth Circuit was, in the words of one commentator, "sending the loudest message possible that this type of conduct is not allowed in our court or any other."[4]

The problems in the Whiting briefs spanned every category in the taxonomy above: fabricated cases, invented quotations, and misrepresented holdings of real authorities.

Both attorneys had prior disciplinary history for lack of candor, and their responses to the court's show cause order were described as demonstrating a "stunning lack of respect for this court."[5]

The DOJ attorney who got fired

If the Sixth Circuit case demonstrated the judiciary's willingness to impose severe sanctions, the case of Assistant U.S. Attorney Rudy Renfer demonstrated that even the federal government is not immune.

In late 2025, Derence Fivehouse -- a retired Air Force colonel, longtime military lawyer, and pro se plaintiff suing over TRICARE coverage limits for GLP-1 medications -- noticed something wrong with a government brief filed in the Eastern District of North Carolina. The language in the filing "did not read like the sources it cited." When Fivehouse checked the authorities, he found that quotations attributed to appellate decisions and the Code of Federal Regulations did not appear anywhere in those sources.[6]

Magistrate Judge Robert T. Numbers II issued a sharply worded order finding fabricated quotations, misstatements of case holdings, and false or misleading statements about how the errors occurred. Renfer initially claimed he had accidentally filed an unfinished draft, but subsequently admitted he had lost a prior version of the filing, "felt panicked," had AI rewrite it, and filed it thinking he had reviewed it.[6]

The consequences were swift and severe. The Department of Justice fired Renfer -- he was terminated before he could resign or retire. The U.S. Attorney for the Eastern District of North Carolina confirmed the termination, and the DOJ referred the matter to the Office of Professional Responsibility.[7]

The Renfer case carries a particular sting for the legal profession because it wasn't a solo practitioner or a small-firm attorney cutting corners. It was an Assistant United States Attorney -- a federal prosecutor representing the government of the United States -- who submitted fabricated legal authorities in a case against a pro se plaintiff. The irony that the fabrication was caught by the unrepresented party rather than by government quality controls has not been lost on commentators.


Who's getting caught: the data tells a troubling story

A landmark October 2025 analysis by Stanford's Cyberlaw Center examined 114 documented cases of AI-tainted filings and produced findings that should concern anyone who cares about access to justice.[8]

It's overwhelmingly a small-firm problem

The firm size distribution is stark:

Firm Size               Percentage
Solo practitioners      50.4%
2-25 attorneys          39.5%
Combined small firms    89.9%
26-100 attorneys        3.1%
201-500 attorneys       2.3%
1,001+ attorneys        1.6%
Government entities     1.6%

Table 2: AI hallucination incidents by firm size. Source: Stanford Cyberlaw Center.[8]

Nearly 90% of documented incidents involve solo practitioners or firms with fewer than 25 attorneys. This isn't because large firms don't use AI -- they do, increasingly. It's because large firms have the resources for AI governance committees, verification workflows, training programs, and quality control systems that catch errors before they reach a courtroom.

Plaintiffs are more vulnerable

Plaintiffs' counsel were involved in 56% of cases, compared with 31% for defense counsel. This asymmetry likely reflects the economics of plaintiff-side practice, where solo practitioners and small firms are more likely to represent plaintiffs on contingency, with tighter budgets and fewer support staff for citation verification.[8]

The AI tools involved

In the 34 cases where the specific AI tool was identified, general-purpose chatbots led the list by a wide margin.

The dominance of general-purpose chatbots (ChatGPT, Claude) rather than legal-specific tools underscores a key point: the crisis is driven not by flaws in legal AI products specifically, but by attorneys using general-purpose AI tools for legal research without understanding their limitations.[8]

The access-to-justice dimension

Perhaps the most troubling finding is that 59% of all documented hallucination cases involve pro se litigants -- people representing themselves without an attorney.[1] These are individuals who turned to AI precisely because they couldn't afford legal representation, only to have the technology generate fake authorities that undermined their cases.

This creates a cruel paradox at the heart of legal AI adoption: the people who most need AI's help with legal research are the people least equipped to verify its outputs, and thus the people most likely to be harmed by its failures.


Figure 2: The disproportionate impact of AI hallucinations on pro se litigants and small-firm practitioners.


The courts respond: 300+ standing orders and a patchwork of rules

The judicial response to the AI hallucination crisis has been swift but fragmented. As of early 2026, more than 300 federal judges have adopted standing orders or local rules specifically addressing generative AI use in court filings.[9]

The spectrum of judicial approaches

Court responses fall broadly into three categories:

Disclosure-only orders require parties to disclose when generative AI was used for legal research or drafting and to identify the specific tools used. Some courts require granular identification -- "ChatGPT-4" or "Claude," not just "AI software."

Disclosure plus certification orders require both disclosure of AI use and a certification that every citation has been independently verified against primary sources. This is the model adopted by Florida's largest judicial circuits, which now require attorneys and self-represented litigants alike to certify that "all citations and facts were independently verified."[10]

Verification mandates go further, requiring affirmative statements that all authorities exist and accurately represent the law, regardless of whether AI was used. This approach recognizes that the duty to verify citations isn't new -- it's always been part of an attorney's obligation under Rule 11.

The hyperlink rule proposal

One of the most interesting proposals to emerge from the crisis is the mandatory hyperlink rule advocated by the National Law Review. Under this proposal, every case citation in a court filing would be required to include a hyperlink to the actual opinion in a verified legal database.[11] The logic is elegant: a fabricated case has no URL to link to, so the requirement would catch hallucinated citations at the drafting stage rather than after filing.
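The idea is easy to prototype. Here is a toy enforcement check, assuming (purely for illustration) that draft citations carry markdown-style links; any citation string left over after removing properly linked ones would fail the proposed rule:

```python
import re

# Citation core: volume, federal reporter, page (a narrow illustrative subset).
CITE = r"\d+\s+(?:F\.(?:2d|3d|4th)|U\.S\.)\s+\d+"

# A citation wrapped in a markdown-style link: "[... 547 F.3d 892 ...](https://...)".
LINKED = re.compile(rf"\[[^\]]*{CITE}[^\]]*\]\(https?://[^)\s]+\)")
BARE = re.compile(CITE)

def unlinked_citations(draft: str) -> list[str]:
    """Return citation strings that lack an accompanying hyperlink."""
    # Strip properly linked citations; whatever citation strings remain
    # are bare, and a fabricated case can never supply a working link.
    return BARE.findall(LINKED.sub("", draft))

draft = (
    "[Smith v. Jones, 547 F.3d 892 (7th Cir. 2008)](https://example.com/op.pdf) "
    "controls, but see Doe v. Roe, 123 F.3d 456 (9th Cir. 1997)."
)
print(unlinked_citations(draft))  # only the unlinked Doe citation survives
```

A production check would also have to confirm that each URL resolves to the cited opinion in a verified database, not merely that a link is present.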

Judges are using AI too

In a development that adds complexity to the regulatory picture, a 2026 random-sample survey of federal judges found that more than 60% of responding judges used at least one AI tool in judicial work.[9] This creates an interesting dynamic: the judiciary is simultaneously regulating attorney AI use while increasingly relying on AI itself. The long-term implications of this dual role -- as both regulator and user of AI -- are still unfolding.


What the hallucination crisis means for eDiscovery and document review

For litigation teams that rely on AI-assisted document review -- which, in 2026, encompasses virtually every major eDiscovery workflow -- the hallucination crisis carries lessons that extend well beyond citation checking.

The verification imperative applies everywhere

If a large language model can fabricate a case citation that looks indistinguishable from a real one, it can just as easily fabricate summaries, quotations, or characterizations of the documents it reviews.

The hallucination problem is not limited to research and drafting. It is inherent to the technology itself. Every AI-assisted output in every stage of the EDRM lifecycle requires a verification framework proportional to the consequences of error.
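One way to make "verification proportional to consequences" concrete is a written policy that maps each AI-assisted output type to a minimum human check. The stages and tiers below are illustrative assumptions, not an established standard:

```python
# Each AI-assisted output type maps to (risk tier, minimum human check).
# Stages and tiers are illustrative, not an industry standard.
VERIFICATION_POLICY = {
    "citation_in_filing":  ("critical", "verify against primary source"),
    "brief_drafting":      ("critical", "attorney line-by-line review"),
    "deposition_summary":  ("high", "spot-check against the transcript"),
    "doc_review_ranking":  ("moderate", "statistical sampling of outputs"),
    "internal_brainstorm": ("low", "no formal check required"),
}

def required_check(stage: str) -> str:
    """Look up the minimum verification step for a workflow stage."""
    tier, check = VERIFICATION_POLICY[stage]
    return f"[{tier}] {check}"

print(required_check("citation_in_filing"))  # -> [critical] verify against primary source
```

The value of writing the policy down is less the lookup table itself than the fact that every output type has been consciously assigned a tier before anything reaches a court.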

TAR is different from generative AI -- but the lesson is the same

It's worth noting that Technology-Assisted Review (TAR) -- the supervised machine learning approach to document review that has been validated by courts since Da Silva Moore v. Publicis Groupe (2012) -- operates on a fundamentally different architecture than generative AI. TAR systems don't generate text; they classify documents based on patterns learned from attorney coding decisions. They produce relevance scores, not prose.

But the hallucination crisis has heightened judicial and client scrutiny of all AI in litigation, including TAR. Litigation teams that can demonstrate rigorous validation protocols, statistical sampling, and quality control workflows for their AI-assisted review will have a significant competitive advantage over those that treat AI as a black box.
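Those validation protocols are statistical at their core. A simplified elusion-style recall estimate is sketched below; sample sizes, seeds, and acceptance thresholds in a real matter are negotiated and documented, and this code assumes nothing about any particular review platform:

```python
import random

def estimate_recall(found_relevant: int, discard_pile: list[bool],
                    sample_size: int = 400, seed: int = 7) -> float:
    """Estimate recall by sampling the discard (not-produced) pile.

    discard_pile holds ground-truth relevance per unreviewed document,
    known here only because the example is synthetic; in practice the
    drawn sample is what receives human review.
    """
    rng = random.Random(seed)
    sample = rng.sample(discard_pile, min(sample_size, len(discard_pile)))
    elusion_rate = sum(sample) / len(sample)        # fraction relevant in sample
    est_missed = elusion_rate * len(discard_pile)   # projected relevant docs missed
    return found_relevant / (found_relevant + est_missed)

# Synthetic matter: 9,000 relevant docs found by review; a 100,000-doc
# discard pile seeded with a 1% residual relevance rate.
discard = [True] * 1_000 + [False] * 99_000
print(f"estimated recall: {estimate_recall(9_000, discard):.1%}")
```

The discipline matters more than the arithmetic: a team that can produce this kind of sampling record on demand is in a very different posture before a skeptical judge than one that cannot.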

Governance frameworks are no longer optional

The firms and corporate legal departments that are navigating the hallucination crisis most effectively share a common characteristic: they had AI governance frameworks in place before the crisis hit. These frameworks typically pair approved-tool policies with mandatory verification steps, documented review workflows, and recurring training.


The path forward: from crisis to competence

The AI hallucination crisis is not an argument against using AI in legal practice. The technology's benefits -- in eDiscovery, in research, in drafting, in case analysis -- are too substantial to abandon. But it is a decisive argument for using AI responsibly, with verification systems, governance frameworks, and a clear-eyed understanding of the technology's limitations.

What litigation teams should do now

Immediately: verify every citation in pending filings against primary sources, and inventory where generative AI is already being used across your matters.

Within 90 days: adopt a written AI policy that names approved tools, requires independent verification of all AI-assisted output, and trains every timekeeper on the technology's limitations.

Strategically: build the validation, sampling, and quality control protocols that let you demonstrate defensible AI use to courts, clients, and regulators.

The bigger picture

Here's what I keep coming back to after twenty years in this industry: the AI hallucination crisis is, at its core, a quality control problem. And quality control is something the litigation technology industry should be very good at. We've spent decades building defensible processes for document collection, processing, review, and production. We have statistical validation methodologies, sampling protocols, and quality metrics that are accepted by courts worldwide.

The challenge now is to apply that same rigor -- that same insistence on defensibility, transparency, and verifiability -- to the AI tools that are rapidly becoming integral to every stage of litigation support. The firms and technology providers that figure this out first won't just avoid sanctions. They'll build the trust that clients, courts, and regulators are demanding.

The 1,227 cases in Damien Charlotin's database aren't just cautionary tales. They're the growing pains of an industry learning to wield genuinely transformative technology responsibly. The question isn't whether AI belongs in legal practice. It's whether we're willing to do the hard work of using it right.


[1] Damien Charlotin, "AI Hallucination Cases Database," HEC Paris Smart Law Hub.
[2] Social Science Space, "A Status Check on Hallucinated Case Law Incidents" (January 2026).
[3] Robert Ambrogi, "Sixth Circuit Slaps Steep Sanctions on Two Lawyers for Fake Citations and Misrepresentations in Appellate Briefs," LawNext (March 2026).
[4] National Law Review, "Sixth Circuit Sanctions Attorneys for Fake Citations -- What Does This Mean for Use of AI?" (March 2026).
[5] Eugene Volokh, "Lawyers Citing Nonexistent Cases Ordered to Pay Opponents' Attorney Fees, Double Costs, $15K Fine Each," Reason (March 2026).
[6] FindLaw, "DOJ Attorney's AI-Generated Brief Sparks Sanctions Threat After Pro Se Plaintiff Uncovers Fabricated Quotes."
[7] ABA Journal, "Federal Prosecutor Resigns After AI Errors Found in Court Filings."
[8] Stanford Cyberlaw Center, "Who's Submitting AI-Tainted Filings in Court?" (October 2025).
[9] Bloomberg Law, "Federal Court Judicial Standing Orders on Artificial Intelligence."
[10] Steno Imperium, "The New AI Disclosure Mandate in Florida Courts -- Transparency, Accountability, and the Future of Legal Practice" (February 2026).
[11] National Law Review, "Preventing Fabricated AI Legal Authorities: The Case for a Mandatory Hyperlink Rule."
[12] eDiscovery Today, "Vast Conspiracy Accusations Lead to Severe Sanctions in AI Hallucinations Case" (March 2026).