
From the Bench to the Brief: 61% of Federal Judges Are Using AI -- and It's Reshaping What Courts Expect from Litigators

April 2, 2026

A landmark Northwestern-NYC Bar study reveals that most federal judges have adopted AI tools, yet training gaps and policy fragmentation persist. Meanwhile, courts are splitting on when litigators can -- and cannot -- use AI in discovery. Here's what the data means for every litigation team.

By Sid Newby | April 2026

In more than two decades of building litigation technology systems, I've learned that the judiciary sets the tempo for every meaningful shift in how we practice law. When judges started expecting electronic filings, firms digitized. When courts demanded Bates-numbered productions, the industry built processing pipelines. And now, with a landmark study revealing that a majority of federal judges have quietly adopted AI tools in their own chambers, the implications for every litigator, every eDiscovery team, and every trial support operation are profound -- and largely unexamined. The bench isn't waiting for the bar to figure out AI. The bench is already using it.


The survey that changed the conversation

On March 27, 2026, researchers at Northwestern University and the New York City Bar Association published what may be the most consequential empirical study of judicial technology adoption in a generation. Their paper, "Artificial Intelligence in Federal Courts: A Random-Sample Survey of Judges," appeared in The Sedona Conference Journal, Volume 27, and it delivered a finding that should command the attention of every litigation professional in the country.[1]

61.6% of responding federal judges reported using at least one AI tool in their judicial work.[1]

That number alone is remarkable. But the details beneath it reveal a judiciary in transition -- adopting AI tools faster than many assumed, yet doing so without consistent training, without uniform policies, and without clear guidance on what litigants should know about how their cases are being handled.

The study surveyed 502 randomly selected federal judges -- spanning bankruptcy, magistrate, district court, and court of appeals judges -- during a three-week window in December 2025. Of those, 112 responded, yielding a 22.3% response rate with a margin of error of approximately ±9% at the 95% confidence level.[1][2]
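The reported figure follows from the standard margin-of-error formula for a sample proportion. A quick sanity check, using the conservative p = 0.5 and ignoring the finite-population correction (which would shrink the figure slightly):

```python
import math

# Recompute the study's reported margin of error from its own numbers:
# n = 112 respondents, 95% confidence (z ≈ 1.96), conservative p = 0.5.
def margin_of_error(n: int, z: float = 1.96, p: float = 0.5) -> float:
    """Half-width of the confidence interval for a sample proportion."""
    return z * math.sqrt(p * (1 - p) / n)

print(f"±{margin_of_error(112):.1%}")  # ±9.3%
```

The result, roughly ±9.3%, matches the study's "approximately ±9%."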

The researchers -- led by Daniel Linna of Northwestern's Pritzker School of Law and McCormick School of Engineering -- deliberately designed the study to capture what's actually happening in chambers, not what judges think should be happening. And what's actually happening is more nuanced than any headline can capture.


Figure 1: Federal judges' AI tool usage frequency, from the Northwestern-NYC Bar Association survey of 112 responding judges (December 2025).


Inside the numbers: who's using what, and how often

The headline finding -- 61.6% adoption -- masks significant variation in frequency, tool selection, and application. Understanding these patterns is essential for litigation teams trying to calibrate their own AI strategies.

Usage frequency

While a majority of judges have used AI, only a fraction have made it a regular part of their workflow:[1]

| Frequency | Percentage |
| --- | --- |
| Daily | 5.4% |
| Weekly | 17.0% |
| Monthly | 19.6% |
| Rarely | 19.6% |
| Never | 38.4% |

Table 1: AI usage frequency among federal judges. Source: Northwestern-NYC Bar Association Survey, 2026.[1]

Only 22.4% of judges use AI tools on a daily or weekly basis. The largest single group -- 38.4% -- has never used AI tools at all. This suggests that while AI has reached the federal judiciary, it hasn't yet become infrastructure. It's more akin to an experimental tool that a meaningful minority has integrated and a larger group has tried but not routinized.

Tool preferences

The tool preferences are particularly revealing for the legal technology industry:[1]


Figure 2: Federal judges' AI tool preferences, showing the breakdown between legal research platforms, general-purpose LLMs, and other AI tools.[1]

The dominance of Westlaw AI-Assisted Research tells an important story. Judges are gravitating toward AI tools embedded in platforms they already trust -- platforms with established reputations for citation accuracy and legal authority. The relatively high ChatGPT adoption (28.6%) suggests that judges are also experimenting with general-purpose models, likely for tasks where legal-specific precision is less critical.

Primary use cases

What judges are actually doing with AI tools reveals the technology's current role in judicial work: legal research leads at roughly 30%, followed by document review and summarization at 15.5%, with drafting of filed documents a distant last at 1.8%.[1][2]

The fact that legal research dominates is unsurprising -- it's the use case where AI tools deliver the most immediate value with the lowest risk. But the document review and summarization numbers are significant. When 15.5% of judges are using AI to review documents, they're developing firsthand experience with the capabilities and limitations of AI-assisted review -- the very technology that litigators are increasingly deploying in eDiscovery workflows.

And the near-absence of AI in drafting filed documents (1.8%) reflects a judiciary that is cautious about placing AI-generated text into the formal record. This caution is well-founded, but it may also be temporary.


The training gap: a judiciary adopting faster than it's learning

Perhaps the most concerning finding in the Northwestern study is the training gap. Nearly half of surveyed judges -- 45.5% -- reported that AI training had not been provided by court administration. An additional 15.7% were unsure whether training had been offered.[1]

This means that more than 60% of federal judges have either received no formal AI training from court administration or don't know whether any was offered, even as a majority of them are using AI tools in their judicial work. The gap between adoption and education is striking -- and it carries real risks.

Among the 38.9% of judges who recalled training being offered, a significant majority (73.8%) attended.[1] The appetite for education is there. The infrastructure to deliver it is not.

The policy vacuum

The policy landscape is equally fragmented:[1]

| Chamber Policy | Percentage |
| --- | --- |
| Permit and encourage AI use | 7.4% |
| Permit AI use | 25.9% |
| Formally prohibit AI use | 20.4% |
| Discourage without formal prohibition | 17.6% |
| No official policy | 24.1% |

Table 2: AI policies in federal judicial chambers. Source: Northwestern-NYC Bar Association Survey, 2026.[1]

Nearly a quarter of chambers have no official policy on AI use. Combined with the 17.6% that discourage but don't formally prohibit, this means more than 40% of chambers lack clear AI governance. For litigators appearing before these judges, the uncertainty is palpable: you don't know whether the judge reviewing your motion used AI to research the legal issues, summarize the record, or draft preliminary analysis -- and there's no policy requiring disclosure.


The judicial sentiment divide

The Northwestern study captured something equally important: how judges feel about AI. The results reveal a judiciary almost perfectly divided between optimism and concern.[1]

The near-even split -- roughly 43% optimistic versus 42% concerned -- mirrors the broader professional divide within the legal industry. But several qualitative responses from judges bring the tension to life.

One judge reported discovering the hallucination problem firsthand: "My law clerk wrote a memo, then out of curiosity asked AI to write one. Of 11 cases AI cited, 10 were fake."[1]

Another judge saw transformative potential: "Summarizing trial transcripts and voluminous documents is a huge time saver."[1]

And a third captured the anxiety many feel: "Reports of zombie cases and AI conjuring law or facts is terrifying."[1]

Bankruptcy judges lead the way

One of the study's most interesting findings is the variation by judge type. Bankruptcy judges are significantly more likely to be regular AI users, with 32.2% reporting daily or weekly use, compared to 21.9% of magistrate judges and only 13.9% of district court judges.[1]

This pattern makes sense. Bankruptcy judges handle high volumes of document-intensive cases with structured legal issues -- exactly the workflow profile where AI tools deliver the most value. But it also suggests that as other judges encounter increasingly complex, data-heavy dockets, their adoption rates will likely follow the bankruptcy bench's trajectory.


From the bench to the courtroom: the standing order explosion

While judges are adopting AI in chambers, they're simultaneously creating a thicket of rules governing how litigators use it. Since Judge Brantley Starr of the Northern District of Texas issued one of the earliest AI-related standing orders, more than 300 federal judges have adopted some form of AI disclosure or certification requirement.[3][4]

The requirements vary dramatically:

Florida's aggressive approach

In January 2026, Florida's two largest judicial circuits issued sweeping AI disclosure orders. The Eleventh Judicial Circuit (Miami-Dade County) and the Seventeenth Judicial Circuit (Broward County) now require that any attorney or self-represented litigant using generative AI in preparing pleadings, motions, memoranda, responses, proposed orders, or other court records must disclose that use on the face of the filing and certify that its citations and contents have been independently verified.[5]

Notably, these requirements extend beyond briefs to include discovery materials such as deposition summaries -- a detail with significant implications for eDiscovery professionals.[5]

The mandatory hyperlink proposal

In what may be the most creative procedural response to the AI hallucination problem, Oliver Roberts, Editor-in-Chief of the National Law Review's AI & the Law Newsletter, has proposed a "Hyperlink Rule" requiring all electronic court filings to include functional hyperlinks connecting legal citations directly to authoritative sources.[6]

The logic is elegant: fabricated cases cannot be hyperlinked to real court opinions. A mandatory hyperlink requirement would make AI-generated hallucinations self-revealing at the filing stage, without requiring disclosure of AI use itself. The proposal specifies that links must connect to official government sources or "widely used and reputable legal research databases reasonably accessible to the Court," with exceptions for pro se litigants and genuine hardship cases.[6]

The proposal has gained traction. The New York Commercial Division adopted similar requirements as early as 2020, and several federal judges are reportedly evaluating hyperlink mandates as an alternative to the proliferating (and inconsistent) standing orders on AI disclosure.[6]
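A minimal sketch of how a hyperlink rule could be checked mechanically at the filing stage. The link format (markdown-style) and the allowlist of authoritative domains below are my assumptions for illustration, not details drawn from the proposal; detecting citations that carry no hyperlink at all would need a separate citation-extraction pass, omitted here:

```python
import re

# Hypothetical allowlist of authoritative sources; these domains are
# illustrative, not the proposal's actual list.
AUTHORITATIVE_DOMAINS = {"govinfo.gov", "uscourts.gov", "supremecourt.gov"}

# Matches markdown-style links: [cite text](https://host/path)
LINK_RE = re.compile(r"\[([^\]]+)\]\((https?://([^/\s)]+)[^)\s]*)\)")

def check_citation_links(filing_text: str) -> list[str]:
    """Return linked citations whose targets fall outside the allowlist."""
    problems = []
    for match in LINK_RE.finditer(filing_text):
        cite, url, host = match.group(1), match.group(2), match.group(3)
        host = host.split(":")[0].lower()  # strip any port
        if not any(host == d or host.endswith("." + d)
                   for d in AUTHORITATIVE_DOMAINS):
            problems.append(f"{cite} -> {url}")
    return problems
```

A citation pointing at an allowlisted court source passes; a link to an arbitrary domain -- the signature of a fabricated authority -- is flagged for review.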


Jeffries v. Harcros: when AI meets the protective order

While the Northwestern study captures judicial adoption in chambers, a March 2026 ruling from the District of Kansas illustrates how courts are drawing lines around AI use in litigation workflows.

In Jeffries v. Harcros Chemicals Inc. (Nos. 25-2352-KHV-ADM, 25-2569-KHV-ADM, D. Kan. Mar. 25, 2026), Magistrate Judge Angel D. Mitchell confronted a question with direct implications for every eDiscovery team using AI-assisted review: can litigants upload discovery materials into open-source AI tools?[7]

The plaintiffs argued that restricting AI tool usage would increase litigation costs. The court was unpersuaded.

Judge Mitchell granted the defendants' motion to amend the protective order, prohibiting parties from uploading discovery materials into open generative AI systems, reasoning that such systems put confidential discovery materials outside the parties' control.[7]

The Jeffries ruling draws a critical distinction that every litigation team should internalize: closed, proprietary AI systems are treated differently from open, public platforms. Using RelativityOne's aiR for Review or Everlaw's AI-assisted analysis within a controlled, access-restricted environment is fundamentally different from uploading production documents to ChatGPT -- and courts are starting to formalize that distinction in protective orders.[7]

For eDiscovery professionals, this means that the choice of AI platform isn't just a workflow decision -- it's a defensibility decision.
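One way to operationalize the open/closed distinction is a simple gate in the review workflow. The registry below is a hypothetical sketch: the platform identifiers and their classifications are illustrative placeholders, not assertions about any vendor's actual terms:

```python
from enum import Enum

class PlatformClass(Enum):
    CLOSED = "closed"  # access-restricted, contractually controlled
    OPEN = "open"      # public platform that may retain or train on inputs

# Hypothetical registry; classifications must be verified against each
# vendor's actual terms before relying on them.
PLATFORM_REGISTRY = {
    "relativityone-air": PlatformClass.CLOSED,
    "everlaw-ai": PlatformClass.CLOSED,
    "public-chatbot": PlatformClass.OPEN,
}

def authorize_upload(platform: str, is_discovery_material: bool) -> bool:
    """Permit uploads of discovery material to closed platforms only,
    mirroring the distinction drawn in Jeffries. Unknown platforms
    fail closed."""
    cls = PLATFORM_REGISTRY.get(platform)
    if cls is None:
        return False
    if is_discovery_material and cls is PlatformClass.OPEN:
        return False
    return True
```

Failing closed on unrecognized platforms keeps the defensibility decision explicit: a new tool must be classified before discovery materials can touch it.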


The adoption gap: bench vs. bar vs. eDiscovery

The Northwestern judicial survey arrives alongside other data that paints a picture of an industry in various stages of AI adoption -- with significant gaps between what different constituencies are actually doing.

The eDiscovery practitioner view

eDiscovery Today's 2026 State of the Industry Report, surveying 559 respondents, found that 60.7% of practitioners expect LLMs and GenAI to be transformative by the end of 2026. But only 17.7% are actually using GenAI in all or most of their cases.[8]

The gap between expectation and implementation is striking. As the report noted, the eDiscovery market is "not moving uniformly toward AI -- it is splitting," with practitioners who have adopted GenAI-assisted review already operating in a different pricing and workflow environment than those who haven't.[8]

The EDRM pricing survey

The Winter 2026 eDiscovery Pricing Survey, conducted by ComplexDiscovery in partnership with EDRM, confirmed this bifurcation. Law firms (43.4% of respondents) reported that generative AI-assisted review pricing models remain nascent, with outcome-based pricing still in early development. The survey captured a market at an inflection point where AI capability has outpaced the commercial frameworks needed to deliver it.[9]

Connecting the dots

Consider these three data points together:

| Constituency | AI Adoption Rate | Primary Use |
| --- | --- | --- |
| Federal judges | 61.6% have used AI | Legal research (30%) |
| eDiscovery practitioners | 17.7% using in most cases | Document review, ECA |
| Federal chambers (daily/weekly) | 22.4% | Research and summarization |

Table 3: AI adoption rates across legal constituencies in 2026. Sources: Northwestern-NYC Bar Survey, eDiscovery Today State of the Industry Report.[1][8]

Judges are adopting AI for research and summarization at a rate that exceeds eDiscovery practitioners' adoption of AI for review. This creates an asymmetry: the people evaluating your work product may be more experienced with AI than the people producing it.

For litigation teams, this has practical implications. A judge who has personally used AI to summarize a trial transcript understands both its strengths and its failure modes. That judge will have informed expectations about what AI-assisted review can and cannot reliably do. And that judge will have little patience for litigators who either overstate AI's capabilities or refuse to engage with the technology at all.


What the EDRM GenAI survey reveals about the practice gap

The EDRM GenAI Survey, whose results were presented in a March 30, 2026 webinar, added granularity to the adoption picture. When asked whether GenAI is already transformative in their practice, 23.3% of respondents said yes, while another 37.4% expect it to reach that status by the end of 2026.[8]

But the gap between perception and practice is significant. Document review led as the area with the biggest AI impact at 63% of responses, yet nearly a third of attendees pointed to early case assessment (ECA) and case strategy -- a signal that adoption is moving left in the discovery lifecycle, from review toward the earlier stages of case development.[10]

This leftward shift matters because it suggests AI is beginning to influence how litigation teams think about cases, not just how they process documents. When AI tools help identify key custodians, flag potentially privileged materials, or surface case themes during ECA, they're shaping litigation strategy itself -- not merely accelerating a production workflow.


The deepfake counterpoint: courts wrestling with AI-generated evidence

While most of the judicial AI conversation focuses on tools for legal work, courts are simultaneously confronting AI on the evidentiary side. On March 27, 2026, eDiscovery Today reported on a ruling where a court authenticated an audio recording while rejecting deepfake claims raised by the opposing party.[11]

This ruling sits at the intersection of two trends: the increasing sophistication of AI-generated content and the courts' evolving framework for evaluating its authenticity. As deepfake technology improves, litigators will face growing challenges in both presenting and challenging digital evidence. Courts that have personal experience using AI -- that 61.6% -- may be better equipped to evaluate these claims than courts that have never engaged with the technology.

The evidentiary implications extend to eDiscovery workflows. If AI-generated content is increasingly present in corporate communications (meeting summaries, email drafts, chatbot interactions), eDiscovery teams need protocols for identifying, flagging, and handling AI-generated materials during document review. The question is no longer whether AI-generated content will appear in productions -- it's whether your review workflow can distinguish it from human-authored material and assess its reliability.


Practical implications: what litigation teams should do now

The convergence of judicial AI adoption, proliferating standing orders, and evolving case law creates a clear imperative for litigation teams. Here's a practical framework for responding.

1. Know your judge's AI posture

Before every filing, check whether the assigned judge has issued a standing order on AI use, imposed disclosure or certification requirements, or addressed AI in local rules or chambers guidelines.[3][4]

Resources like Bloomberg Law's AI Standing Order Tracker, Ropes & Gray's AI Court Order Tracker, and Law360 Pulse's AI Tracker maintain searchable databases of judicial AI orders.[3] This research should be as routine as reviewing a judge's past opinions on summary judgment standards.

2. Adopt a disclosure-by-default policy

Rather than navigating the patchwork of 300+ standing orders, consider adopting a firm-wide disclosure-by-default policy for AI use in court filings. The Florida circuits' approach -- disclosure on the face of the filing plus certification of independent verification -- provides a reasonable template.[5]

The practical benefit is consistency. A disclosure-by-default policy eliminates the risk of appearing before a judge whose standing order you missed, and it signals to the court that your AI use is transparent and well-governed.

3. Choose closed AI platforms for discovery

The Jeffries ruling makes the calculus clear: open AI platforms create defensibility risks that closed platforms do not. For eDiscovery work specifically, that means keeping discovery materials inside closed, access-restricted platforms and documenting the platform choice as part of the defensibility record.[7]

4. Bridge the training gap

If 45.5% of federal judges lack formal AI training, the percentage among litigation associates, paralegals, and eDiscovery analysts is likely comparable or worse.[1] Invest in structured AI training programs that cover hallucination risks and verification workflows, the distinction between open and closed AI platforms, and the disclosure requirements of the courts where you appear.

5. Prepare for the hyperlink future

Whether or not the mandatory hyperlink rule is adopted broadly, the underlying principle -- that every citation should be independently verifiable -- is becoming a professional expectation. Consider adopting hyperlinked citations as standard practice now, before it becomes mandatory.[6]


The asymmetry that matters most

Here's the insight that I keep returning to after twenty years in this industry: the people evaluating legal work product are now more AI-experienced than many of the people producing it.

A bankruptcy judge who uses AI daily to summarize filings and research legal issues will have a fundamentally different perspective on AI-assisted work product than a litigation associate who has never used AI beyond basic research queries. That asymmetry creates both risks and opportunities.

The risk is obvious: litigators who present AI-assisted work without understanding its limitations may face scrutiny from judges who know exactly what those limitations are. The opportunity is subtler but more important: litigators who develop genuine fluency with AI tools -- who understand when AI excels, when it fails, and how to verify its output -- will be better aligned with a judiciary that is increasingly AI-literate.

The Northwestern study's most important finding isn't the 61.6% adoption rate. It's the 32.2% daily/weekly adoption among bankruptcy judges -- the canary in the coal mine for the rest of the federal bench. As dockets grow more complex, as document volumes continue to escalate, and as AI tools become more capable and more integrated into legal research platforms, the adoption curves for district court and appellate judges will inevitably follow.


Looking ahead: the judiciary as AI bellwether

The federal judiciary's embrace of AI -- quiet, uneven, and largely unregulated -- is both a mirror and a signal for the broader legal industry. It mirrors the same patterns we see in law firms and corporate legal departments: enthusiastic early adopters coexisting with cautious skeptics, fragmented policies struggling to keep pace with technology, and a persistent gap between what AI can do and what professionals are trained to do with it.

But it also signals something more important. When judges adopt technology, they develop expectations. Judges who have used AI to summarize trial transcripts will expect litigators to handle voluminous records efficiently. Judges who have used AI for legal research will recognize -- and scrutinize -- AI-generated analysis in briefs. And judges who have confronted AI hallucinations in their own chambers will have zero tolerance for fabricated citations in filings.

The litigation technology industry -- my industry -- needs to respond to this reality. We need eDiscovery platforms that are defensibly designed for judicial scrutiny. We need training programs that prepare litigation teams for a judiciary that is increasingly AI-literate. And we need governance frameworks that treat AI adoption not as an optional experiment but as a professional obligation.

The bench has spoken -- not through a single ruling, but through 112 survey responses that reveal a federal judiciary already living in the AI future. The question for every litigator, every eDiscovery professional, and every trial support team is whether they're ready to practice in front of judges who know what AI can do -- because those judges already know what it can't.


[1]Daniel Linna et al., "Artificial Intelligence in Federal Courts: A Random-Sample Survey of Judges," The Sedona Conference Journal, Volume 27 (2026). Survey reported by LawNext and Northwestern University.
[2]Northwestern University, "Federal Judges Report Broad Adoption of AI Tools," Northwestern Now (March 2026).
[3]Bloomberg Law, "Federal Court Judicial Standing Orders on Artificial Intelligence," comparison table (updated 2026). See also Ropes & Gray, AI Court Order Tracker.
[4]Law360 Pulse, "Tracking Federal Judge Orders On Artificial Intelligence," AI Tracker (updated 2026).
[5]The Florida Bar, "11th and 17th Circuits Order Disclosure, Certification of AI Use in Court Filings," Florida Bar News (January 2026). See also Esquire Deposition Solutions, Florida Trial Courts Demand Disclosure.
[6]Oliver Roberts, "Preventing Fabricated AI Legal Authorities: The Case for a Mandatory 'Hyperlink Rule,'" National Law Review (2026).
[7]Jeffries v. Harcros Chemicals Inc., Nos. 25-2352-KHV-ADM, 25-2569-KHV-ADM (D. Kan. Mar. 25, 2026). Reported by eDiscovery Today.
[8]eDiscovery Today, "2026 State of the Industry Report," surveying 559 eDiscovery practitioners. See also Nextpoint, analysis of the report.
[9]ComplexDiscovery and EDRM, "A Complete Analysis of the Winter 2026 eDiscovery Pricing Survey," ComplexDiscovery (2026).
[10]Array and Relativity, "The AI & Legal Tech Forecast 2026: What's Working Now -- And What's Next," Trust Array (March 2026).
[11]eDiscovery Today, "Audio Recording is Authentic, Rules Court, Rejecting Deepfake Claims," eDiscovery Today (March 27, 2026).