
When Agents Run Discovery: The Rule 26(f) Gap Nobody Mapped

April 20, 2026

Three federal magistrate decisions in eight weeks put autonomous AI agents at the heart of Rule 26(f). The disclosure framework has not caught up, and the gap favors whoever can deploy AI fastest while disclosing the least.

By Claude and Gemini with Sid Newby | April 2026

On March 30, 2026, a magistrate judge in Denver ordered a pro se plaintiff to disclose the name of every AI tool he had used on documents marked "CONFIDENTIAL" under the protective order.[1] Judge Maritza Dominguez Braswell did not compel him to turn over his prompts. She did not strip him of work product protection. She ruled, narrowly and well, that a litigant's mental impressions remain privileged. The identity of the AI platform that processed them does not. Ten days to comply. It is the first published federal order to treat an AI tool's presence in a discovery workflow as a disclosable fact. It landed in an employment discrimination case with a self-represented plaintiff. Not a ten-figure commercial matter. Not a Sedona-certified ESI protocol. The order is short. Its implications are not.


The Rule Nobody Updated

Federal Rule of Civil Procedure 26(f) was last substantively rewritten in 2015. The Advisory Committee finally conceded that ESI was going to need its own meet-and-confer discipline.[2] The amendments pushed parties to agree on preservation, form of production, and privilege protocols at the front of a case. Litigating those questions mid-review was a recipe for motions practice. It worked. Rule 26(f) conferences now hash out search terms, date ranges, custodian lists, and dedup rules. For the last decade or so, they have also addressed technology-assisted review and the terms under which it will be used.

The legal precedent for TAR disclosure runs through Da Silva Moore v. Publicis Groupe.[3] In 2012, Magistrate Judge Andrew Peck became the first federal judge to endorse predictive coding. The TAR era followed. The industry developed stable norms. A producing party discloses its intent to use TAR at the Rule 26(f) conference. The parties negotiate seed set disclosure and validation statistics. The court defers to methodology under Sedona Principle 6. Decisions like Hyles v. City of New York reinforced that courts would not compel a specific methodology, so long as the chosen approach produced defensible results.[4]

That equilibrium held for roughly a decade. It is no longer holding.


Figure 1: TAR precedent took roughly a decade to stabilize. Agentic AI has produced three major federal decisions in eight weeks.


What Changed Under the Hood

Technology-assisted review is a classifier. A lawyer trains it on a seed set. The model scores the remaining population. Humans review the output before production. The human is not optional. Every deployed TAR workflow — continuous active learning, simple passive learning, predictive coding 1.0 — ends with a reviewer making relevance and privilege calls. The machine surfaces. The human decides. That is why the TAR meet-and-confer conversation was tractable. Both sides knew what was being disclosed: a statistical methodology wrapping a human review process.
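The TAR loop described above can be reduced to a toy sketch: a scorer trained on a human-coded seed set ranks the remaining population, and nothing is produced until a reviewer confirms each surfaced document. This is illustrative only; the term-count scorer and every name in it are stand-ins for a real classifier, not any vendor's implementation.

```python
from collections import Counter

def train_seed_model(seed_docs):
    """Tally term frequencies from human-coded responsive / non-responsive seed docs."""
    resp, nonresp = Counter(), Counter()
    for text, label in seed_docs:
        (resp if label == "responsive" else nonresp).update(text.lower().split())
    return resp, nonresp

def score(doc, model):
    """Crude relevance score: responsive-term hits minus non-responsive-term hits."""
    resp, nonresp = model
    words = doc.lower().split()
    return sum(resp[w] for w in words) - sum(nonresp[w] for w in words)

def surface_for_review(population, model, cutoff=0):
    """The machine surfaces; the human decides. Everything returned here
    still goes to a reviewer before any relevance or privilege call is final."""
    return [d for d in population if score(d, model) > cutoff]

seed = [("merger pricing memo", "responsive"),
        ("cafeteria menu update", "non-responsive")]
model = train_seed_model(seed)
hits = surface_for_review(["draft merger term sheet", "new cafeteria menu"], model)
```

A production CAL system swaps the term counter for a trained classifier and iterates as reviewers code more documents, but the shape, score then human review, is the same.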

Agentic systems collapse the wrapper. An agent plans a task, breaks it into sub-tasks, runs them against unstructured data, and returns a result. In a document review context, that means an agent can read a production population. It can propose a privilege log. It can draft a responsiveness narrative. It can assemble a timeline of events across custodians. All of it happens without a human pressing "run" between steps. Exterro's April 2026 guidance positions agentic review as capable of reconstructing "the who, what, and when" of events autonomously.[5] Relativity's February 2026 toolkit categorizes agents by autonomy levels, from step-by-step human confirmation to end-to-end execution.[6] Every major platform now ships some version of this.
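That chain, planner, retriever, classifier, summarizer, can be sketched minimally, run end to end with no human confirmation between steps. Every name here is hypothetical; real platforms expose far richer (and, ideally, logged) interfaces.

```python
def planner(task):
    # The agent decomposes the task itself; no human chooses the steps.
    return ["retrieve", "classify", "summarize"]

def retriever(corpus, keyword):
    return [d for d in corpus if keyword in d.lower()]

def classify(docs):
    # Stand-in for a privilege call the agent makes without a reviewer.
    return {d: ("privileged" if "counsel" in d.lower() else "responsive")
            for d in docs}

def summarize(calls):
    priv = sum(1 for v in calls.values() if v == "privileged")
    return f"{len(calls)} documents reviewed, {priv} flagged privileged"

def run_agent(corpus, keyword):
    state = {}
    for step in planner("review production"):
        if step == "retrieve":
            state["docs"] = retriever(corpus, keyword)
        elif step == "classify":
            state["calls"] = classify(state["docs"])
        elif step == "summarize":
            state["summary"] = summarize(state["calls"])
    return state  # no human pressed "run" between any of these steps

corpus = ["email from outside counsel re merger", "merger press release draft"]
result = run_agent(corpus, "merger")
```

The point of the sketch is the control flow, not the logic: the privilege call happens inside the loop, with nobody between steps.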

The cost pressure is real. Document review has historically consumed roughly three-quarters of total discovery spend.[5] Compress a 30-reviewer week into a 2-reviewer day and the economic argument is over before it starts. That is not what this post is about. This post is about what happens in the eight weeks between "we use an agent" and "we tell opposing counsel and the court we use an agent." Right now, those two events are not connected. No rule connects them. No ESI protocol clause connects them. No judicial opinion worth a damn connects them — with a handful of exceptions that arrived in the last sixty days.


The Three Cases That Changed Everything in Q1

In February and March of 2026, three federal magistrate judges issued opinions that together map the new disclosure terrain. They do not agree with each other. That is part of the point.

| Case | Court | Ruling | Key Holding |
| --- | --- | --- | --- |
| Warner v. Gilbarco | E.D. Mich., Feb. 10, 2026 | Work product preserved | AI platforms are "tools, not persons"; disclosure to them is not disclosure to an adversary |
| United States v. Heppner | S.D.N.Y., Feb. 17, 2026 | No privilege, no work product | Publicly available AI platform with disclosure-permissive TOS defeats any confidentiality expectation |
| Morgan v. V2X, Inc. | D. Colo., Mar. 30, 2026 | Mental impressions protected; tool identity disclosable | Protective order requires vendor contract prohibiting training, onward disclosure, and retention |

Table 1: The three federal AI-in-discovery decisions that reset the Rule 26(f) conversation. Sources: [7][8][1]

Warner and Heppner were decided one week apart. They reached opposite conclusions on facts that look similar from a distance: a litigant using a generative AI chatbot in connection with a legal matter. Perkins Coie's analysis calls both rulings "ultimately consistent" because the outcomes track the differing terms of service and tools used.[7] Fine. For a litigator trying to build a practice norm, "ultimately consistent once you read six footnotes of contractual analysis" is not consistent. It is a split waiting to ripen.

Morgan matters most for Rule 26(f) purposes. Judge Dominguez Braswell did something the other two courts did not. She translated the abstract privilege question into an operational protective order. The order requires any AI tool used on confidential information to be bound by a vendor contract that does three things: (1) prohibits the use of inputs to train models, (2) restricts onward disclosure except as necessary for service delivery, and (3) permits deletion of data on demand.[9] No training. No disclosure. Deletion on demand. Those three conditions read like the first clause of every serious enterprise AI procurement contract written in the last eighteen months. That is the point. Judge Dominguez Braswell is not inventing anything. She is naming what reasonable parties have already done privately. She is making it a judicially enforceable floor.

Sidley Austin's April 2026 write-up is explicit about next steps. Parties should "address these issues directly in protective order negotiations before discovery begins." Not in informal side agreements. Sidley calls for a shift "away from general prohibitions toward more specific, operational requirements."[10] Good advice. It is also an indictment of how most ESI protocols are currently drafted.


The Rule 26(f) Gap

The specific gap is this. Rule 26(f) requires parties to develop a discovery plan. The plan must address ESI preservation and any claims of privilege or trial-preparation material protection.[2] The rule does not say how parties must address agentic AI. The rule does not know agentic AI exists. It says only to address "issues."

In practice, ESI protocols contain a section, typically titled "Technology-Assisted Review" or "Use of Analytics." It covers predictive coding, search term negotiation, sometimes email threading. What ESI protocols do not typically contain is any clause answering basic agentic questions: which agents ran on the production, at what autonomy level, under what vendor contract terms, and with what audit trail.

Ask the average litigation support director when their firm last updated its ESI protocol template to address even one of those questions. The answer, in the firms I have seen, is "we have not." Some firms have paragraph-length "use of AI" clauses that were drafted in 2023 when the most aggressive use case on the table was GPT-4 summarizing hot documents. That is a different world.


Figure 2: The decision path between a Rule 26(f) conference and a defensible agent-assisted production. Most firms have not drafted the protocol clauses needed to reach the green box.

The deeper problem is structural. Rule 26(f) assumes a kind of methodological transparency that agentic systems actively resist. When a party uses continuous active learning, opposing counsel can ask about seed set composition, responsive rate at the cutoff, and F1 score at production. Courts have sided with the request. Those numbers are auditable because the model is auditable in principle. An agentic system chains a planner, a retriever, a classifier, and a summarizer. It is auditable only if the vendor has built the telemetry. Some vendors have. Most have not. Vendors sell speed. The sales cycle rewards "two seconds per document" over "every decision is logged with the prompt, the context, and the model version."

That is the operational gap. The doctrinal gap is related but distinct. Even where the audit trail exists, no one has told litigants they need to disclose its existence at the Rule 26(f) stage. The rules of civil procedure are silent. Most local rules are silent. The model ESI protocols maintained by the Sedona Conference and by several federal districts predate the agentic shift.


What the ABA Already Said, and Why It Doesn't Close the Gap

The American Bar Association tried to get out ahead of this in July 2024. The Standing Committee on Ethics and Professional Responsibility issued Formal Opinion 512.[11] Opinion 512 is sober, careful work. It holds three things. First, Model Rule 1.1 requires lawyers to have "a reasonable understanding of the capabilities and limitations" of any GAI tool they use. Second, verification scales with the task: "a document review requires more independent review than work using the tool to generate ideas." Third, lawyers retain a non-delegable duty to supervise AI output under Rules 5.1 and 5.3.

Opinion 512 is a competence framework. It is not a disclosure framework. It tells you what you owe your client. It does not tell you what you owe the court or opposing counsel. The distinction matters. A party can be fully compliant with Opinion 512 — understand the tool, verify the output, supervise the review — and still produce a document set the opposing side cannot see into. That is the Morgan gap. Opinion 512 did not close it.

The Sedona Conference Working Group 13 is now entering its second year as a dedicated AI-and-law drafting body. WG13 held its annual meeting in Austin on April 8, 2026. The focus was "integrated, agentic systems capable of executing multi-step legal tasks" and governance frameworks for validation and defensibility.[12] WG13 drafting projects will likely produce the first serious industry consensus documents on agent disclosure. But consensus documents take eighteen to twenty-four months to move from inaugural meeting to public comment to published principles. The courts are moving faster.


The Meet-and-Confer Questions That Matter Now

For litigation teams drafting ESI protocols between now and the end of 2026, the following questions should be addressed directly at the Rule 26(f) conference. Not as a checklist. As the actual substance of the discovery negotiation.

1. Which AI tools are deployed in the production workflow, and at what autonomy level?

Relativity's autonomy taxonomy is a useful starting vocabulary. Level 1 agents require human confirmation between every step. Level 4 agents execute end-to-end without interruption.[6] The disclosure obligation should scale with autonomy. A Level 4 agent making privilege calls without human confirmation is functionally making the producing party's privilege determinations. That is not a methodology detail. That is a disclosure-triggering fact.

2. Does the vendor contract satisfy the Morgan conditions?

The three-part Morgan standard — no training on inputs, no onward disclosure, deletion on demand — is becoming the de facto protective order floor. Producing parties should be able to represent, on the record, that their tooling meets the standard. Receiving parties should be asking.
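The three conditions reduce to a mechanical check, which is exactly why they work as a protective order floor. A hedged sketch, assuming a vendor contract already parsed into boolean flags; the key names are my own shorthand, not the Morgan order's language:

```python
MORGAN_CONDITIONS = ("no_training_on_inputs",
                     "no_onward_disclosure",
                     "deletion_on_demand")

def meets_morgan_floor(contract):
    """True only if every Morgan condition is affirmatively satisfied."""
    return all(contract.get(c) is True for c in MORGAN_CONDITIONS)

def missing_conditions(contract):
    """What the receiving party should raise at the Rule 26(f) conference."""
    return [c for c in MORGAN_CONDITIONS if contract.get(c) is not True]
```

The `is True` check is deliberate: a contract that is silent on a condition fails the floor, which mirrors how a court is likely to read it.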

3. Is the decision trail reconstructable?

This is the hardest one. An agent surfaces a document as non-responsive, or as privileged, or as a timeline pivot. Can the producing party show its work? The prompt. The retrieved context. The model version. The classifier output that led to the decision. If the answer is no, the production is not defensible under Sedona Principle 6. The tool is new. The defensibility principle is not. Courts have applied it since Da Silva Moore.
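What a reconstructable decision trail looks like at the record level can be sketched with hypothetical field names (no vendor schema is implied): one append-only log entry per agent decision, carrying the prompt, the retrieved context, the model version, and the output.

```python
import datetime
import hashlib
import json

def log_agent_decision(doc_id, prompt, context_doc_ids, model_version, decision):
    """One structured record per agent call; append to write-once storage in practice."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "doc_id": doc_id,
        "prompt": prompt,  # work product; disclosure terms are negotiated at 26(f)
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),  # integrity check
        "context_doc_ids": context_doc_ids,  # what the retriever handed the model
        "model_version": model_version,
        "decision": decision,  # e.g. "non-responsive", "privileged"
    }
    return json.dumps(record)
```

The hash lets the producing party prove later that a disclosed prompt is the prompt that actually ran, even where the prompt text itself is withheld as work product.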

4. Has training data been scrubbed of inputs that could later surface as discoverable?

Consumer AI platforms — the kind that appear in a Heppner-style fact pattern — routinely retain inputs. Enterprise platforms typically do not. "Typically" is doing heavy lifting in that sentence. The Rule 26(f) conference is the forum to confirm what the vendor's retention policy actually says. Not what the MSA summary page claims it says.

5. What is the fallback when an agent errs?

Every deployed agent fails in ways a competent human reviewer does not. The failure modes are named: hallucinated privilege, fabricated citations, phantom custodian identification. They have produced over $145,000 in Q1 2026 sanctions.[13] The ESI protocol should contemplate error correction procedures. That means it should contemplate the possibility of error in the first place.


What a Plaintiff's Firm Should Say on the Record

Most commentary on AI in discovery is written from the defense bar's perspective. The reasons are both obvious and unlovely. Defense work pays better. Defense firms have in-house knowledge management teams. Defense work is where the large document populations live. The result is a discourse in which "how do we deploy agentic review efficiently" gets more airtime than "how does a plaintiff's firm with four lawyers and no litigation support staff evaluate the defendant's production methodology."

Here is what that plaintiff's firm should say at the Rule 26(f) conference, on the record, in every case where the defendant's production volume suggests agent-assisted review is likely.

First, ask whether agentic AI tools will be used at any phase of the production workflow — collection, privilege review, QC, any of it. Get a yes or no. Second, request an ESI protocol clause requiring disclosure of any agentic tool used, along with a representation that the vendor contract satisfies the Morgan conditions. Third, request that the production include, on request, documentation of the audit trail for any document categorized by an agent as non-responsive or privileged. Fourth, reserve the right to challenge the methodology under Da Silva Moore and its progeny if the production appears to systematically under-produce a category of document the plaintiff has reason to believe exists.

None of those four requests is controversial. All four are grounded in existing rule, existing precedent, or existing ethics opinion. The reason they are not already boilerplate is simple. Plaintiffs' firms are not yet asking. Defendants' firms have no incentive to offer. That will change when the first published order grants a spoliation motion on the basis of undisclosed agent use. The first such order is coming.


Figure 3: Five areas where current ESI protocol templates most urgently need new language. Weighting reflects frequency of gap in protocols reviewed against the Morgan framework.


Who Gets Hurt When the Rules Don't Keep Up

Read this story as a procedural technicality and you will miss the part that matters. These cases were not about an ESI protocol update for the AmLaw 100 to sort out among themselves over the next two bar conferences.

The Morgan plaintiff was pro se. The Warner plaintiff was pro se. The Heppner defendant was a criminal defendant. None of the three first-impression AI discovery courts got their facts from a sophisticated corporate party with a general counsel on the phone with Cleary Gottlieb. They got their facts from the people who could least afford the dispute. For them, the cost of figuring out whether an AI tool cost them their privilege was close to the whole economic stake of the case.

The disclosure asymmetry cuts against the smaller party. A large defendant deploying an agentic review system has in-house counsel, vendor relationships, and staff to draft a defensible-looking ESI clause. A plaintiff's firm trying to figure out whether the production is complete has none of that. The gap in Rule 26(f) practice is not neutral. It favors the party with more resources to deploy AI and fewer reasons to disclose what they deployed.

The fix is not exotic. It is a clause in the model ESI protocol that the Sedona Conference, the Northern District of California, or any other authoritative template custodian chooses to adopt. The clause requires disclosure of agentic tool use, representation of Morgan compliance, and reservation of the right to audit the decision trail. That clause, if adopted widely, would level the discovery field more than any individual piece of case law. It would also make agent-assisted review harder to sell as a black box to clients who cannot afford to understand what they are buying. Both outcomes are good. Neither is happening automatically.


What to Do Between Now and the Next Filing

For practitioners — and for the litigation support teams that actually draft ESI protocols before the lawyers sign them — the near-term work is straightforward and urgent. Update your firm's ESI protocol template to include an agent-disclosure clause. Inventory your existing vendor contracts for Morgan compliance before a court reads the contract first. On the plaintiff's side, start asking the disclosure questions at Rule 26(f) in every case that warrants them. On the defense side, be the firm that volunteers the answers before being asked. The first firm whose agent-driven production gets struck for non-disclosure will become a very expensive case study very quickly.

The larger work will take years. Federal rule amendments. Sedona principle updates. Model ESI protocol revisions. That is the nature of rule-making. In the meantime, the cases are proceeding. The agents are running. The productions are being made. The Rule 26(f) conferences are happening tomorrow morning. Most of them will not touch any of this. The templates were written for a world where the machine waited for the human to press "run." That world is over. The rules have not caught up. The conference is an hour long. It is the only time the parties will talk about any of this before the production is sealed. Use it.


References

[1] Morgan v. V2X, Inc., No. 1:25-cv-01991-SKC-MDB (D. Colo. Mar. 30, 2026), discussed in "Morgan v. V2X Decision Signals a Turning Point for AI Data Privacy" (Everlaw).
[2] Fed. R. Civ. P. 26, Duty to Disclose; General Provisions Governing Discovery (Cornell Legal Information Institute).
[5] "Agentic AI Transforms eDiscovery Document Review" (eDiscovery Today, April 2026), quoting Exterro guidance.
[6] Relativity agent autonomy framework, February 2026, summarized in "When Agents Act: The Rule 26(f) Disclosure Threshold for Agentic AI in eDiscovery" (ComplexDiscovery).
[10] Sidley Austin, supra note 9.
