
From TAR to GenAI: How Acquisitions, Agentic AI, and 96% Recall Are Rewriting the Rules of Document Review

April 2, 2026

HaystackID's acquisition of eDiscovery AI, DISCO's agentic Cecilia platform, and Relativity's GenAI-standard cloud push signal a tectonic shift: generative AI is replacing TAR as the backbone of document review, with recall rates above 90% and review speeds measured in millions of documents per day.

By Sid Newby | April 2026

I've spent more than twenty years watching document review evolve -- from rooms full of contract attorneys billing by the hour, to predictive coding workflows that felt revolutionary when they first arrived, to the current moment where generative AI is dismantling assumptions I once thought were permanent features of litigation. In the span of just a few months in early 2026, three moves have crystallized a transformation that's been building for two years: HaystackID acquired eDiscovery AI to operationalize GenAI across its entire service stack, DISCO launched the industry's first scaled agentic AI tool capable of processing millions of documents autonomously, and Relativity made its GenAI review tools standard for every cloud customer. These aren't incremental upgrades. They're the opening moves in a wholesale replacement of the technology-assisted review paradigm that has defined eDiscovery for the past decade. If your litigation team is still debating whether to pilot GenAI, the market has already moved past you.


The end of the TAR era

Technology-Assisted Review -- TAR -- has been the workhorse of large-scale document review since the landmark Da Silva Moore v. Publicis Groupe decision in 2012 established its legal acceptability.[1] For over a decade, TAR 1.0 (simple passive learning) and its successors TAR 2.0 and 3.0 (continuous active learning, or CAL) represented the state of the art. They were a genuine leap over manual review: faster, cheaper, and in many cases more consistent than human reviewers working through hundreds of thousands of documents.

But TAR has always had structural limitations that practitioners learned to live with. The typical TAR workflow requires a seed set -- a human-coded sample of documents that trains a classification model. Building that seed set takes time, expertise, and iterative refinement. Recall rates in production TAR deployments have generally hovered in the 70-80% range, with some well-tuned implementations reaching 90%.[2] Precision, meanwhile, varies wildly depending on the richness of the document population and the quality of the training data.

The economics of TAR also embed a hidden cost: the subject matter expertise required to train the model. Senior attorneys must spend hours coding documents, refining issue tags, and validating statistical samples before the system can run autonomously. For matters with fewer than 100,000 documents, the overhead of a TAR workflow often makes it uneconomical compared to brute-force linear review.

Generative AI changes all of this -- not by tweaking the TAR model, but by replacing its foundational architecture entirely.

Figure 1: The shift from seed-set-trained TAR workflows to instruction-driven GenAI review pipelines.


How GenAI review actually works

Where TAR requires training data, generative AI review requires instructions. Instead of coding a seed set, a review manager provides natural language descriptions of what constitutes a responsive document, a privileged document, or any other classification category. The large language model then applies those instructions across the entire document population, generating not just a classification decision but a narrative justification for each call.[3]
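As a rough sketch of this instruction-driven pattern (the function and class names here are hypothetical, and no specific vendor API is implied), the reviewer's natural language criteria take the place of the coded seed set:

```python
# Hypothetical sketch of instruction-driven review: the reviewer supplies
# plain-language criteria, and an LLM returns a decision plus a narrative
# justification for each document. llm_classify() stands in for whatever
# model endpoint a given platform actually exposes; the instruction text
# and party names are invented for illustration.
from dataclasses import dataclass

@dataclass
class ReviewCall:
    doc_id: str
    decision: str        # e.g. "responsive" or "not responsive"
    justification: str   # narrative explanation for the call

RESPONSIVENESS_INSTRUCTIONS = """
A document is responsive if it discusses pricing negotiations with
Acme Corp between 2022 and 2024, including drafts, term sheets, and
internal commentary on those negotiations.
"""

def review_document(doc_id: str, text: str, llm_classify) -> ReviewCall:
    """Apply the natural-language instructions to a single document."""
    decision, justification = llm_classify(RESPONSIVENESS_INSTRUCTIONS, text)
    return ReviewCall(doc_id, decision, justification)
```

The key structural difference from TAR is visible in the signature: the model receives instructions at call time rather than learning a classifier from coded examples, and it emits a justification alongside every decision.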

This is a profound workflow change. Consider the practical implications:

The speed differential is staggering. DISCO's Cecilia Auto Review processes documents at up to 32,000 documents per hour, with customer adoption growing over 300% since September 2024.[4] Relativity's aiR for Review has been used by over 200 customers to review approximately 25 million documents across thousands of matters, with some teams processing up to 3 million documents per day.[5] A ComplexDiscovery analysis found that for a sample set of 130,000 documents, GenAI review completed in approximately 5 days compared to 10 days for TAR 1.0 and 18-20 days for TAR 2.0/3.0 -- while manual review took 27 days.[2]

| Review Method | Time (130K docs) | Recall (typical) | Setup Requirements | Per-Document Cost |
|---|---|---|---|---|
| Manual Review | 27 days | Varies (60-80%) | Staffing, training | Highest |
| TAR 1.0 | 10 days | 70-80% | Seed set, SME coding | Moderate |
| TAR 2.0/3.0 (CAL) | 18-20 days | Up to 90% | Iterative training | Moderate |
| GenAI Review | ~5 days | 90-98% (refined) | Natural language instructions | 60-70 cents/doc |

Table 1: Comparative performance of document review methodologies. Sources: ComplexDiscovery,[2] eDiscovery AI.[6]
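The throughput and timeline figures above are easier to reconcile with a quick back-of-envelope calculation: at the cited raw classification rate, the machine work on a 130,000-document set is measured in hours, so the ~5-day end-to-end figure presumably reflects the human steps -- instruction drafting, iterative refinement, and validation sampling. A minimal sanity check:

```python
# Back-of-envelope check using figures cited in the text. Raw model
# throughput is not the same as end-to-end review time, which includes
# instruction refinement and QC sampling.
docs = 130_000
rate_per_hour = 32_000              # cited Cecilia Auto Review throughput
raw_hours = docs / rate_per_hour
print(f"Raw classification time: {raw_hours:.1f} hours")  # → 4.1 hours

end_to_end_days = 5                 # cited GenAI review timeline
# The gap between ~4 machine-hours and ~5 calendar days is the human
# work wrapped around the model: prompt iteration and validation.
```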


The accuracy question: settled science or ongoing debate?

The most important question for any litigation team evaluating GenAI review is accuracy. Here, the data is increasingly compelling -- though the nuances matter.

eDiscovery AI -- the company now acquired by HaystackID -- has published benchmarks showing their platform regularly achieves recall rates of 96-98% with precision exceeding 96%.[6] A specific case study demonstrated a recall rate of 99.01% and precision of 96.57%, both significantly surpassing standard benchmarks for any review methodology.[6] These numbers dwarf the typical TAR target of 70-75% recall.

Relativity's aiR for Review has shown similarly impressive results in production deployments. Purpose Legal, one of Relativity's customers, completed a 300,000-document review in one week, achieving an 85% reduction in review time. Users of aiR for Privilege have reported up to 80% reductions in review time while identifying thousands of previously missed privileged documents -- a finding that should alarm anyone who assumed their prior TAR-based privilege reviews were comprehensive.[5]

However, the story isn't entirely one-sided. The ComplexDiscovery analysis found that GenAI review achieves approximately 70% recall "right out of the box" -- comparable to TAR 1.0 -- and reaches the 90%+ range only after iterative refinement of prompts and instructions.[2] This is an important caveat: GenAI review is not a magic wand. The quality of the output depends heavily on the quality of the instructions, the complexity of the issues, and the review team's willingness to iterate.

The critical insight is that GenAI review's accuracy ceiling is substantially higher than TAR's, even if its accuracy floor is comparable. With proper prompt engineering and quality control, GenAI consistently outperforms TAR across every metric. But achieving those results requires expertise -- just a different kind of expertise than TAR demanded.
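Recall and precision carry their standard information-retrieval definitions throughout this discussion. A short worked example with illustrative counts (these are not the case study's actual document tallies):

```python
# recall    = TP / (TP + FN): share of truly responsive docs the review found
# precision = TP / (TP + FP): share of flagged docs that are truly responsive
def recall(tp: int, fn: int) -> float:
    return tp / (tp + fn)

def precision(tp: int, fp: int) -> float:
    return tp / (tp + fp)

# Illustrative validation sample: the model found 990 of 1,000 truly
# responsive documents and flagged 35 non-responsive ones by mistake.
tp, fn, fp = 990, 10, 35
print(f"recall={recall(tp, fn):.2%}, precision={precision(tp, fp):.2%}")
# → recall=99.00%, precision=96.59%
```

Validation sampling of this kind is how both TAR and GenAI results are defended; what changes with GenAI is the ceiling those numbers can reach, not the math used to measure them.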


The HaystackID-eDiscovery AI acquisition: build vs. buy in real time

On February 26, 2026, HaystackID announced its acquisition of eDiscovery AI, a deal that signals how seriously the managed services market is taking the GenAI transition.[7]

The acquisition wasn't speculative. HaystackID CEO Chad Pinson described it as a response to direct client demand: "Our clients are asking for easy-to-deploy GenAI capabilities that deliver deep insights, defensible results" with adaptability to evolving use cases.[7] Jim Sullivan, eDiscovery AI's CEO, noted that the acquisition "formalizes our collaboration that has proven its value in production environments" -- indicating that the two companies had already been working together before the deal closed.[7]

What makes this acquisition particularly noteworthy is the operating structure. HaystackID will maintain eDiscovery AI as a separate business entity, continuing to serve existing clients who prefer operational separation. This dual-entity model acknowledges a market reality: some clients want GenAI capabilities embedded in their managed review workflows, while others -- particularly those with platform flexibility requirements or concerns about vendor lock-in -- want to engage eDiscovery AI's technology independently, including within Relativity and other eDiscovery environments.[7]

Michael Sarlo, HaystackID's Chief Innovation Officer, framed the deal in terms of market maturation: clients are no longer just testing GenAI -- they're "operationalizing" it across early case assessment, investigations, and regulatory response.[7] The acquisition is a build-versus-buy decision resolved in favor of buying, and it won't be the last. Expect more managed service providers to acquire or partner with GenAI-native companies as the technology moves from pilot programs to production workflows.


DISCO's agentic bet: autonomous review at scale

While HaystackID went the acquisition route, DISCO has been building internally -- and their February 2026 announcement represents perhaps the most ambitious vision for AI-powered review in the market.[4]

DISCO's enhancement to its Cecilia AI platform introduces what the company calls "the industry's first scaled agentic AI tool for fact investigation and e-discovery."[8] The distinction from other GenAI review tools is architectural: Cecilia's agentic capabilities are designed to handle entire litigation matters rather than individual documents. The system is described as a "deep-thinking, autonomous, multi-step reasoning agent" that assembles the broad facts of a matter and then goes deeper to identify connections and relationships across massive evidence sets.[8]

This is a fundamentally different proposition from document-level classification. Traditional review -- whether manual, TAR, or basic GenAI -- treats each document as an independent unit. Cecilia's agentic approach treats the entire document population as an interconnected dataset, identifying patterns and relationships that no document-by-document methodology could surface.

The pricing model is equally disruptive. DISCO announced that agentic capabilities will be available at no additional cost to existing customers.[4] In an industry where AI features have typically been priced as premium add-ons -- sometimes costing 60-70 cents per document for GenAI review[2] -- DISCO's decision to bundle agentic AI into its base platform pricing is a direct challenge to competitors who have been monetizing AI as a separate revenue stream.

CEO Eric Friedrichsen has emphasized that the enhancement provides "more detailed analysis" for large-scale matters, while Chief Product Officer Richard Crum has drawn a distinction between DISCO's approach and competitors by focusing on data volume processing capability -- targeting "large, complex workflows" rather than discrete document-level tasks.[4]

The market response has been significant: a 2025 DISCO study found that 72% of legal professionals expect generative AI adoption within 12 months, with 35% already using it.[4] Cecilia's customer adoption has grown over 300% since September 2024, suggesting that the agentic approach is resonating with practitioners who have outgrown basic AI-assisted workflows.


Relativity's cloud play: GenAI as table stakes

The third pillar of the 2026 GenAI review revolution comes from the industry's dominant platform. At Relativity Fest 2025, Relativity announced that its generative AI solutions -- aiR for Review and aiR for Privilege -- would become standard features in RelativityOne, its cloud platform.[5]

This is a seismic decision. Relativity is the infrastructure layer for the majority of large-scale eDiscovery, and by making GenAI review standard rather than optional, the company is signaling that generative AI is no longer a premium feature -- it's the baseline expectation for modern document review.

The move also reinforces Relativity's cloud-only trajectory. By bundling aiR tools exclusively into RelativityOne (the cloud offering), Relativity creates another powerful incentive for the remaining on-premises holdouts to migrate. The message is clear: if you want GenAI review capabilities, you need to be in the cloud.[9]

Phil Saunders, Relativity's CEO, stated the rationale plainly: "We believe generative AI is the undeniable future of review, and we're making it easy for all RelativityOne customers to experience the tremendous benefits it offers."[5] The company also launched Rel Labs, an innovation hub designed to accelerate legal technology transformation, and has seen strong early adoption of aiR for Case Strategy, with over 40 customers analyzing more than one million documents since its launch.[5]

The aiR ecosystem expanding

Relativity's GenAI suite now encompasses multiple interconnected tools, including aiR for Review, aiR for Privilege, and aiR for Case Strategy.

The breadth of this suite illustrates a key trend: GenAI in eDiscovery is not limited to document classification. It's expanding upstream (early case assessment), downstream (case strategy and trial preparation), and laterally (privilege review, which has historically been one of the most expensive and error-prone phases of litigation).


What this means for litigation teams: five practical implications

1. The TAR-to-GenAI migration is no longer optional

With all three major market players -- Relativity (platform), DISCO (technology), and HaystackID (managed services) -- committing to GenAI as their primary review technology, litigation teams that continue to rely exclusively on TAR are falling behind a rapidly moving standard of care. This doesn't mean TAR disappears overnight, but the trajectory is clear.

2. Review expertise is being redefined

The most valuable skill in document review is shifting from statistical sampling and seed set construction to prompt engineering and instruction design. Review managers who can write clear, precise natural language instructions will produce better results than those who are expert at training TAR models. Law firms and service providers need to invest in this new competency now.
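To make "instruction design" concrete, here is an invented before/after pair -- the kind of vague criterion that produces only out-of-the-box accuracy versus a refined version. Both strings are illustrative and come from no vendor or matter:

```python
# Illustrative only: instruction refinement is the GenAI analogue of
# iterative TAR training rounds. A vague criterion leaves the model
# guessing at scope; a refined one pins down subject, time window,
# document types, and explicit exclusions.
VAGUE = "Flag documents about the merger."

REFINED = (
    "Flag a document as responsive if it discusses the proposed merger, "
    "including valuation analyses, board communications, diligence "
    "requests, or integration planning, dated January 2023 through the "
    "signing date. Routine press clippings are not responsive."
)
```

The refined version does the work a seed set used to do: it encodes the review manager's judgment in a form the system can apply consistently at scale.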

3. Privilege review is the immediate high-value target

The data from Relativity's aiR for Privilege deployments -- identifying thousands of previously missed privileged documents -- should be a wake-up call. If GenAI is finding privileged documents that prior review methodologies missed, the implication is that prior productions may have included inadvertently produced privileged material. Litigation teams should consider running GenAI privilege checks against previously reviewed datasets as a risk mitigation measure.

4. Pricing pressure will accelerate

DISCO's decision to include agentic AI at no additional cost puts pressure on every competitor that currently charges separately for AI features. Expect Relativity's decision to make aiR standard to trigger similar moves across the market. The per-document economics of review will continue to decline, which is good news for litigation teams and their clients.

5. Defensibility frameworks need updating

The legal standards for accepting AI-assisted review were developed in the TAR era. Courts approved TAR based on statistical validation -- recall rates, precision metrics, and sampling protocols. GenAI review produces different kinds of evidence of quality: narrative justifications for each decision, instruction logs, and iterative refinement histories. The eDiscovery community needs to develop updated defensibility frameworks that courts can evaluate, and practitioners who do this work early will have a significant advantage.[10]


The access to justice dimension

There's a dimension to this transformation that doesn't get enough attention: access to justice.

For decades, document review has been one of the most expensive components of civil litigation. The cost of putting human eyes on hundreds of thousands of documents has effectively priced many litigants -- individuals, small businesses, underfunded public interest organizations -- out of the system. They either couldn't afford to bring meritorious claims or were forced to settle because the cost of discovery would exceed the value of the dispute.

GenAI review changes this calculus fundamentally. When a single technology platform can process 3 million documents per day at quality levels exceeding human review, the cost barrier drops by orders of magnitude. A plaintiff's firm handling a consumer protection case against a major corporation can now conduct the same quality of document review that, five years ago, was available only to AmLaw 100 firms with seven-figure discovery budgets.

This is not a theoretical benefit. It's already happening. The combination of all-inclusive pricing models (DISCO), GenAI-standard platforms (Relativity), and operationalized managed services (HaystackID/eDiscovery AI) is creating an ecosystem where sophisticated document review is accessible to a much broader range of practitioners and parties than ever before.


Looking ahead: the next twelve months

The GenAI document review revolution is moving faster than most practitioners appreciate. Over the next twelve months, expect the trends described above to compound: more acquisitions of GenAI-native companies by managed service providers, more AI capabilities folded into base platform pricing, and continued pressure on on-premises holdouts as GenAI features remain cloud-only.

The question for litigation teams is no longer whether to adopt GenAI review, but how to implement it effectively -- and how to ensure that the transition from TAR to GenAI is managed in a way that maintains defensibility, controls cost, and leverages the technology's full potential.

Twenty years ago, the shift from paper to electronic discovery felt seismic. A decade ago, the shift from manual review to TAR felt transformative. The shift from TAR to GenAI is at least as significant as either of those transitions, and it's happening in a compressed timeframe. The firms and teams that move now will define the standard of practice for the next decade. The ones that wait will be playing catch-up in a market that has already moved on.


[1] Da Silva Moore v. Publicis Groupe, 287 F.R.D. 182 (S.D.N.Y. 2012), established TAR as an acceptable methodology for document review in federal litigation. Case reference via Sedona Conference.
[2] ComplexDiscovery, "eDiscovery Review in Transition: Manual Review, TAR, and the Role of AI" -- comparative analysis of review methodology speed, cost, and accuracy metrics. ComplexDiscovery.
[3] Alvarez & Marsal, "From TAR to GenAI: Rethinking eDiscovery with Large Language Models" -- analysis of instruction-driven review workflows replacing seed-set-trained models. Alvarez & Marsal.
[4] LawNext, "DISCO Launches Scaled Agentic AI Tool for Large Discovery and Fact Investigation Matters" (February 2026) -- DISCO's agentic Cecilia AI announcement, pricing model, and adoption metrics. LawNext.
[5] eDiscovery Today, "Relativity Announces Generative AI Solutions to Be Standard in Cloud Offering" (October 2025) -- Relativity's decision to make aiR tools standard in RelativityOne, performance metrics, and adoption data. eDiscovery Today.
[6] eDiscovery AI, "Evaluating Performance in eDiscovery Predictive Coding" -- benchmark data showing 96%+ precision, 98% recall, and a case study achieving 99.01% recall. eDiscovery AI.
[7] HaystackID, "HaystackID Acquires eDiscovery AI to Accelerate Client-Driven GenAI Workflows" (February 26, 2026) -- acquisition announcement, operating structure, and executive commentary. HaystackID.
[8] DISCO, "C-Suite Q&A -- A Deeper Dive on DISCO's Agentic AI" -- technical details on agentic Cecilia capabilities and the architectural approach to matter-level analysis. DISCO Blog.
[9] Relativity Blog, "The New Review: Mapping the Evolution of TAR, Generative AI, and the Attorney's Role in e-Discovery" -- Relativity's perspective on the TAR-to-GenAI transition and cloud strategy. Relativity Blog.
[10] eDiscovery AI, "GenAI Review and Legal Standards for Acceptance" -- analysis of emerging defensibility frameworks for generative AI-assisted review. eDiscovery AI.