Product

April 28, 2026

Most AI adoptions in disputes practices stall for three documented reasons. Learn the strategy-first approach that separates firms achieving outsized ROI from those stuck watching demos.

AI for Law Firms: A Practitioner's Guide to Getting It Right

You are at capacity. You are not willing to hire fast. You have watched vendors of AI for law firms make promises that do not survive contact with a real matter. And you are still being asked, by managing partners and clients alike, when you are going to "do something with AI."

Here is the short answer, and the thesis of this entire series: AI can scale your practice without compromising your hiring discipline or your proximity to documents. But most adoptions stall for three repeatable, documented reasons. Avoiding them does not require a purchase decision. It requires a posture. There are three things you can do this week, before evaluating a single tool. First, identify the one task in your practice that consumes the most associate time but requires the least legal judgment. Second, write a one-page AI usage policy for your team, even a rough one. Third, run a structured test on a sample of your own documents before committing to any vendor. The nine posts in this series turn each of those actions into a full chapter.


How Fast Is the Legal AI Market Actually Moving?

As of 2026, roughly 86% of midsize law firms report using AI tools in some form, according to Clio's Legal Trends for Mid-Sized Law Firms report. Around 63% report having formally adopted generative AI, according to the same Clio report. That number has moved fast. Alongside it, vendor noise has compounded at roughly the same rate. Legal AI tools priced for large firms are being marketed to midsize practices with a different cost structure and a different risk tolerance.

The result is a market that looks busy but produces a lot of failed or abandoned implementations. Partners who have been burned once tend to exit the conversation entirely. That is the wrong response. But it is an understandable one.


Why Most AI Adoptions in Law Firms Stall

The research is consistent across multiple published surveys. Three causes account for the majority of failures.

First, the economic model creates friction for some firms. In practices running on the billable hour, efficiency gains reduce revenue unless pricing is restructured. Partnership compensation tied to individual hours creates a financial disincentive for any legal AI tool that actually works. That friction is real and has to be named before it can be managed. But it is not universal. Many disputes boutiques and midsize practices already operate on fixed fees or capped engagements. For those firms, AI efficiency translates directly into margin improvement. No pricing restructure required. The billable-hour problem is a reframing challenge, not a reason to reinvent the business model.

Second, accuracy anxiety is not irrational. A 2025 empirical study by Stanford RegLab (Magesh et al.) tested one leading legal AI platform on real queries. It was accurate in only 65% of queries and produced hallucinations in more than 17% of cases. Independent reporting through April 2026 puts the count of court and tribunal decisions worldwide that have confronted AI-generated hallucinations in legal filings at over 1,100. For a disputes partner whose name is on the work product, this concern is well-founded. The answer is not to avoid legal AI tools altogether. It is to select tasks where the cost of an error is low and verification is fast.

Third, change management is treated as an afterthought. Industry surveys covering 2025-2026 indicate that more than half of firms had still not provided any training on the responsible use of generative AI. The firms that fail typically deploy a tool before building any internal consensus or usage policy. The tool sits unused. The vendor gets blamed. The real cause was sequencing.

What the Successful Minority Is Doing Differently

The counterpoint matters, and it should be stated plainly. A strategically minded minority of firms is achieving significant, measurable results. Thomson Reuters' 2026 ROI reporting indicates that firms with a formal AI strategy see roughly 3.9 times higher ROI than those without. Around 65% of midsize firms report handling greater work volume without adding headcount, according to the same Clio report. About 38% of attorneys report saving one to five hours per week using AI tools, according to the same Thomson Reuters reporting.

These firms did not have better legal AI software. They had a clearer plan, tighter use-case selection, and governance in place before rollout. Document-intensive tasks, specifically sorting productions and building chronologies, were the proving ground. Their success is replicable. That is what this series documents.


What This Series Is and Is Not

Nine posts. Written for practice heads.

Professional duties around AI competence, confidentiality, and oversight are converging across jurisdictions. In the US, ABA Formal Opinion 512 established that existing duties of competence, confidentiality, and supervision apply directly to AI-generated work product. The supervising attorney is responsible.

European practices face an equivalent and, in some respects, more binding framework. The CCBE guide on generative AI use by lawyers warns explicitly against entering client data into generative AI prompts without robust safeguards. The Law Society of England and Wales requires human verification and oversight as an ongoing operational requirement. France's CNB has issued a detailed guide on professional secrecy and human oversight requirements. Germany's BRAK mandates that AI serve only as an auxiliary tool, with the lawyer retaining final control over all outputs. Layered over all national obligations is the EU AI Act (Regulation (EU) 2024/1689), which classifies law firms as "deployers" with enforceable compliance obligations that began phasing in from 2025. In June 2025, the UK High Court addressed filings containing dozens of fake case-law citations and warned senior lawyers directly about AI misuse. The jurisdictional details differ. The direction is the same: the lawyer signing the work product is accountable for the AI output it contains.

This series is written for that person.

Here is what each post covers:

  1. Why this series exists (this post) -- The three failure causes, the three actions, and the promise of the series.
  2. The economic model problem -- How to restructure the value conversation inside your own firm before any tool touches a matter. This post maps the billable-hour incentive problem and offers concrete language for reframing AI as a capacity investment rather than an efficiency project.
  3. Choosing the right first use case -- How to identify the task with the highest leverage and the lowest risk. Document-heavy disputes work is typically the best starting point. You will leave with a one-page scoring framework for ranking candidate tasks inside your own practice.
  4. Building your AI usage policy -- A one-page policy framework any practice head can adapt in an afternoon, before any vendor conversation.
  5. What to demand from a vendor -- The five questions that separate serious legal AI tools from hype-priced ones, and how to read the answers.
  6. Change management for legal teams -- How to sequence adoption so that resistance drops before rollout begins, not after.
  7. Supervising AI output -- What ABA Formal Opinion 512 and its international equivalents actually require, in plain language, and how to build a review protocol your team will use.
  8. Measuring what matters -- Which metrics are worth tracking and which ones vendors use to obscure weak performance.
  9. Building toward scale -- How the firms succeeding at AI for law firms today are positioning for a market where AI fluency is an expectation, not a differentiator.

The Three Actions You Can Take This Week

Each deserves a post of its own. Here is enough to start now.

Action One: Identify Your Highest-Leverage, Lowest-Judgment Task

In most disputes practices, this is some variant of document sorting, chronology building, or first-pass review of productions. The task has to meet two criteria: it consumes real associate time, and a wrong answer is catchable before it damages anything.

Do not start with legal research. Do not start with drafting. Start with the task where human review of the output takes five minutes, not five hours. Write it down on one page. That page is your AI pilot.

Action Two: Write a One-Page Usage Policy, Even a Rough One

The guidance across jurisdictions is clear: existing professional duties apply. Competence means understanding what the legal AI tool is doing. Confidentiality means knowing where your data goes. Supervision means reviewing every output before it goes to a client or a tribunal.

A policy does not need to be long to be useful. It needs to answer three questions. What tasks may lawyers on this team use AI for? What data may they upload? Who reviews the output before it leaves the firm? One page. Written this week. Revised later.

Action Three: Run a Structured Test Before Committing to Any Vendor

Do not ask for a documented accuracy rate. Vendors do not publish these, and the numbers that do surface rarely come from tests on documents that resemble your work. There are four things you can actually verify.

Run a test on a sample of your own documents before signing anything. Provide a set of documents from a closed matter and evaluate the output yourself against what you know the correct answer to be. That is the only benchmark that matters for your practice.

Ask where your data is stored and whether it is used to train models. This question has a yes-or-no answer. If the vendor cannot give one in writing, you have your answer.

Ask for a written description of the quality control process and who is accountable inside their organization. Ask how the vendor identifies and remediates errors in model output.

Ask whether the tool has been tested specifically on disputes or arbitration documents, not transactional work. Contract review and document-intensive litigation review are different tasks. A tool optimized for one does not transfer cleanly to the other.


Why Document-Heavy Disputes Work Is Often the Best Starting Point

Disputes and arbitration practices carry a structural advantage when adopting AI for law firms. Document volumes are high, tasks are repeatable, and the performance bar for a first pass is simple: faster than your best associate, and reviewable in minutes rather than days. That gap is where AI earns its keep. Human review remains mandatory. The point is that the review time compresses dramatically.

The firms seeing the clearest early wins are starting with document-intensive tasks: sorting productions, building chronologies, flagging relevant passages in large bundles. These tasks consume serious associate hours. They require attention, not judgment. And the output is always reviewed before it matters. That combination of high volume, a low judgment threshold, and mandatory human review is the right starting condition for any legal AI tool adoption.

At Kallam, we built around this observation. The first capability we designed was document processing for disputes workflows: automatic date extraction, title generation, and chronology building the moment documents are uploaded. No prompting. No manual intervention. The task that typically takes an associate hours on the first day of a matter is handled in minutes. That is the kind of result that builds internal confidence for broader adoption.


If You Read Nothing Else from This Post, Read This

Most AI adoptions in disputes practices stall for three reasons. The billing model creates friction for hourly firms and a clear margin opportunity for fixed-fee ones. Accuracy concerns are grounded in real empirical data from 2025. And change management is sequenced backwards. A small but growing group of firms is proving these barriers are surmountable with a strategy-first approach. This series exists to document exactly how. You do not need to buy anything to start. You need a use case, a policy, and a way to test a vendor. Those three things take less than a week.

If you want a sounding board for your own situation, we are happy to talk. Start a conversation here.