Everyone’s talking about whether AI will replace lawyers. That’s the wrong question. The real disruption isn’t coming from general-purpose tools like ChatGPT. It’s coming from legal LLMs, large language models trained specifically on law.
You’ve probably heard the chatter in the courtroom, at conferences, or over drinks with colleagues: speculation about what AI might do to the profession in five or ten years.
But after working directly on the technology side of legal innovation, I’ve realized that most lawyers, brilliant as they are, don’t fully grasp what AI is, how it works, or what it means for the future of law.
The truth is, we’re not even asking the right question. The real issue isn’t whether AI agents will replace lawyers. The real question is when the first legal LLM will arrive. My company, Caseway, is working on the first one in the industry.
There’s been a tendency to lump all AI into the same category. A lot of people assume tools like ChatGPT or Copilot are representative of what legal technology can do. But those weren’t built for law. They were built to answer questions about anything, from recipes to resumes, using data scraped from the entire internet. That’s not legal reasoning. That’s autocomplete on steroids.
Access to data is important for legal AI
If we’re going to build something meaningful for law firms, it has to be done differently. You need accurate legal data: millions of court decisions, tribunal rulings, and administrative findings. You need a way to collect, clean, and structure that data at scale. And you need something most AI companies don’t have: a legal model that understands how law is written, interpreted, and applied.
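To make “collect, clean, and structure” a little more concrete, here is a minimal sketch of what one structured decision might look like. The record fields and the cleaning step are my own illustration of the general idea, not a description of any existing pipeline.

```python
from dataclasses import dataclass

# Hypothetical record shape, for illustration only.
@dataclass
class DecisionRecord:
    citation: str       # e.g. a neutral citation
    court: str          # issuing court or tribunal
    jurisdiction: str   # e.g. "BC", "ON", "Federal"
    decision_date: str  # ISO date, e.g. "2024-03-15"
    full_text: str      # cleaned body of the decision

def clean_text(raw: str) -> str:
    """Collapse whitespace and drop blank lines left over from scraped pages."""
    lines = [line.strip() for line in raw.splitlines()]
    return " ".join(line for line in lines if line)
```

Multiply that by millions of rulings, and the collection and cleaning problem becomes the real engineering work.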
Building a legal LLM is about access to the correct data. That became clear to us when CanLII sued Caseway. The issue was whether we had used their headnotes and other work products. We voluntarily turned over our database to CanLII, and they could not find any of the material they claimed was copyrightable.
But the lawsuit, in a strange way, confirmed something we’d suspected for a while: whoever builds a legal LLM first, one trained specifically on law, will fundamentally change the game.
That’s where everything is headed. Not search engines. Not legal chatbots. It’s legal LLMs.
Most people still don’t understand that general-purpose models like GPT, and the platforms built on them, such as LexisNexis (which, according to its website, uses ChatGPT), are capped. You can dress them up with prompts and wrappers, but at the core, they don’t understand law.
They don’t recognize the difference between persuasive authority and binding precedent. Unless you manually force them not to, they’ll treat a blog post the same way they treat a Supreme Court ruling. That’s not scalable. And it’s not reliable.
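Here is a rough sketch of what respecting that difference could look like in software: retrieved passages are ranked by the weight of the issuing authority, not just by how closely their text matches the question. The source types and weights below are mine, purely illustrative, and not any vendor’s actual scoring.

```python
# Illustrative weights only: binding authority counts for more than persuasive or informal sources.
AUTHORITY_WEIGHT = {
    "supreme_court": 1.0,
    "appellate_court": 0.8,
    "trial_court": 0.6,
    "tribunal": 0.5,
    "blog_post": 0.1,
}

def rank_passages(passages):
    """Each passage is a dict with 'text', 'source_type', and a 'similarity' score."""
    def weighted(p):
        return p["similarity"] * AUTHORITY_WEIGHT.get(p["source_type"], 0.2)
    return sorted(passages, key=weighted, reverse=True)
```

A general-purpose model has no such notion of authority unless someone bolts it on afterward.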
I am building the first legal LLM
With a legal LLM, you can do so much more. You can build software that drafts contracts with actual jurisdictional logic. You can create systems that compare clauses for both similarity and enforceability. You can offer the average person access to valuable legal information, not just vague summaries and disclaimers. That kind of model opens doors we haven’t even walked through yet.
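As one small example of the clause-comparison idea, here is a toy similarity check using nothing but Python’s standard library. A real system would layer jurisdictional and enforceability rules on top; this only shows the shape of the task, and the sample clauses are invented.

```python
from difflib import SequenceMatcher

def clause_similarity(clause_a: str, clause_b: str) -> float:
    """Crude textual similarity between two clauses, from 0.0 to 1.0."""
    return SequenceMatcher(None, clause_a.lower(), clause_b.lower()).ratio()

standard = "The parties shall resolve any dispute by binding arbitration in Ontario."
proposed = "Any dispute between the parties will be resolved by arbitration in Ontario."
print(f"Similarity: {clause_similarity(standard, proposed):.2f}")
```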
But here’s the part law firms need to hear: if your practice relies heavily on hourly research, repetitive drafting, or explaining basic legal principles, artificial intelligence will start eating into that work. Not because it’s perfect, but because it’s fast, good enough in many cases, and increasingly cheap. Clients aren’t dumb. They’re already asking about alternatives. And they’re not asking their lawyers. They’re asking Google and ChatGPT.
That doesn’t mean we’re heading for a world without lawyers (at least not for a few decades). It means we’re heading for a world where the best lawyers have superpowers, and everyone else risks being left behind.
Lawyers Should Be Using AI
Previously, you hired a junior to summarize 10 cases and pull relevant statutes. A machine can now do that same task in seconds. But interpretation, judgment, and strategy still belong to you.
Some lawyers will adapt. They’ll use this kind of AI software to move faster, make better arguments, and serve more clients at a lower cost. Others will dig in their heels. They’ll tell themselves nothing has changed until it has. I would argue that everything has already changed.
The legal profession doesn’t need to be afraid of AI. But it needs to stop pretending the technology isn’t real or won’t affect it, because it already does. Caseway is already here.
The courtroom isn’t going away. But the way we prepare, advise, argue, and win? That is being rewritten in code.
Alistair Vigier is the CEO of Caseway, a legal technology company in the AI space.
It’s like what happened in accounting—QuickBooks didn’t eliminate accountants, it just made them faster and forced them to level up. Same thing here. The lawyers who resist change will feel the squeeze. The ones who lean into it? They’ll be running leaner firms with happier clients and better margins.
Lawyers are in biiiiiiiigggggg trouble lol… They have been treating people like crap for so long.
As a (now retired) lawyer of 35 years’ experience, I fully support Alistair’s point that the key to a meaningful legal online research tool will be that it is grounded on a “legal model that understands how law is written, interpreted, and applied”. I’m looking forward to giving this a spin sometime!
Great! Let me know what you think.
I’m delighted to read this. I’ve long thought that LLMs need to be not only more industry specific, but even more entity specific. A law firm should be able to decide precisely what data an LLM should be trained on, even down to its own knowledge bank. I will be very interested to see how this develops. Good luck Alistair!