
Your Lawyers Are Already Using AI. The Question Is Whether Your Clients' Data Is Safe.
What Ontario law firms need to know about AI privacy risks — and the solution most IT vendors aren't talking about.
Picture this: it's a Tuesday afternoon and one of your associates is preparing for an examination for discovery. She has forty pages of witness statements to synthesize and two hours to do it. She opens ChatGPT, pastes in the documents, and asks for a summary. Three minutes later, she has exactly what she needs.
She feels productive. Resourceful, even.
What she doesn't realize — and what most managing partners don't either — is that those witness statements, with your client's name, the details of their case, and information they shared with your firm in confidence, just left your building. They're now sitting on a server operated by a company headquartered in San Francisco, governed by a terms of service agreement that very few lawyers have ever read, processed in ways that aren't fully disclosed, and potentially used in ways you never consented to.
This isn't a hypothetical risk. It's the default outcome of AI tools most of your team is already using.
The Problem Isn't That Your Team Uses AI. It's That You Probably Don't Know How They're Using It.
In 2023, Samsung allowed its engineers to use ChatGPT to help with coding tasks. Within twenty days, three separate incidents had occurred. Engineers had pasted proprietary source code into the tool. Internal meeting notes had been uploaded and summarized. Sensitive technical data had been shared with a system that can retain user inputs for further model training. Samsung moved quickly to restrict access — but the data was already gone, effectively in the hands of OpenAI with no meaningful way to retrieve it.
Now replace "source code" with a client's affidavit. Replace "meeting notes" with a litigation strategy memo. The mechanism is identical. The professional consequences, for a law firm, are considerably more severe.
And the uncomfortable truth is that this is already happening in small Ontario law firms. According to recent industry data, 79% of lawyers are using AI tools in their practice — but only 10% of firms have any policies governing that use. That gap isn't a technology problem. It's a professional responsibility problem that most managing partners haven't fully confronted yet, because the tools are quiet about what they do with your data, and the lawyers using them have no reason to think twice.
This is what's sometimes called "shadow AI" — not rogue behaviour, just well-intentioned people using tools that make their work easier, without any awareness that those tools carry real risk. A paralegal uploads a brief to get a faster summary. A partner dictates notes from a client call into a voice-to-text AI tool. A junior associate uses an AI grammar checker that retains document text. Nobody's being reckless. The risk isn't in the intent — it's in the infrastructure.
What Your Professional Obligations Actually Require
The Law Society of Ontario has been clear that its cybersecurity expectations for licensees are not optional. Lawyers have a professional duty to protect client information, which includes taking reasonable measures to prevent unauthorized access or disclosure. The standard is deliberately technology-neutral: it doesn't matter whether the exposure happens through a ransomware attack or through a paralegal's well-meaning use of an AI summarization tool. What matters is whether your firm took reasonable precautions.
That's where AI use without governance creates specific exposure. Under PIPEDA — Canada's federal private-sector privacy law — personal information can only be collected, used, or disclosed with meaningful consent, and it must be protected with safeguards appropriate to its sensitivity. If your firm is routinely passing client information through third-party AI platforms, there's a real question about whether that meets the standard. Does your client know their materials were processed by an AI system operated by an American company? Did they consent to that? Does your engagement letter address it?
These aren't rhetorical questions. They're the questions a regulator or plaintiff's lawyer would ask after something goes wrong.
And things do go wrong. In 2024, a Florida law firm faced a class action lawsuit after a data breach exposed client information. The firm settled for US$8.5 million. That number is striking not because it's exceptional, but because it increasingly isn't. As AI adoption accelerates and the tools become more capable of handling more sensitive work, the exposure grows proportionally. Clients are also beginning to ask these questions proactively — many in-house legal teams now assess the cybersecurity posture of external counsel before engaging them. Your AI governance — or lack of it — is becoming part of your professional profile.
There's a Better Way to Do This — and Most IT Vendors Aren't Telling You About It
The conventional advice when this topic comes up is either to ban AI tools entirely or to pay for enterprise versions of the same commercial products with slightly better privacy terms. Both options are unsatisfying. Banning AI doesn't work — it just drives usage underground and removes any chance of governance. And enterprise licensing agreements, while better than consumer versions, still involve third-party servers, third-party data handling policies, and a fundamental dependency on a vendor's continued commitment to your privacy.
There's a third option that doesn't get nearly enough attention: open source AI models deployed within your own environment.
The concept is simpler than it sounds. Open source AI models can be downloaded and run on your own infrastructure — a private server, or a managed cloud environment that you control — rather than accessed through someone else's platform. The model runs inside your network. Your data never leaves. There's no vendor on the other end receiving your inputs, no terms of service governing what happens to your clients' information, no training pipeline that might incorporate what you've shared.
Think of it the way you'd think about file storage. You could store client documents on a public Dropbox account — convenient, but your files are on someone else's servers, subject to their policies. Or you could store them on a secured server your firm controls, where access is governed entirely by your own policies. An open source AI deployment works the same way: the capability without the third-party dependency.
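For readers who want to see what that means concretely, here is a minimal sketch of a document summary request sent to a model running inside the firm's own network instead of to a commercial platform. It assumes the firm's IT provider has set up a local model server (Ollama is one common open source option; its default local address and a generic model name appear here purely for illustration), and the details of any real deployment would differ.

    # A minimal sketch, assuming an open source model is served locally by Ollama
    # on its default port. The model name, prompt, and document text are placeholders.
    import requests

    document_text = "Text of the witness statement to be summarized."

    reply = requests.post(
        "http://localhost:11434/api/generate",  # a server inside your own network
        json={
            "model": "llama3",  # an open source model installed on that server
            "prompt": "Summarize the key points of this witness statement:\n\n" + document_text,
            "stream": False,
        },
        timeout=120,
    )

    print(reply.json()["response"])  # the summary comes back without the document ever leaving the firm

The point isn't the code itself. It's that the address on the first line of the request belongs to your firm, and that is the whole difference between this and pasting the same document into a public chatbot.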
The tradeoff worth being honest about is capability. Open source models are generally somewhat less capable than the top commercial tools. For highly complex reasoning tasks, that gap is real. But for the work most law firm staff are actually doing with AI — summarizing documents, drafting routine correspondence, organizing research notes, preparing first drafts — the capability is more than sufficient. And for that work, privacy is non-negotiable. The tradeoff is a reasonable one.
What This Actually Looks Like for a Firm Your Size
A 5–10 person law firm can realistically deploy this. It's not a months-long infrastructure project, and it doesn't require your lawyers to learn anything new. From their perspective, they log into a tool that looks and behaves like the AI interfaces they're already using. The difference is what's happening underneath — and that's your IT provider's domain, not yours.
Your IT partner handles the deployment, the security configuration, and the ongoing maintenance. Your lawyers get a private, governed AI environment that they can use without you lying awake wondering what's happening to your clients' data. The implementation timeline, with the right provider, is measured in weeks, not months.
The question is whether your current IT provider is thinking about this at all. Most aren't — not because they're incompetent, but because most small firm IT support is reactive by nature. It focuses on keeping existing systems running, not on anticipating the new categories of risk that emerge as technology evolves. An open source AI deployment isn't something that happens by default. It requires a provider who's paying attention to where the risks are heading and getting ahead of them on your behalf.
That's the right question to ask your IT partner: are they proactively raising this with you, or are they waiting for you to figure it out on your own? Are they familiar with private AI deployment? Do they have a view on how your current setup handles the data your team is putting into AI tools today? The answers will tell you a great deal about whether your IT relationship is working the way it should.
The Firms That Get Ahead of This Will Have a Real Advantage
There's a competitive dimension to this that's easy to miss when you're focused on risk avoidance. The managing partners who establish clear AI governance in the next twelve months — who can tell prospective clients how their data is handled, who can demonstrate that their firm uses AI responsibly — will have something most of their competitors don't: a credible answer to a question that sophisticated clients are increasingly asking.
The alternative is to wait. To assume the exposure is theoretical until it isn't. To address the problem after an incident, when the costs — financial, reputational, regulatory — are vastly higher than they would have been if you'd acted earlier.
AI is not going away. Your team is already using it. The only real question is whether your firm has a thoughtful answer for what happens to your clients' data when they do.
If you're not sure, that's worth a conversation — not with your lawyers, but with your IT provider. And if that conversation hasn't happened yet, it might be time to find a provider who's already thought about it.
Boximity works with small professional services firms across Ontario to build IT environments that are secure, proactive, and built around how you actually work. If you'd like to talk about how your firm is handling AI, we're happy to start there.