legal-tech · AI · compliance

AI Compliance for UK Law Firms: SRA, FCA, and What You Need to Know

UK law firms adopting AI tools need to navigate SRA professional conduct rules, FCA requirements for financial services work, and data protection obligations. A practical guide.

Evgeny Smirnov

The UK doesn’t have an AI-specific law equivalent to the EU AI Act (yet — the government’s pro-innovation approach may evolve). But UK law firms using AI tools are already subject to a web of existing obligations that apply to AI adoption: the Solicitors Regulation Authority (SRA) Code of Conduct, the Financial Conduct Authority (FCA) rules for firms doing regulated financial work, UK GDPR and the Data Protection Act 2018, and professional indemnity considerations.

We’re a UK-based development team (4xxi Software Ltd) building AI tools for legal and financial clients globally. We’ve had to work through these requirements with our clients, and the practical reality is more nuanced than most guides suggest.

SRA obligations that apply to AI use

The SRA doesn’t have specific AI rules. Instead, existing principles in the SRA Code of Conduct for Solicitors and the SRA Code of Conduct for Firms apply to how AI is used in practice. The key ones:

Competence (paras 3.2–3.3 of the Code for Solicitors). You must deliver a competent service and maintain competence in your areas of practice. If you’re using AI for legal research, drafting, or advice, you need to understand the tool’s capabilities and limitations well enough to supervise its output. Blindly submitting AI-generated text to a court without checking it would likely breach this obligation. The hallucination cases from US courts — lawyers fined for submitting AI-fabricated citations — illustrate exactly the kind of failure the SRA would view seriously.

Service to clients (Principle 7). You must act in the best interests of each client. This means considering whether AI tools are appropriate for a particular matter, whether their use could compromise quality or confidentiality, and whether the client has consented to AI being used on their work. The SRA hasn’t mandated client disclosure of AI use, but transparency with clients about how their work is handled is consistent with this principle.

Confidentiality (paras 6.3–6.5). You must keep client affairs confidential. This has direct implications for AI tool selection. If your AI tool sends client data to a third-party cloud service, you need to be confident that the provider’s data handling meets your confidentiality obligations. Cloud-based AI tools that use client data for model training are particularly problematic — check terms of service carefully.

Supervision (paras 2.1 and 4.4 of the Code for Firms). Authorised bodies must ensure effective governance and an effective system for supervising client matters. If junior associates or paralegals use AI tools, someone needs to supervise the output. AI doesn’t reduce the supervision obligation — if anything, it increases it, because AI errors can be subtler than human ones.

FCA considerations for firms doing financial work

Many UK law firms handle regulated financial work — financial services litigation, regulatory advice, anti-money laundering compliance, investment fund structuring. If you’re in this space and using AI, the FCA’s expectations add another layer.

The FCA has been relatively clear that firms using AI for regulated activities remain responsible for the outputs. Their approach is technology-neutral: the same standards of accuracy, fairness, and consumer protection apply regardless of whether a human or an AI performs the task.

For AI tools used in financial crime compliance (AML/KYC), the FCA expects firms to understand the tool’s methodology, validate its effectiveness, and maintain human oversight. Black-box AI that flags or clears transactions without explainable reasoning is unlikely to satisfy regulatory expectations.

If your firm uses AI for client-facing financial advice — even indirectly, such as AI-assisted research that feeds into advisory outputs — the conduct rules around suitability and fair dealing apply to the final output regardless of how it was produced.

Data protection: UK GDPR implications

UK GDPR applies to any AI tool processing personal data — which in a law firm context means almost everything. Key requirements:

Lawful basis. If your AI tool processes client personal data (names, case details, financial information), you need a lawful basis. Performance of a contract or legitimate interests are the most likely bases; if you rely on legitimate interests, document the reasoning in a legitimate interests assessment (LIA).

Data minimisation. Only process the personal data necessary for the specific task. If your AI tool ingests entire client files when it only needs a specific document, that’s potentially excessive.

Data protection impact assessment (DPIA). If your AI processing is likely to result in high risk to individuals’ rights — which legal AI often does, given the sensitivity of legal matters — you must conduct a DPIA before deployment (UK GDPR Article 35).

International transfers. If your AI tool processes data outside the UK (which most cloud-based tools do), you need a valid transfer mechanism. Transfers to the EU are covered by the UK’s adequacy regulations, but transfers to the US need additional safeguards: the UK–US Data Bridge where the provider is certified, or otherwise the ICO’s International Data Transfer Agreement (IDTA) or the UK Addendum to the EU standard contractual clauses.

Practical steps for UK law firms

Here’s what we recommend to our UK clients:

Conduct an AI inventory. Document every AI tool in use across the firm — including personal use of ChatGPT by individual lawyers. You can’t manage risk you don’t know about.

Develop an AI use policy. Define what AI tools are approved, what they can and can’t be used for, what supervision is required, and how outputs must be checked. This doesn’t need to be complex — a clear one-page policy is better than a comprehensive document nobody reads.

Review vendor contracts. For every AI tool, check: where does client data go? Is it used for model training? Who has access? What are the data retention terms? Can you get an adequate data processing agreement?

Train your team. The SRA’s competence obligation extends to understanding the tools you use. Ensure everyone using AI tools understands what they can and can’t do, how to check outputs, and when to escalate.

Consider client communication. While not currently required by the SRA, informing clients about AI use in their matters builds trust and may become expected practice. Some firms include AI disclosure in their engagement letters.

“The UK regulatory approach to legal AI is principles-based rather than prescriptive, which gives firms flexibility but also creates ambiguity. Our advice is always the same: if you wouldn’t be comfortable explaining your AI use to the SRA in a compliance audit, reconsider the approach.”

— Evgeny Smirnov, CEO and Lead Architect

Looking ahead

The UK government’s approach to AI regulation is still developing. The EU AI Act, most of whose obligations apply from August 2026, will influence UK practice even without direct application — firms with EU clients will need to consider it, and the SRA is watching how the Act plays out. The Law Society has published guidance on AI use, and more specific SRA guidance is likely as adoption increases.

The firms that invest in AI governance now — not just in tools, but in policies, training, and oversight — will be better positioned regardless of how regulation evolves.


Need help ensuring your AI tools meet UK regulatory requirements? Contact us — we build legal AI with SRA, FCA, and data protection compliance in mind from day one.