6 May 2026

Artificial Intelligence in UK financial services: A regulatory map and practical considerations

To The Point
(8 min read)

UK regulators are clear that no AI‑specific financial services rules are coming in the near term; instead, AI must be governed under existing frameworks such as the Consumer Duty, SMCR, model risk management and operational resilience. Supervisory attention has intensified following Parliamentary scrutiny, with banks already subject to active enforcement under the PRA’s SS1/23 model risk regime and FCA‑regulated firms expected to meet equivalent standards through existing governance rules, despite the absence of explicit guidance. Key friction points remain around senior manager accountability, model risk for advanced and generative AI, third‑party and cloud concentration, and whether AI risks should be capitalised through ICAAP. The clear message is that firms should not wait for bespoke AI regulation: regulators expect AI to be governed now, with documented accountability, risk appetite, explainability and resilience testing already in place.

Executive summary

The FCA, PRA, Bank of England, HM Treasury and Parliament have each engaged with artificial intelligence in financial services over the past 18 months. The resulting outputs range from supervisory statements with immediate practical consequences, to Parliamentary inquiries and reports that apply pressure without creating binding rules. This briefing draws that landscape together. Its purpose is practical: to explain why AI has recently been back in focus, what already applies to firms today, and where the material gaps and frictions remain for both solo and dual-regulated firms. 

This article synthesises our ongoing engagement with regulators and industry roundtables on AI, and our advisory work supporting firms on the key gaps, tensions and emerging regulatory expectations. We would be pleased to discuss how these issues affect your organisation in practice.

Why AI is in focus

  • The trigger. AI has recently been prominent in regulatory and political debate following the Treasury Select Committee’s inquiry into AI in financial services. That inquiry crystallised a tension that firms are already experiencing in practice: whether AI requires bespoke regulation, or whether the UK’s existing framework, principles‑based in architecture but anchored in specific regimes addressing conduct, model risk and operational resilience, already provides adequate coverage.
  • The regulators’ position. The regulators are aligned in their core message: they do not propose to introduce AI‑specific rules for financial services. Instead, they consider that existing frameworks — including the Consumer Duty, the Senior Managers and Certification Regime (SMCR), SS1/23 on model risk management, the operational resilience regime and the outsourcing and third‑party rules — already apply to AI use cases and are capable of capturing AI‑related conduct and risk.
  • What is changing — and what is not? While the regulatory architecture itself is not being rewritten, regulators have acknowledged that firms need greater clarity on how those existing rules apply in practice to AI, particularly more advanced and generative models. The FCA has therefore committed to producing best‑ and poor‑practice guidance, rather than new rules, with further supervisory material expected through 2026.
  • The dual‑regulated overlay. Banks and other dual‑regulated firms face the additional complexity of navigating two regulators whose coordination on AI is improving, but not yet fully aligned across supervision, model risk and operational resilience.
  • Where uncertainty remains for firms. The UK’s technology‑neutral regime was not designed with generative AI in mind, and parts of its application remain untested. The most acute areas of uncertainty are:
    • Model risk: SS1/23 pre‑dates widespread generative AI use and does not address all associated risks explicitly.
    • SMCR accountability: there is no prescribed Senior Manager responsibility for AI, leaving firms to determine governance and personal accountability.
    • Third‑party concentration: reliance on a small number of cloud and AI providers continues to grow, but no Critical Third Parties have yet been designated.
  • The practical takeaway. Firms should not wait for further clarification. Regulators expect AI to be governed now, under existing rules. In practice, this means mapping AI use cases against current regulatory obligations, assigning clear SMCR accountability, testing AI‑related third‑party dependencies, and documenting governance decisions so they can be evidenced to supervisors.


Next steps

If you have a query that you would like to discuss, please get in touch with one of our specialists.
