6 February 2026

Can senior managers be liable under the UK regulatory regime for decisions made by AI?

To The Point
(5 min read)

As AI tools are rapidly rolled out across UK financial services, the extent to which firms’ senior managers may be held liable under the existing regulatory regime for the approval and performance of those tools has become a live issue.  In the absence of a regulatory regime adapted specifically for AI, there is uncertainty as to where responsibility lies for compliance failures and consumer harm that result from steps taken by an AI tool, rather than from the actions of the human beings the regime was originally designed to regulate.  In this article, David Pygott (Partner) and Ross McCartney (Legal Director) look at the UK Treasury Select Committee’s recent review of the area and consider some of the key issues a senior manager should bear in mind.

On 20 January 2026, the House of Commons Treasury Select Committee (TSC) published a report on artificial intelligence in financial services following a review of the opportunities and risks posed by AI for the UK financial services sector.  The TSC found that there has been significant adoption of AI technologies by UK financial services firms, particularly among insurers and international banks.

One of the key themes in the TSC’s report is how the FCA and PRA are to use the UK’s existing principles-based regulation to supervise financial services firms’ use of AI, given that (unlike some other jurisdictions) the UK does not currently have AI-specific regulation.

The TSC’s key conclusions were that: 

  • the FCA and PRA are not currently doing enough to manage the risks presented by AI;
  • the regulators should conduct AI-specific stress testing and designate the major AI and cloud providers under the Critical Third Parties Regime; and
  • by the end of 2026, the FCA should publish comprehensive, practical guidance for firms on (a) the application of existing consumer protection rules to their use of AI, and (b) accountability and the level of assurance expected from senior managers under the Senior Managers and Certification Regime (SMCR) for harm caused through the use of AI.

Ahead of any new guidance on the subject from the regulators, this article looks at the issue of individual accountability in relation to AI, and in particular some of the challenges in applying the current SMCR:

  • A specific SMF with responsibility for AI systems?
  • What are ‘reasonable steps’ in relation to AI tools?
  • What if the AI tool behaves unreasonably?
  • Governance and role of the Board

We welcome the TSC’s recommendation that the regulators should publish additional guidance for firms in this area.  We also note that on 27 January 2026 the FCA launched a review into the long-term impact of AI on retail financial services, with input requested by 24 February 2026 ahead of a public report later this year.  There is clearly more to come from the regulators.

In the meantime, for senior managers currently wrestling with how the SMCR applies to AI-driven business improvement, we would suggest it is key to focus on issues such as:

  • understanding and mapping out where AI technology is actually being used in their business area;
  • reviewing what the firm’s SMCR implementation says about the allocation of responsibility (for example, in management responsibility maps);
  • assessing the firm’s use of ‘guard rails’, both in the approval of AI tools and in their ongoing performance; and
  • responding rapidly and effectively to the emergence of any problems.

Next steps

If you would like to discuss this further, please get in touch with the authors.
