On 20 January 2026, the House of Commons Treasury Select Committee (TSC) published a report on artificial intelligence in financial services following a review of the opportunities and risks posed by AI for the UK financial services sector. The TSC found that there has been significant adoption of AI technologies by UK financial services firms, particularly among insurers and international banks.
One of the key themes considered in the TSC’s report is how the FCA and PRA are to use the UK’s existing principles-based regulation to supervise financial services firms’ use of AI, in circumstances where (unlike some other parts of the world) the UK does not currently have AI-specific regulations.
The TSC’s key conclusions were that:
- the FCA and PRA are not currently doing enough to manage the risks presented by AI;
- the regulators should conduct AI-specific stress testing and designate the major AI and cloud providers under the Critical Third Parties Regime; and
- by the end of 2026, the FCA should publish comprehensive, practical guidance for firms on (a) the application of existing consumer protection rules to their use of AI, and (b) accountability and the level of assurance expected from senior managers under the Senior Managers and Certification Regime (SMCR) for harm caused through the use of AI.
Ahead of any new guidance on the subject from the regulators, this article looks at the issue of individual accountability in relation to AI, and in particular some of the challenges in applying the current SMCR.
A specific SMF with responsibility for AI systems?
Under the current SMCR, the FCA and PRA can take action against a senior manager for:
- failing to take such steps as they may reasonably be expected to take to prevent regulatory breaches occurring or continuing in their business area (which is defined by their Statement of Responsibility); and
- failing to comply with the FCA/PRA Code of Conduct, which among other matters requires a senior manager to take reasonable steps to ensure that the business of the firm for which they are responsible is: (i) controlled effectively; and (ii) complies with the ‘relevant requirements and standards of the regulatory system’. Further, where the FCA’s Consumer Duty applies, the Code requires senior managers (like other staff) to ‘act to deliver good outcomes for retail customers’.
One of the first key questions that arises when seeking to apply the existing SMCR to AI in financial services is identifying which senior managers within a firm are, or may be, responsible for it. Numerous individuals may be involved, not only in the approval and roll-out of such tools, but also in the oversight of their ongoing performance.
During the oral evidence phase of the TSC’s work, the FCA and PRA maintained their position that a dedicated new SMF responsible for AI within firms is not required. In practice, for many firms, that is likely to leave the focus on the SMF24 (Chief Operations function) given that technology systems are normally under their responsibility. Separately, the SMF4 (Chief Risk function) normally has responsibility for overall management of the risk controls of a firm, including the setting and managing of its risk exposures.
In principle, however, and especially where AI tools have been deployed widely across a firm, each senior manager will be responsible for the discharge of obligations within their area of responsibility in a way that could be affected by the performance (or failure) of AI tools.
For example, a senior manager with responsibility for ensuring the proper handling of consumers’ applications for credit products, of consumers’ insurance claims, or of regulated complaints could have responsibility for customer outcomes which AI tools are increasingly being used to deliver.
As with any responsibility shared across numerous business areas, complexity can arise over the allocation and division of specific responsibilities. In practice, the widespread deployment of AI tools across a financial services firm could engage the duty of responsibility of numerous senior managers. In our view, it is unlikely that a single senior manager could point to those with specific responsibility for IT systems and overall risk management in order to exclude their own responsibility. It is accordingly important for each senior manager to understand that they may be held accountable for failing to take ‘reasonable steps’ in relation to the use of AI tools in their own business area.
What are ‘reasonable steps’ in relation to AI tools?
The uncertainty over exactly what standard of behaviour is required of a senior manager is at the heart of the issue and, we believe, of the TSC’s recommendation that the FCA and PRA should publish further guidance. The TSC appears to have received evidence that a lack of clarity on this point, and concern over individual liability risks, may be inhibiting the adoption of AI tools in some areas.
Understanding what ‘reasonable steps’ are in relation to AI tools, and what the ‘relevant requirements and standards of the regulatory system’ are, is challenging in circumstances where (as now) there is only generic guidance on the requirements of the SMCR, coupled with a history of regulatory enforcement decisions taken mainly in a pre-AI era, but no AI-specific guidance.
In our view, existing FCA and PRA guidance and enforcement cases do provide part of the answer. Past enforcement cases emphasise, for example, the role of governance in the roll-out and migration of IT systems. Reading across those decisions, the existing SMCR surely requires Boards, and senior managers individually within their own business areas, to give careful consideration to the use case for new AI tools, to understand their key dependencies, to assess carefully how such tools are prepared and tested, to take care in how they are approved, and to evaluate how well they perform in practice. These assessments should tie in with the firm’s operational resilience assessments and its reliance on critical third parties. Management information, data and ongoing monitoring will all be essential in that regard, as they would be in relation to the deployment and roll-out of older, pre-AI systems.
What if the AI tool behaves unreasonably?
A critical issue, however, is that the existing SMCR does not provide a full answer: in our view, this is because many new AI tools are by their nature different from the tools that have gone before. They are increasingly capable of autonomous decision making.
Some AI systems are already being deployed with the objective not of assisting human decision making, but of replacing it altogether. This may be to improve speed or customer service quality, to save cost, or for other reasons. However, when the steps in relation to a particular consumer are taken by software rather than by a human, the application of the current regime (designed with humans in mind) becomes far less clear.
Some challenges include, for example:
- Who is responsible for what in the event of a system failure or a poor outcome for a retail customer? What is the role, for example, of software engineers outside the regulated business? Or of operational staff within the business without full knowledge of the operation of the technology? Or of other senior managers working outside the business area where the problem occurred, but who had a deeper understanding of the relevant technology? Or of a senior manager who approved the use of a tool but did not have responsibility for its ongoing performance?
- What standard of behaviour is to be expected from a human senior manager who may have had only residual involvement in the chain of events leading up to the issue? In its guidance on general factors for assessing compliance with its conduct rules (COCON 3.1), for example, the FCA emphasises the concept of “personal culpability”, and in particular “whether the conduct was deliberate” and “whether the standard of conduct was below that which would be reasonable in the circumstances”. We see these issues as critical. The question of what (human) conduct is “reasonable” in circumstances where an AI tool has behaved “unreasonably” could well be highly controversial.
Take the case, for example, of a senior manager who takes a “deliberate” decision to implement an AI tool with the best of intentions, on the basis of evidence that it tested well and materially improved the speed of decision making for consumers during the pilot phase. The senior manager receives advice before implementing the system that there is a risk of it performing unexpectedly in some cases due to inherent limitations in the technology, which simply reflect the state of its development at the time and cannot be overcome. Mitigation measures are put in place but are not perfect. The technology goes on to provide materially better outcomes for many customers than humans would have done. Average processing times to reach a customer outcome fall from a month to two days. In 95% of customer cases the substantive customer outcome is the same as it otherwise would have been, but much faster. Unfortunately, in the other 5% the AI tool fails badly, leading to serious consumer detriment. We believe there would be strong arguments that in this situation the senior manager is not “personally culpable” for having approved use of such a tool. However, under the regime in its current form, the matter remains arguable.
Whilst a Board or senior manager might mitigate personal risks by taking additional training and/or obtaining expert advice, these steps also raise issues of cost and timing. In some cases, particularly where the AI tool is proprietary or a ‘black box’, obtaining advice or carrying out detailed due diligence on the technology may not be realistic at all.
Some of the evidence before the TSC goes to this issue. In particular, the TSC heard evidence that, in reality, the complex but opaque nature of some AI tools meant that some senior managers struggled to assess the risks associated with them, particularly in terms of harm to consumers.
Governance and role of the Board
If they have not done so already, UK regulated firms considering the use of AI tools will need to have carefully assessed and updated their corporate governance frameworks for the approval and oversight of those tools, and to have clearly charted how escalation pathways should operate in practice, both for approvals and for responding to issues as they emerge. In our view, this could include giving consideration to where in the firm’s SMCR implementation (particularly its management responsibility maps) responsibility for AI-related issues is intended to lie.
Staff across a firm’s hierarchy will need to assess the materiality of AI tools and ensure that those with the greatest potential impact, or the most novel approaches, are escalated for consideration by senior management.
The Board needs to be able to challenge what it is being told, so that proposals to deploy the AI tools with the most material impact can be properly scrutinised. Given the likelihood that the regulators will conduct AI-specific stress testing of AI-driven market shocks, Boards can get ahead by carrying out their own internal stress testing to help flush out problems and identify solutions.
We welcome the TSC’s recommendation that the regulators should publish additional guidance for firms in this area. We also note that on 27 January 2026 the FCA launched a review into the long-term impact of AI on retail financial services, requesting input by 24 February 2026, with a public report to follow later this year. There is clearly more to come from the regulators in this area.
In the meantime, for senior managers currently wrestling with the potential impact of the SMCR on business improvement through AI, we would suggest it is key to focus on issues such as:
- understanding / mapping out where AI technology is actually being used in their business area;
- what the firm’s SMCR implementation says about the allocation of responsibility (for example in management responsibility maps);
- the firm’s use of ‘guard rails’ in the approval of AI tools and the monitoring of their performance; and
- responding rapidly and effectively to the emergence of any problems.