Agentic AI refers to systems that are capable of perceiving their environment, setting or interpreting goals, making decisions, and executing actions with limited human intervention. This represents a shift from traditional rule-based and generative AI systems, which simply react to prompts.
In the payments context, this distinction is critical. An agentic AI system might not merely recommend a payment action but initiate, schedule, route, or block transactions autonomously. There are several plausible use cases for agentic AI within the UK payments ecosystem. Most notable would be the integration of payments into agentic commerce, allowing AI agents to scan for goods or services, select an appropriate option within the parameters provided and, crucially, execute the payment transaction. An AI agent could also dynamically select the most appropriate payment rail, such as Faster Payments, CHAPS, or card-based payments, based on transaction value, urgency, fraud risk, or cost considerations.
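To make the rail-selection point concrete, the routing decision such an agent might take could look something like the sketch below. The thresholds, rail names and risk scores are purely illustrative assumptions, not actual scheme rules or limits.

```python
from dataclasses import dataclass

# Illustrative only: the threshold below is an assumption for this sketch,
# not an actual Faster Payments scheme limit.
FASTER_PAYMENTS_LIMIT_GBP = 1_000_000

@dataclass
class PaymentRequest:
    amount_gbp: float
    urgent: bool
    fraud_risk_score: float  # 0.0 (low) to 1.0 (high), from an upstream model

def select_rail(request: PaymentRequest) -> str:
    """Pick a payment rail based on value, urgency and risk (hypothetical policy)."""
    if request.fraud_risk_score > 0.8:
        return "hold-for-review"        # too risky to route autonomously
    if request.amount_gbp > FASTER_PAYMENTS_LIMIT_GBP or request.urgent:
        return "CHAPS"                  # high-value or same-day settlement priority
    if request.amount_gbp < 100 and request.fraud_risk_score < 0.2:
        return "card"                   # small, low-risk retail purchase
    return "faster-payments"            # default low-cost rail

print(select_rail(PaymentRequest(amount_gbp=250.0, urgent=False, fraud_risk_score=0.1)))
# -> faster-payments
```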
In the absence of specific regulations on agentic payments, the deployment of agentic AI in payments must be assessed against the existing UK regulatory framework. Below is a non-exhaustive list of the key regulatory hurdles that will need to be overcome for agentic AI to be used confidently for payments:
Regulatory Status & Licensing
Perhaps the most crucial question is whether the AI systems or their operators need payment-specific licensing. Systems which simply store or pass on payment instrument details to merchants may not require any authorisation, but may need to comply with data protection requirements. On the other hand, systems that can initiate payments on behalf of payers, or that have an integrated wallet holding funds, may be viewed as performing a regulated financial service. If the regulator determines that the AI system is carrying out a regulated financial service, the question then becomes who exactly requires the licence: the platform provider, the operator, or another institution involved in the structure.
Consent
Under the current UK regulations, there are rules regarding authorised and unauthorised transactions. Payment Service Providers (PSPs) are liable for reimbursing certain unauthorised transactions, whereas customers are responsible for keeping their security credentials safe. Strong Customer Authentication (SCA) is required by legislation for certain payment transactions; it verifies a customer's identity using two independent factors drawn from something known, something possessed and something inherent, such as a PIN and a physical card, or a phone and facial recognition. The idea is that a PSP should be able to validate that it is its customer providing the instruction and consent for a payment transaction. Integrating with an agentic AI could complicate these requirements. As these systems become more autonomous, it will become more difficult for PSPs to demonstrate that a transaction was appropriately authorised by or on behalf of the customer. Moreover, SCA becomes impossible to perform in a world where transactions take place autonomously, without direct payer interaction.
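The two-factor rule itself is mechanically simple, which is precisely what makes the agentic case awkward: an autonomous agent cannot present a possession or inherence factor on the customer's behalf mid-transaction. A minimal sketch of the check follows, with the factor taxonomy taken from the regulation and the data model our own assumption.

```python
from enum import Enum

class FactorCategory(Enum):
    KNOWLEDGE = "something known"       # e.g. a PIN or password
    POSSESSION = "something possessed"  # e.g. a registered phone or card
    INHERENCE = "something inherent"    # e.g. a fingerprint or face scan

def satisfies_sca(presented_factors: list[FactorCategory]) -> bool:
    """SCA requires at least two factors from independent categories."""
    return len(set(presented_factors)) >= 2

# A PIN plus a physical card passes; two knowledge factors do not.
assert satisfies_sca([FactorCategory.KNOWLEDGE, FactorCategory.POSSESSION])
assert not satisfies_sca([FactorCategory.KNOWLEDGE, FactorCategory.KNOWLEDGE])
```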
While SCA remains a regulatory requirement that payment services firms must follow for now, many firms have more sophisticated fraud detection techniques. AI can also be used to identify fraud through behavioural biometrics and profiling, transaction monitoring and other techniques. Whilst this may help with the detection of fraudulent transactions, it does not solve the consent and authorisation conundrum.
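By way of illustration only, the simplest form of transaction monitoring flags payments that deviate sharply from a customer's spending history; real systems combine many such signals. The statistics and threshold below are hypothetical.

```python
import statistics

def is_anomalous(history_gbp: list[float], amount_gbp: float,
                 z_threshold: float = 3.0) -> bool:
    """Flag a payment whose value sits far outside the customer's usual range."""
    if len(history_gbp) < 2:
        return False  # not enough data to profile the customer
    mean = statistics.mean(history_gbp)
    stdev = statistics.stdev(history_gbp)
    if stdev == 0:
        return amount_gbp != mean
    return abs(amount_gbp - mean) / stdev > z_threshold

print(is_anomalous([20.0, 35.0, 25.0, 30.0], 4_000.0))  # -> True
```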
Unauthorised transactions
The consequence of not being able to validate whether a transaction has been duly authorised by or on behalf of a customer is increased liability risk for PSPs. Current frameworks require PSPs to refund unauthorised transactions unless the customer's gross negligence with security requirements or intentional deception is proven, but this was designed around human behaviour, not autonomous agents. These existing liability and reimbursement regimes will likely concern PSPs looking to integrate with agentic AI or to allow their customers to transact using their payment instruments.
Consumer Duty
PSPs have obligations under the Consumer Duty to deliver good outcomes for retail customers, and are expected to act in good faith, avoid causing foreseeable harm, and enable and support customers to pursue their financial objectives. Further requirements apply where customers are considered vulnerable. PSPs will need to ensure that a customer's use of agentic AI to make payments meets their Consumer Duty obligations, for example by considering whether such use could cause foreseeable harm or disproportionately affect vulnerable customers. On the other hand, agentic AI could allow some customers to pursue their financial objectives where they otherwise may not be able to. These considerations will need to be weighed up when connecting with these systems. To the extent the agent is itself a PSP, it will have its own obligations and will need to ensure these are considered carefully.
Governance & Risk
If the agentic AI is a PSP or an outsourced service provider, there will need to be governance and control procedures in place. These may include human-in-the-loop or human-on-the-loop oversight, where autonomy is constrained based on transaction value or risk profile. Clear audit trails and logging of AI decisions will be essential for regulatory scrutiny. Ultimately, firms cannot delegate regulatory responsibility to technology; they must be able to demonstrate effective oversight and control.
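One way such controls might be wired together is sketched below: a hypothetical policy gate that escalates high-value or high-risk instructions to a human and writes an audit record either way. All names and thresholds are our own assumptions, not a prescribed design.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("agent.audit")

# Hypothetical autonomy limits: above these, a human must approve.
MAX_AUTONOMOUS_VALUE_GBP = 500.0
MAX_AUTONOMOUS_RISK = 0.5

def dispatch(amount_gbp: float, risk_score: float, rationale: str) -> str:
    """Execute autonomously only within limits; otherwise escalate to a human."""
    decision = (
        "execute"
        if amount_gbp <= MAX_AUTONOMOUS_VALUE_GBP and risk_score <= MAX_AUTONOMOUS_RISK
        else "escalate-to-human"
    )
    # Every AI decision is logged with its inputs and rationale for later scrutiny.
    audit_log.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "amount_gbp": amount_gbp,
        "risk_score": risk_score,
        "rationale": rationale,
        "decision": decision,
    }))
    return decision

dispatch(120.0, 0.1, "monthly utility bill, matches standing pattern")  # -> execute
dispatch(2_500.0, 0.3, "new payee, one-off purchase")                   # -> escalate-to-human
```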
In addition, current accountability frameworks such as the Payment Services Individuals (PSD Individuals) regime assign personal responsibility to certain individuals within a PSP. Where a PSD Individual is responsible for agentic AI payments (either because the agent is part of the PSP or because the PSP allows its payment instruments to be used through an agent), that individual will need to have an appropriate level of control over the systems or their use, which seemingly runs counter to the autonomous nature of these models.
The UK market is in a strong position to adopt agentic AI systems for payments due to its current Open Banking ecosystem. Payment Initiation Service Providers (PISPs) occupy a central position in the UK’s Open Banking ecosystem, operating under the Payment Services Regulations 2017 and supervised by the Financial Conduct Authority (FCA). Their core function is to initiate payments on behalf of customers, with the customer’s explicit consent, directly from their bank account. Agentic AI has the potential to significantly expand the functional role of PISPs. An AI-enabled PISP might interpret ongoing customer intent, manage recurring or conditional payments, and optimise payment timing or routing without requiring explicit instruction for each transaction.
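One hypothetical way to keep such an agent within the boundary of the customer's consent is to check every initiation against an explicit, customer-granted mandate, as in the sketch below. The mandate fields are invented for illustration and are not drawn from the Payment Services Regulations 2017.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ConsentMandate:
    """Customer-granted boundaries within which the agent may initiate payments."""
    max_per_payment_gbp: float
    max_total_gbp: float
    allowed_payees: frozenset[str]
    expires: date
    spent_gbp: float = 0.0

    def permits(self, amount_gbp: float, payee: str, today: date) -> bool:
        return (
            today <= self.expires
            and payee in self.allowed_payees
            and amount_gbp <= self.max_per_payment_gbp
            and self.spent_gbp + amount_gbp <= self.max_total_gbp
        )

mandate = ConsentMandate(
    max_per_payment_gbp=100.0,
    max_total_gbp=400.0,
    allowed_payees=frozenset({"energy-supplier", "grocer"}),
    expires=date(2026, 6, 30),
)
print(mandate.permits(60.0, "grocer", date(2026, 1, 15)))            # -> True
print(mandate.permits(60.0, "unknown-merchant", date(2026, 1, 15)))  # -> False
```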
Key questions arise around who is deemed to have “initiated” a payment when an AI agent acts autonomously, and how liability should be allocated if an error or harm occurs. As regulated entities, PISPs would remain responsible for the actions of their systems, even where those systems operate with a high degree of autonomy. This concentrates both opportunity and risk within the PISP model.
There are certain aspects of the current regulatory framework discussed here which could pose significant barriers to the adoption of agentic AI for payments, at least for the more autonomous systems. While the FCA has historically looked to require new technologies to integrate with the existing regulatory framework, a new payment services regulatory regime is expected to be introduced, with consultations starting this year. As such, there is an opportunity to reassess some of these rules for these new technologies while maintaining customer protections.