In January 2026, the UK Treasury Committee published a detailed report on the use of artificial intelligence in financial services. Its conclusion is blunt: AI is spreading fast across the sector, but regulators are not keeping pace with the risks it brings.
Around three-quarters of UK financial services firms now use some form of AI. Banks, insurers, and payment providers rely on it for fraud detection, credit decisions, customer support, and operational efficiency. The government sees this as a growth engine and actively promotes adoption. The problem is not ambition. The problem is governance.
The Committee’s inquiry found that regulators are taking a “wait and see” approach. There is no AI-specific financial regulation in the UK. Instead, the Financial Conduct Authority and the Bank of England rely on existing frameworks. Both believe those tools are sufficient. The Committee does not.
What AI Changes for Consumers
The report highlights four concrete risks for consumers.
First, AI-driven credit and insurance decisions are often opaque. People are declined without clear explanations. Second, automated product tailoring risks deepening financial exclusion, especially for vulnerable groups. Third, consumers increasingly receive unregulated financial guidance from AI tools and search engines, which may be misleading. Fourth, AI lowers the cost of fraud at scale, raising the volume and sophistication of scams.
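That first risk, opacity, is partly an engineering choice rather than an inherent property of AI. As a rough illustration, here is a minimal Python sketch of attaching reason codes to a declined credit application. The model, feature names, and data are entirely hypothetical; a production system would need validated features, fairness testing, and compliance review. But it shows that “the model said no” is rarely the only possible answer.

```python
# Hypothetical sketch: surfacing the main factors behind a credit decline.
# Model, features, and data are synthetic and illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

FEATURES = ["credit_utilisation", "missed_payments", "account_age_years"]

# Synthetic training data: utilisation and missed payments hurt approval
# odds, account age helps. Real features would need validation.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = (X @ np.array([-1.5, -2.0, 1.0]) + rng.normal(size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

def decline_reasons(applicant: np.ndarray, top_n: int = 2) -> list[str]:
    """Rank features by how much they pushed the score toward decline."""
    contributions = model.coef_[0] * applicant  # per-feature push on the logit
    order = np.argsort(contributions)           # most negative first
    return [FEATURES[i] for i in order[:top_n] if contributions[i] < 0]

# A hypothetical applicant: high utilisation, missed payments, new account.
applicant = np.array([2.0, 1.5, -0.5])
if model.predict(applicant.reshape(1, -1))[0] == 0:
    print("Declined. Main factors:", decline_reasons(applicant))
```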
Regulators monitor complaints, social media, and firm behaviour. They have also launched controlled testing environments, such as the FCA’s AI Live Testing service and its Supercharged Sandbox. These are constructive steps. Yet they reach only a small number of firms and remain voluntary.
Industry feedback to the Committee is consistent: firms lack practical clarity on how existing rules apply to AI. Responsibility is especially unclear under the Senior Managers and Certification Regime, where senior managers are personally accountable for harm yet AI systems are hard to explain, audit, and predict. The result is hesitation. Some firms slow down adoption. Others move forward without confidence that they are compliant.
Financial Stability Is the Quiet Risk
Beyond consumer harm, the report turns to systemic risk.
AI increases cyber exposure. It concentrates operational dependencies on a small group of US-based cloud and AI providers. It may also amplify herding behaviour in markets, where models trained on similar data respond in the same way at the same time.
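The herding point is easy to see in miniature. The toy simulation below, entirely synthetic, trains several hypothetical firms’ models on overlapping samples of the same data and then feeds them the same market shock; because they learned from shared history, they tend to react in unison.

```python
# Toy illustration of model herding: several "firms" train on overlapping
# samples of the same synthetic market history, then face the same shock.
# Everything here is made up; the point is only that shared training data
# produces correlated behaviour.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
market = rng.normal(size=(2000, 5))                       # shared feature history
signal = (market @ np.array([1.0, -0.5, 0.8, 0.0, 0.3])  # common underlying driver
          + rng.normal(scale=0.5, size=2000) > 0).astype(int)

firms = []
for _ in range(5):
    idx = rng.choice(2000, size=1500, replace=False)      # overlapping subsamples
    firms.append(LogisticRegression().fit(market[idx], signal[idx]))

shock = rng.normal(size=(1, 5)) * 3                       # one stressed market state
decisions = [f.predict(shock)[0] for f in firms]
print("Trade signals across firms:", decisions)           # typically identical
```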
The Bank of England already runs cyber and operational stress tests. It does not yet run AI-specific scenarios. Members of the Financial Policy Committee told the inquiry that such testing would be valuable. The Committee agrees and formally recommends it.
There is also a legal framework designed to manage concentration risk: the Critical Third Parties Regime. It gives regulators oversight powers over firms that provide essential infrastructure to the financial system, including cloud and AI providers. The framework exists. The rules are written. Yet, more than a year after its creation, no company has been designated under it.
This gap became tangible in October 2025, when an Amazon Web Services outage disrupted major UK banks. Parliament asked why no major provider had yet been brought into the regime. The Treasury’s answer was procedural and non-committal.
The Committee’s message is simple: the tools exist. They are not being used.
Key Takeaways for Fintech Startups
For founders and leadership teams, the report offers a few clear signals about where the environment is heading:
- AI adoption in financial services is now mainstream, not experimental.
- Regulators expect firms to manage AI risks within existing frameworks, even when guidance is unclear.
- Accountability for AI outcomes sits with senior management.
- Consumer harm from opaque or biased models is a priority concern.
- Systemic risk from shared infrastructure and automated behaviour is rising.
- More explicit rules and stress testing are likely within the next 12 to 24 months.
The direction of travel is toward tighter scrutiny, not deregulation. Startups that treat AI as a pure product feature will struggle. Those that treat it as a regulated capability will be better positioned.
If you are building or scaling a fintech product that relies on AI, now is the time to pressure-test your assumptions around explainability, accountability, and resilience. If you want help making that real, contact us. We work with founders who want to grow without creating risk they cannot control.