Alert
April 25, 2024

The FCA’s AI Update: Integrating The UK Government’s 5 Principles

In our previous alert, AI and Machine Learning in UK financial services: the public response to the FCA and PRA, we discussed the response in FS 2/23 of the Prudential Regulation Authority (PRA) and the Financial Conduct Authority (FCA) to their joint discussion paper on artificial intelligence (AI) and machine learning.

On 22 April 2024, the FCA published its AI Update (Update). It follows the UK Government’s publication of its pro-innovation strategy, A pro-innovation approach to AI regulation: government response (Govt Response), in February of this year.

In the Update, the FCA welcomed the Government’s principles-based, sector-led approach to AI regulation, which will see it, along with the PRA, taking the lead on regulating the use of AI in the financial sector. The Update also affirms the trends we noted in our previous alert.

The Update coincided with a joint DSIT-HMT letter from the PRA and FCA to ministers on their strategic approach to AI and a speech by the FCA CEO, Navigating the UK’s Digital Regulation Landscape: Where are we headed?, which touched on AI regulation.

Restating The FCA’s General Role and Main Focus

The FCA restates its role as a “technology-agnostic, principles-based and outcomes-focused regulator” charged with regulating financial services providers and financial markets; it is not a regulator of technology.

The FCA’s focus is on how firms can safely and responsibly adopt AI technology, as well as on understanding what impact AI innovations are having on consumers and markets. This includes close scrutiny of the systems and processes firms have in place to ensure the FCA’s regulatory expectations are met.

Linking the Government’s Five Principles to Specific Regulation

In its 2023 consultation, AI regulation: a pro-innovation approach, the Government identified the following five principles as key to the regulation of AI in the UK. (See our alert Overview of the UK Government’s AI White Paper.) For each of these principles, affirmed in the Govt Response, the FCA outlines how its existing regulatory framework maps to the principle:

  • Safety, security, robustness. The FCA points to its Principles for Businesses, especially Principle 2 (due skill, care and diligence), and to more specific rules and guidance in its SYSC sourcebook on risk controls (SYSC 7) and general organisational requirements (SYSC 4). The FCA also highlights its work on operational resilience, outsourcing and critical third parties (CTPs); in particular, the requirements under SYSC 15A (Operational Resilience) aim to ensure relevant firms are able to respond to, recover and learn from, and prevent future operational disruptions.
  • Appropriate transparency and “explainability”. The FCA notes that its regulatory framework does not specifically address the transparency or explainability of AI systems. It points out, however, that high-level requirements and principles under its approach to consumer protection, including the Consumer Duty, may be relevant to firms using AI safely and responsibly in the delivery of financial services.
  • Fairness. The FCA responds to this principle (stated more fully as the principle that AI systems should not undermine the legal rights of individuals or organisations, discriminate unfairly against individuals or create unfair market outcomes) by pointing to its rules for consumer protection and the recently adopted Consumer Duty regime. (See our alert The UK Consumer Duty: Next Steps For Private Fund Managers on the Consumer Duty.)
  • Accountability and governance. The FCA points to a range of rules and guidance pertaining to firms’ governance and accountability arrangements, which will be relevant to firms using AI safely and responsibly as part of their business models. In particular, the FCA highlights the Senior Managers and Certification Regime (SM&CR), which emphasises senior management accountability and is relevant to the safe and responsible use of AI.
  • Contestability and redress. The FCA notes that, where a firm’s use of AI results in a breach of its rules (e.g., because an AI system produces decisions or outcomes which cause consumer harm), there is a range of mechanisms through which firms can be held accountable and through which consumers can get redress. These include recourse to the Financial Ombudsman Service.

The FCA’s Plans For The Next 12 Months

The FCA sets out seven priorities in the Update for the next 12 months.

  • Continuing to deepen its understanding of AI deployment in UK financial markets. The FCA states that it is involved in diagnostic work and will re-run, with the Bank of England (BoE), its machine learning survey. It is also collaborating with the Payment Systems Regulator (PSR) on AI across payment systems.
  • Collaboration. The FCA will continue to collaborate closely with the BoE, the PSR, and other regulators through its membership of the Digital Regulation Co-operation Forum (DRCF). This also involves close engagement with regulated firms, civil society, academia and the FCA’s international peers.
  • International cooperation. Given recent developments (such as the AI Safety Summit, the G7 Leaders’ Statement on the Hiroshima AI Process and the recent Ministerial Declaration), the FCA has further prioritised its international engagement on AI.
  • Building on existing foundations. The FCA may actively consider future adaptations to its regulatory framework, if necessary. The rapid rise of large language models (LLMs) makes regulatory regimes relating to operational resilience, outsourcing and CTPs even more central to its analysis. These regimes will have increasing relevance to firms’ safe and responsible use of AI.
  • Testing for beneficial AI. The FCA is working within the DRCF to deliver the pilot AI and digital hub. It is assessing opportunities to trial new types of regulatory engagement and exploring changes to its innovation services that could enable testing of the design, governance and impact of AI technologies in UK financial markets within an AI sandbox.
  • FCA use of AI. The FCA plans to invest further in AI technologies to proactively monitor markets, including for market surveillance purposes. It is exploring additional potential use cases, including using natural language processing to aid triage decisions, using AI to generate synthetic data, and using LLMs to analyse and summarise text.
  • Looking to the future. As part of its emerging technology research hub, the FCA takes a proactive approach to understanding emerging technologies and their potential impact. In 2024–25, the DRCF’s horizon scanning and emerging technologies workstream will conduct research on deepfakes and simulated content. Separately, the FCA has published a response to its call for input.

Continued Alignment With Emerging Trends

In our previous alert, we shared our thoughts on emerging trends in the regulation of both generative AI and AI more generally. The Update continues to reinforce these trends:

  • No new risks? It is not clear that AI necessarily creates material new risks in the context of financial services, although the rapid rate of technological change may create new risks; it remains too early to tell.
  • Amplifying existing risk. Instead, AI may amplify and accelerate existing financial sector risks (i.e., those connected with financial stability, consumer protection, and market integrity) that the financial services and markets regime is designed to reduce.
  • Accountability for regulators’ use of AI. AI will also have a role in firms’ control of financial sector risks and, indeed, in the FCA’s and PRA’s regulation of the sector (although questions may arise about the justification for AI-generated administrative decisions and their compliance with statutory and common law principles of good administration). The FCA discusses its own use of AI in the Update.
  • Sectoral rather than general regulation. In keeping with the concerns about amplifying and accelerating existing risks, it is appropriate for the FCA and PRA, as current financial sector regulators, to be charged with regulating AI. 
  • Where possible, use of existing standards. The FCA’s and PRA’s role in regulating AI reinforces the case for using and developing existing financial sector regulatory frameworks, enhancing continuity and legal certainty and making proportionate regulation more likely (although not inevitable). The FCA’s focus in the Update on how its existing framework can respond to firms’ adoption of AI reflects this: AI may be new, but firms’ regulatory obligations when using AI are already in place.
  • Governance is key. Effective governance of AI is needed to ensure that the AI is properly understood, not only by the technology experts who design it but also by the firms that use it (a “know-your-tech” (KYT) duty), and that firms can respond effectively to prevent harm materialising from any amplified and accelerated risks. The SM&CR, which the Update highlights, should accommodate a KYT duty.
  • Regulatory jurisdiction over unregulated providers of critical AI services seems inevitable. Staying with the theme of existing frameworks, the growing importance of technology providers and currently unregulated CTPs, noted above and specifically raised in the Update, has resulted in an extension of powers for the FCA and PRA under the recently enacted Financial Services and Markets Act 2023 (FSMA 2023), as noted in our recent alert Providing Critical Services to the UK Financial Sector: Important Draft Rules for Fintechs and addressed on our dedicated microsite Financial Regulations for Critical Third-Party Technology Providers in the EU and UK. A provider of AI models (including generative AI models) used by a large number of financial institutions, or by a small number of large or important financial institutions, may become subject to the jurisdiction of the FCA or PRA under the new powers that FSMA 2023 introduces. The FCA will consult on its requirements for critical services providers later in 2024.

To discuss the contents of this alert, please contact the authors or your usual Goodwin contact.

GLOBAL SURVEY OF AI VENTURES: PLEASE TAKE PART!

Saïd Business School at Oxford University is partnering with NYU, Boston University, and HEC Paris on the 6th Annual Survey of AI Startups. AI products are now present across all sectors of the economy, and startups are helping to drive this progress! However, AI-focused entrepreneurs face many issues when developing their products, such as access to training data, compliance with ever-evolving regulations, and developing fairer and less biased algorithms. If you are an AI startup founder, please spend 12 minutes responding to this survey to help us learn more about how AI startups navigate uncharted waters. You will, of course, have early access to our results. If you are an investor and/or a mentor, please encourage your founder contacts to take part. Here’s the link to the survey: https://hec.az1.qualtrics.com/jfe/form/SV_bsy881BdXeYKM3c

 

This informational piece, which may be considered advertising under the ethical rules of certain jurisdictions, is provided on the understanding that it does not constitute the rendering of legal advice or other professional advice by Goodwin or its lawyers. Prior results do not guarantee a similar outcome.