Alert
May 23, 2023

EU AI Act: Foundation Models In Scope and Almost at the Finish Line

Background

On 11 May 2023, the European Union’s landmark AI Act took a giant leap forward as the EU Parliament's Internal Market Committee and Civil Liberties Committee (EU Parliament) approved their compromise text. The draft AI Act, which brings AI systems within the EU’s product safety framework, was first introduced in April 2021, headlining as the first comprehensive regulatory scheme for artificial intelligence (read our summary here). It has been undergoing revisions and significant debate since then.

The compromise text approved by the EU Parliament departs significantly from the draft adopted by the Council of the European Union at the end of 2022 in a number of key areas, with concessions made on several sensitive and politicised topics. The changes are intended to guarantee the protection of EU fundamental rights, health and safety, the environment, democracy, and the rule of law. We have listed the changes we found most interesting.

1. AI Act Expanded to Capture Foundation Models

Since the Council of the European Union presented its draft at the end of 2022, ChatGPT has catapulted generative AI into the mainstream, igniting excitement over the enormous opportunities on offer and creating significant challenges for lawmakers. In response, EU lawmakers have rapidly recalibrated the original “risk-based” philosophy of the European Commission (the Commission), which sought to avoid regulating AI technology per se.

In a significant new revision, the draft now targets foundation models that are trained on broad datasets at scale and can be reused in countless downstream AI or general-purpose AI systems. The move to bring foundation models, however distributed and whether stand-alone or integrated, within the scope of the AI Act significantly expands the reach of the legislation.

Providers of foundation models used as generative AI are tasked with informing individuals that they are interacting with an AI system, and must also design, develop, and train their foundation models with adequate safeguards against breaching EU law. The compromise text acknowledges the significant risks generative AI poses to copyright and requires generative AI providers to publish a summary of their use of training data protected under copyright law, a potential gift to rights holders concerned that generative AI trains on their proprietary materials and data.

See our content relating to AI and machine learning here on our microsite.

2. New AI Definition

The EU Parliament overhauled the definition of artificial intelligence to align it more closely with the definition developed by the OECD, reflecting the view that the concept of ‘AI systems’ should track the work of international organisations working on AI in order to ensure legal certainty, harmonisation, and wide acceptance. The new proposed definition reads: “a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions that influence physical or virtual environments.” The definition focuses on machine learning rather than automated decision-making more broadly, a choice that may not survive the next stage in the process. All businesses should note that an AI system integrated into a larger system will create a single AI system subject to the AI Act if the larger system cannot function without the AI component.

3. New Harm Assessment for High-Risk AI

The substantive obligations under the AI Act apply to high-risk AI systems, defined to capture (a) AI systems required to undergo an EU conformity assessment process, or systems that are safety components of a product required to undergo such an assessment; and (b) AI systems that fall into one of the specified high-risk use cases. These definitions have been criticised on the basis that they do not consider those actually affected by the technology, and could result in AI systems being classified as high risk even when they pose limited, if any, risk to fundamental rights. To address this, the compromise text introduces a welcome horizontal layer that references harm to people’s health and safety or fundamental rights and, where the AI system is used as a safety component of critical infrastructure, to the environment. This introduces a more robust hurdle for classification, as sketched below, and should ensure the AI Act focuses on its stated aims.
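To make the layered test easier to follow, the sketch below restates it as a simple boolean check. This is purely illustrative shorthand based on our reading of the compromise text; the field names and the structure of the test are our own simplification, not statutory language.

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    """Hypothetical shorthand for the facts relevant to classification."""
    needs_eu_conformity_assessment: bool  # limb (a): product or safety component
    in_listed_high_risk_use_case: bool    # limb (b): listed high-risk use case
    risks_health_safety_or_rights: bool   # new horizontal harm layer
    is_critical_infrastructure_safety_component: bool
    risks_environment: bool

def is_high_risk(system: AISystem) -> bool:
    # Limbs (a)/(b): the system must first fall within one of the two
    # definitional categories described above.
    if not (system.needs_eu_conformity_assessment
            or system.in_listed_high_risk_use_case):
        return False
    # Horizontal layer: classification additionally requires harm to health
    # and safety or fundamental rights; environmental harm counts only where
    # the system is a safety component of critical infrastructure.
    return system.risks_health_safety_or_rights or (
        system.is_critical_infrastructure_safety_component
        and system.risks_environment
    )
```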

The list of high-risk use cases has also been expanded to include AI systems that influence voters in political campaigns and, in a move that adds to the raft of EU legislation recently thrown at very large online platforms (those with more than 45 million users), AI systems used in the recommender systems of those platforms.

4. Expansion of Prohibited AI Practices

The compromise text reflects several additions to the list of prohibited AI practices that focus on intrusive and discriminatory uses of AI systems. Indiscriminate facial recognition proved to be a hot-button issue for the EU Parliament, and the compromise text now notably features a blanket ban on remote biometric identification in publicly accessible spaces, whether in real time or after the fact (with a narrow exception for law enforcement). Other new prohibited practices include biometric categorisation systems using sensitive characteristics; predictive policing AI systems; emotion recognition systems in law enforcement, border management, the workplace, and educational institutions; and the indiscriminate scraping of biometric data from social media or CCTV footage to create facial recognition databases.

5. Introducing Ethics

The EU Parliament has embedded in this compromise draft certain ethical principles entrenched in the Commission’s 2022 declaration on digital rights, which was made to ensure that people are placed at the centre of the digital transformation and that their freedoms, security, safety, and participation remain assured. All persons within the scope of the AI Act are required to use best efforts to ensure their AI systems adhere to certain ethical principles that promote a human-centric European approach to ethical and trustworthy AI. These principles include the protection of fundamental rights, human agency and oversight, technical robustness and safety, privacy and data governance, transparency, nondiscrimination and fairness, and societal and environmental well-being.

The principles-based approach to AI regulation reflected in these requirements is more closely aligned with the approaches the United Kingdom and United States are moving towards. Read our articles on the US approach to AI regulation here and the UK approach to AI regulation here.

6. New Contractual Controls Protecting SMEs

Good news for SMEs and start-ups. The compromise text proposes to deem unenforceable any unfair contract terms that have been unilaterally imposed on an SME or start-up (for example, where there is no opportunity to negotiate the terms) and that govern the supply of tools, services, components, or processes used or integrated in a high-risk AI system, or the remedies for breach or termination of related obligations.

7. Penalties

The highest level of penalties, levied for breaches of the prohibited AI practices, has increased markedly, from EUR 30 million or 6% of global annual turnover to up to the higher of EUR 40 million or 7% of global annual turnover. Failure to meet data governance and high-risk AI transparency obligations attracts penalties of up to the higher of EUR 20 million or 4% of global turnover. Most other obligations are subject to penalties of up to the higher of EUR 10 million or 2% of global turnover.
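Because each tier is capped at the higher of a fixed sum or a percentage of turnover, the binding ceiling depends on a company’s size. A minimal sketch of the arithmetic (the tier labels are our own shorthand, not terms from the draft):

```python
def max_penalty_eur(tier: str, global_annual_turnover_eur: float) -> float:
    """Illustrative penalty ceilings under the compromise text: each tier is
    capped at the higher of a fixed amount or a share of global annual
    turnover. Tier labels are our own shorthand, not statutory terms."""
    tiers = {
        "prohibited_practices": (40_000_000, 0.07),               # EUR 40M or 7%
        "data_governance_and_transparency": (20_000_000, 0.04),   # EUR 20M or 4%
        "other_obligations": (10_000_000, 0.02),                  # EUR 10M or 2%
    }
    fixed_cap, turnover_share = tiers[tier]
    return max(fixed_cap, turnover_share * global_annual_turnover_eur)

# For a provider with EUR 1 billion in global annual turnover, the ceiling for
# a prohibited-practice breach is EUR 70 million (7% exceeds EUR 40 million).
print(max_penalty_eur("prohibited_practices", 1_000_000_000))  # 70000000.0
```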

In a move that will dismay those with the bargaining power to shift risk along the supply chain, the draft prohibits parties from agreeing to contractual arrangements that seek to reallocate liability under the AI Act for penalties, associated litigation costs, and indemnity claims.

8. Right to Complain and Receive Explanation of High-Risk AI Decisions

Individuals and legal persons will be entitled to report breaches of the AI Act to a supervisory authority and will have a right to a judicial remedy against any legally binding decision of a supervisory authority that concerns them, or where the supervisory authority fails to handle their complaint or to inform them of its progress or outcome.

Individuals affected by a decision made by a high-risk AI system that has a notable effect on their rights are entitled to an explanation of the role the AI system played in the decision-making procedure, the main parameters of the decision taken, and the related input data.

However, these rights are only part of the puzzle. The draft AI Liability Directive and revisions to the existing EU Product Liability Directive, which are tracking through the legislative process separately, will bring AI systems regulated by the AI Act within the EU’s product liability regime, giving redress to consumers who suffer certain types of harm caused by AI systems.

Next Steps

The compromise text of the AI Act is due to be voted on by the full European Parliament in June. The European Parliament, Commission, and Council will then commence negotiations to progress the new law to its final form, a process known as the trilogue. If everything goes according to plan, the AI Act may be formally adopted by the start of 2024.

The AI Act is a hugely complex piece of legislation that will prove highly challenging to implement. All businesses that provide, deploy, operate, or distribute AI systems that touch the EU market should start considering a risk assessment to determine whether they will be subject to the AI Act. Remember that the AI Act has extraterritorial effect: it applies to providers that place AI systems on the EU market irrespective of whether they are established in the EU, and to providers and deployers of AI systems established outside the EU if the output produced by their AI systems is used in the EU.
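As a first-pass screen, the territorial test described above can be reduced to a couple of questions. The sketch below is an illustrative simplification of that paragraph only, not legal advice; the parameter names are our own shorthand.

```python
def ai_act_may_apply(is_provider: bool, is_deployer: bool,
                     placed_on_eu_market: bool, output_used_in_eu: bool) -> bool:
    """Illustrative first-pass screen for the extraterritorial scope described
    above; parameter names are our own shorthand, not statutory terms."""
    # Providers are caught when they place AI systems on the EU market,
    # wherever they are established.
    if is_provider and placed_on_eu_market:
        return True
    # Providers and deployers established outside the EU are caught when the
    # output their AI systems produce is used in the EU.
    if (is_provider or is_deployer) and output_used_in_eu:
        return True
    return False

# A non-EU deployer whose system's output is used in the EU is in scope.
print(ai_act_may_apply(is_provider=False, is_deployer=True,
                       placed_on_eu_market=False, output_used_in_eu=True))  # True
```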


This alert was published with assistance from Harriet Worthington.