On 21 April 2021, the European Commission unveiled a proposal for an EU Artificial Intelligence Regulation (“Proposal”). The Proposal recognizes that AI offers significant benefits and opportunities for the EU market, but also reveals the concern European regulators have long held that unregulated AI could adversely affect European fundamental rights. Influenced by the core EU pillar of promoting technology that works for the people, the Commission suggests the Proposal offers a human-centric, horizontal regulatory framework intended to encourage the development and uptake of AI that is secure, trustworthy and ethical, and that protects Europeans’ fundamental rights.
With its extra-territorial effect and significant threshold fines, the Proposal will have important and far-reaching implications for businesses that provide or use AI, especially for use cases that the Commission classifies as high-risk. Although the Proposal will take time to progress through the EU legislative process and some changes can be expected, all AI developers and other businesses using AI systems or AI output in the EU should start preparing to address these new requirements. At the same time, other jurisdictions, including the U.S., are also considering proposals that would regulate various aspects of AI or are extending the application of existing legal concepts to AI. As such, it remains imperative for businesses operating in this space to remain mindful of the rapidly evolving legislative landscape.
In this client alert, which is the first in our series on the Proposal, we provide an overview of the strategic imperatives and key aspects of the framework. In future installments, we will analyze specific details of the Proposal, its impact on different sectors, and track changes as it progresses into law.
The Proposal is the culmination of industry analysis and consultation by the Commission over a two-year period, including a White Paper on Artificial Intelligence published in February 2020. The Commission highlights the need for harmonized legislation to promote the development and adoption of AI technology across the EU, while addressing the risks associated with certain uses of the technology, such as the lack of transparency and the risk of bias and discrimination.
The Proposal complements existing EU legislation governing certain aspects of AI (such as product liability laws, the General Data Protection Regulation (GDPR) and consumer protection laws) with a set of harmonized rules, issued in the form of a “Regulation,” directly applicable in EU member states. The Commission simultaneously proposed a new Machinery Regulation, designed to ensure the safe integration of AI systems into machinery.
The Proposal is centered on a risk-based regulatory approach that is tailored to the degree of risk associated with a particular use of AI. It also aims to be future-proof in its fundamental regulatory choices, including by setting forth principle-based requirements that AI systems should follow. Rather predictably, the Proposal has been criticized by civil liberties and consumer rights groups for providing too many broad exceptions to problematic AI, whilst also attracting the ire of technology companies for creating regulatory red tape.
The Big Points You Need To Know
What is an AI system? An AI system is defined in the Proposal as software that (i) is developed with one or more listed techniques and approaches (including machine-learning and deep learning approaches, logic and knowledge-based approaches and statistical approaches), and (ii) can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments with which such software interacts. Here the Commission seeks to balance legal certainty with the need for flexibility to ensure the Proposal is future-proof. As such the listed techniques and approaches will be updated to reflect market and technological developments.
Who is subject to the Proposal? The Proposal applies to “providers” and “users” of AI systems, with supporting obligations placed on importers and distributors throughout the AI chain. A “provider” is a natural or legal person (including public authorities) that develops (or has developed) an AI system with a view to putting it on the market or into service under the provider’s own brand, whether or not for any financial remuneration. The focus on the provider’s intention to put an AI system on the market or into service is designed to ensure the Proposal is taken into consideration during the development stage. A “user” is a lawful user of an AI system, other than for “personal, non-professional activity,” so business customers using AI systems would typically be “users,” whereas consumers using AI for personal reasons would not.
Territorial scope. The Proposal continues the EU’s propensity for laws with extra-territorial application to ensure a level playing field and effective protection for EU citizens. The Proposal applies to (i) providers of AI systems (in and outside the EU) who place AI systems on the EU market, or put them into service in the EU, (ii) users of AI systems located within the EU, and (iii) providers and users of AI systems that are located outside the EU, where the AI system’s output is used in the EU. The latter is particularly notable as it can apply where the AI system is neither available nor used in the EU.
Risk-based approach. At the heart of the Proposal is a risk-based approach, specifying four levels of risk: unacceptable, high, limited, and minimal. The majority of the Proposal is dedicated to AI systems identified as posing a “high risk” to the health and safety or fundamental rights of individuals. The Commission anticipates that the majority of AI systems will fall into the limited or minimal risk categories. The Proposal creates two types of high-risk AI systems:
(i) AI systems to be used as a product or safety component of a product that is covered by certain listed legislation. These products are captured by existing EU legislation that requires a third-party conformity assessment to be carried out (such as medical devices, machinery, and children’s toys). So, to avoid duplication and regulatory burden, the AI systems will be checked for compliance with the Proposal as part of the broader product conformity assessment; and
(ii) Standalone AI systems in specified areas listed in the Proposal, which includes: (i) remote biometric identification of individuals, (ii) educational or vocational training (e.g., scoring of exams), (iii) recruitment, promotion and termination of employees, (iv) essential private and public services, such as evaluating creditworthiness; and (v) critical infrastructures, such as transport. The Commission will have the power to expand this list as AI technologies evolve.
Conformity assessment. Grounded in existing EU product safety legislation, the Proposal requires that a high-risk AI system meets certain principle-based requirements. These requirements include the quality of data sets used; technical documentation and record keeping; transparency and the provision of information to users; human oversight; and robustness, accuracy and cybersecurity. Before placing a high-risk AI system on the EU market or otherwise putting it into service, providers must subject it to a conformity assessment that will require them to demonstrate that their system complies with the mandatory requirements for trustworthy AI. AI systems will have to bear a ‘CE marking’ to certify compliance.
Quality management system, record keeping and traceability. The Proposal introduces a requirement for providers of high-risk AI systems to implement a quality management system that is documented through policies, procedures, and instructions. When combined with the requirements for technical documentation and extensive record-keeping, including maintaining logs generated by the AI system so that its functioning can be traced and monitored, the Proposal appears to impose significant administrative burdens on these providers. In case of a breach, national authorities will have access to the information needed to investigate whether the use of the AI system complied with the law. Strikingly, and undoubtedly a point that may alarm AI system developers, national authorities will have full access not only to training, validation and testing datasets, but also to source code where necessary to assess the conformity of the high-risk AI system with the Proposal’s requirements.
Transparency. Transparency and accountability are key themes for principle-based EU legislation, and that continues here. Certain information must be disclosed to individuals where an AI system interacts with humans, is used to detect emotions or determine association with (social) categories based on biometric data, or generates or manipulates content (‘deep fakes’).
Incident reporting. Providers of high-risk AI systems must report certain ‘serious’ incidents, and any malfunctioning of the AI system which constitutes a breach of obligations under EU law intended to protect fundamental rights, to the competent authority immediately and no later than 15 days after becoming aware of the incident. As currently drafted, this reporting obligation leaves significant uncertainty, but the Commission is required to develop further guidance.
Registration. The Commission intends to create a public EU database for stand-alone high-risk AI systems. The intention of the database is to allow authorities, users and other interested parties to verify whether a high-risk AI system complies with the Proposal’s requirements. AI providers are obliged to register their AI system before it is provided to the EU market and to provide “meaningful information” about their AI system once the conformity assessment has been carried out.
Importers, distributors, and users. Certain requirements in the Proposal also apply to importers, distributors, and users of AI systems. For example, distributors and importers have various verification obligations before making a high-risk AI system available on the market. Users also have direct obligations, including to use an AI system in accordance with instructions and maintain log records (similar to providers). Importers, distributors, and users can be treated as a provider in certain circumstances (e.g., if they license an AI system under their own brand or modify an AI system) and they will assume the relevant requirements as a result.
Authorized representative. Any provider that provides an AI system directly to EU customers (without an importer) will need to appoint an EU representative. For non-EU businesses subject to the GDPR, the EU representative concept will be familiar, and we may see data protection EU representatives expand their remit.
Other AI systems. Unacceptable-risk AI systems are prohibited (those specified systems that pose an unacceptable risk to fundamental rights — e.g., those that manipulate human behavior or exploit vulnerabilities of special groups of people). Limited-risk AI systems, such as chatbots and the use of emotion recognition systems or biometric categorization systems, are subject to limited transparency obligations, in particular informing users that AI is used unless this is obvious from the circumstances and the context of use. Minimal-risk AI systems, such as AI-enabled video games and spam filters, are not regulated. Providers of AI systems that are not high-risk may choose to adhere to voluntary codes of conduct for trustworthy AI.
Oversight and enforcement. The Proposal creates a new European Artificial Intelligence Board composed of representatives from the Commission and EU Member States, whose role will be to facilitate the effective and harmonized implementation of the Proposal, including driving the development of standards for AI. EU Member States will be required to appoint one or more national authorities responsible for supervising compliance with the Proposal.
Fines. Penalties are delegated to EU member states, but the Proposal specifies mandatory threshold administrative fines for certain violations. Most breaches are subject to an upper limit of €20 million or 4% of global annual turnover/revenue (whichever is greater). Fines of up to €30 million or 6% of global annual turnover/revenue (whichever is greater) apply to breaches of data governance and management practices and breaches of the prohibition on unacceptable-risk AI systems.
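To illustrate how the “whichever is greater” thresholds operate, the following sketch computes the upper fine limit under the 2021 draft. The figures used are hypothetical and the function name is our own; this is an arithmetic illustration, not legal advice.

```python
def max_fine_eur(global_annual_turnover_eur: float, severe: bool = False) -> float:
    """Upper limit of an administrative fine under the 2021 draft Proposal.

    severe=True covers breaches of data governance requirements or of the
    prohibition on unacceptable-risk AI systems (EUR 30m / 6%); otherwise
    the general cap applies (EUR 20m / 4%). The cap is the greater of the
    fixed amount and the percentage of global annual turnover.
    """
    fixed_cap, pct = (30_000_000, 0.06) if severe else (20_000_000, 0.04)
    return max(fixed_cap, pct * global_annual_turnover_eur)

# Hypothetical company with EUR 1bn global annual turnover:
print(max_fine_eur(1_000_000_000))               # 4% (EUR 40m) exceeds EUR 20m
print(max_fine_eur(1_000_000_000, severe=True))  # 6% (EUR 60m) exceeds EUR 30m
# Smaller company with EUR 100m turnover: the fixed cap dominates.
print(max_fine_eur(100_000_000))                 # EUR 20m
```

For smaller businesses the fixed amount is the binding cap; for large multinationals the percentage of turnover quickly becomes the operative figure.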
The Commission anticipates that this initiative may help shape global norms and standards in a manner that is consistent with EU values and interests, in much the same way as the EU has driven the global data protection agenda with the GDPR. U.S. technology giants are already voicing concern with the Proposal, perhaps mindful that, once implemented, it will not only impact their business in the EU, but will inevitably influence policy and law in the U.S. and elsewhere. The EU considers the Proposal to reflect a de minimis approach to regulation that neither unduly constrains technological development nor disproportionately increases the cost of developing and commercializing AI solutions. At the same time, efforts to regulate AI are also underway in the U.S. At the federal level, between mid-2019 and the end of 2020, more than 30 bills to regulate AI were introduced. At the state level, in 2021 bills or resolutions on AI were introduced in at least 16 states. In addition, several states have enacted biometric privacy laws, and other states, and even some cities, have introduced AI-related bills that address biometrics, facial identification, and similar issues. It seems likely that the introduction of the Proposal will lead to new focus on these efforts in the U.S. and other jurisdictions.
Before the Proposal becomes law, it must be approved by both the European Parliament and the Council, and this process is usually lengthy and involved. Some of the language in the Proposal is vague and open to interpretation, and several requirements will require further clarity. However, the legislative process is an important opportunity for impacted businesses to educate the EU institutions and encourage an outcome that is aligned with the digital ecosystem “on the ground.”
A document from the Commission suggests that the Proposal could enter into force in late 2022, followed by a transitional period. During the transitional period, the requirements would be developed and implemented, with the Proposal becoming applicable during the second half of 2024, at the earliest. However, this suggested timeline may prove to be overly optimistic. We expect approval to take at least a couple of years, although the Commission has indicated that it hopes the other EU institutions will engage immediately in discussions. Once adopted, there will be a period of time for businesses to adapt their practices to comply with the new provisions — the Proposal indicates 24 months, although this may change during the legislative process.