Yesterday the European Commission (“Commission”) published a landmark “Proposal for a Regulation on a European Approach for Artificial Intelligence” (the “Proposal”), establishing a legal framework for so-called “high risk” applications of Artificial Intelligence (“AI”). The Proposal has been described as the “GDPR for AI” and will require businesses using or selling AI-enabled products or services in the European market to make significant changes.
The draft proposes limits on some uses of AI and an outright ban on others. Limits would apply to the use of AI in a range of activities, including self-driving cars, hiring decisions, bank lending, school enrollment selections, exam scoring, law enforcement and matters relating to the court system. These areas are all considered “high risk” because inappropriate application of the technology could threaten people’s safety or fundamental rights.
The Proposal would ban live facial recognition in public spaces, though there would be several exemptions for national security and other purposes.
The Proposal follows a public consultation on the Commission’s white paper on AI published in February 2020. In parallel with the Proposal, the Commission released a new Machinery Regulation, which is aimed at ensuring the safe integration of AI systems into machinery.
The publication comes at an important time for Canadians as recent Findings [1] by the Office of the Privacy Commissioner of Canada indicate a willingness to extend similar oversight into matters relating to artificial intelligence. Whether the Proposal will serve as a precedent for AI regulation in Canada remains to be seen, but provisions appear to be aligned with Canada’s approach thus far.
The Proposal follows a risk-based approach, differentiating between uses of AI that create (i) an unacceptable risk, (ii) a high risk, and (iii) low or minimal risk.
Many of the concepts within the Proposal will likely shape future regulatory treatment of AI in Canada. The following summarizes key takeaways from the Proposal accordingly.
Scope
In an effort to “future proof” the legal framework, the Proposal focuses on the “concrete utilization of the technology” in the form of AI systems, rather than the technology itself. Accordingly, the Proposal takes a “balanced and proportionate” approach to AI that is “limited to the minimum necessary requirements to address the risks and problems linked to AI, without unduly constraining or hindering technological development or otherwise disproportionately increasing the cost of placing AI solutions on the market”.
The Proposal sets a robust and flexible legal framework. On the one hand, it is comprehensive and future-proof in its fundamental regulatory choices, including the principle-based requirements with which AI systems should comply. On the other hand, it puts in place a proportionate regulatory system centered on a well-defined risk-based regulatory approach, in which legal intervention is tailored to those concrete situations where it is justified.
The Proposal sets harmonized rules for the development, placement on the market and use of AI systems in the Union following a proportionate risk-based approach. It proposes a single future-proof definition of AI. Certain particularly harmful AI practices are prohibited as contravening Union values, while specific restrictions and safeguards are proposed in relation to certain uses of remote biometric identification systems for the purpose of law enforcement.
The proposed rules will be enforced through a governance system at Member States level, building on already existing structures, and a cooperation mechanism at Union level with the establishment of a European Artificial Intelligence Board.
Prohibited Artificial Intelligence Practices
Generally, the Proposal prohibits AI practices that have “a significant potential to manipulate persons through subliminal techniques beyond their consciousness” or “exploit vulnerabilities of specific vulnerable groups” in a manner likely to cause them harm. The proposal also prohibits AI-based “social scoring” for general purposes done by public authorities. Finally, the use of ‘real time’ remote biometric identification systems in publicly accessible spaces (essentially live facial recognition) for the purpose of law enforcement is also prohibited unless certain limited exceptions apply.
Also prohibited is the use of choice architecture or user interfaces to “cause a person to behave, form an opinion or make a decision to their detriment”. With the exception of social scoring (which is never acceptable), the Proposal states that prohibitions can be lifted when a practice is authorized by law in order to safeguard public security.
High Risk AI Systems
The Proposal establishes mandatory compliance requirements for AI systems that are “high-risk”. Although the Proposal does not define “high-risk”, it explains that the classification of an AI system as high-risk should be based on its “intended purpose”, which refers to the use for which an AI system is intended, including the specific context and conditions of use. The Proposal states that this can be determined in two steps, by considering:
- Whether it [the AI system] may cause certain harms; and
- The severity of the possible harm and the probability of occurrence.
Harms that may be caused by high-risk AI systems include, but are not limited to: “injury or death, damage of property, systemic adverse impacts for society at large, significant disruptions to the provision of essential services for the ordinary conduct of critical economic and societal activities, adverse impact on financial, educational or professional opportunities of persons, adverse impact on the access to public services and any form of public assistance, and adverse impact on fundamental rights”.[2]
The severity of harm and probability of occurrence will be determined on the basis of set criteria. The criteria include “base-line criteria”, which must always be taken into account, and “additional criteria”, which are taken into account as appropriate.
The Proposal also contains a list of prescribed high-risk AI systems, and provisions stipulating when AI systems covered by other EU legislation will be considered high-risk. The list, found at Annex II of the Proposal, includes:
- recruitment systems;
- systems that provide access to educational or vocational training institutions;
- emergency service dispatch systems;
- creditworthiness assessment;
- systems involved in determining taxpayer-funded benefits allocation;
- decision-making systems applied around the prevention, detection and prosecution of crime; and
- decision-making systems used to assist judges.
So long as compliance requirements are met, such high-risk systems are not barred from the EU market under the current legislative framework.
Requirements for High Risk AI Systems
Parties intending to apply high-risk AI must ensure compliance with a series of requirements before, during, and after the AI system is brought to market. The requirements relate to the quality of data sets (to protect against bias), documentation and record keeping (to verify outputs), transparency (to facilitate understanding), human oversight (to minimize potential risks), and the robustness, accuracy and security of the AI system.
For AI systems intended to interact with natural persons, there is also a requirement, in certain circumstances, to notify individuals that they are interacting with an AI system and to label AI outputs so that their artificial origin can be discerned.
Assessments Required for High Risk AI Systems Prior to EU Market Entry
In order to ensure a high level of trustworthiness for high-risk AI systems, those systems will be subject to a conformity assessment prior to their placing on the market or putting into service, and also on an ongoing basis in circumstances where the AI system changes[3]. The conformity assessment is designed to ensure that obligations under the Proposal are met.
The “provider of an AI system”[4] is primarily responsible for carrying out the conformity assessment through self-assessment or in accordance with relevant EU legislation. However, certain high-risk AI systems require designated “notified bodies”, appointed by member states, to conduct conformity assessments and issue a certificate approving the AI system for market. Before a high-risk AI system is placed on the market or put into service, the provider must register the system in an EU database[5] established by the Proposal. Once all requirements are met, a provider will be able to affix a “CE marking of conformity” on their AI system to signal compliance, allowing the AI system to move freely across Member States.
The Proposal acknowledges the complexity of the artificial intelligence value chain and establishes certain obligations for relevant third parties as well. Notably, this includes parties involved in sale and supply of software, software tools and components, and pre-trained models and data. In an effort to discern liability, the Proposal ultimately requires a “unique and identifiable economic operator”[6] to hold legal responsibility for the finished product; this actor is typically the provider, but in circumstances where a high-risk AI system is not placed on the market or put into service independently, the manufacturer may be responsible.
Remote Biometric Identification
The Proposal states that a “remote biometric identification system” refers to “an automated system for the purpose of the identification of persons at a distance on the basis of their biometric data. A person is identified when the template of their biometric data is matched with a template already stored in a reference database”.[7]
Considering the sensitive nature of biometric data, the Proposal makes such data subject to “special protection”. Accordingly, the Proposal prohibits the processing of biometric data unless a limited number of conditions apply.
If the requisite condition applies, not only is a conformity assessment required, but a Data Protection Impact Assessment[8] must be provided, and the use of remote biometric identification in publicly accessible spaces will be subject to an authorization procedure that addresses the specific risks implied by the use of the technology.
Governance – The European Artificial Intelligence Board
The Proposal establishes a European Artificial Intelligence Board composed of the national supervisory authorities, who shall be represented by the head or equivalent high-level official of that authority, and the European Data Protection Supervisor. The Board’s purpose is to facilitate the effective and harmonized implementation of the Proposal. It is also responsible for providing recommendations and opinions to the Commission regarding the list of prohibited AI practices, high-risk systems, and associated amendments. National supervisory authorities will be required to report to the Board on the outcomes of market surveillance activities in order to support the Board in fulfilling its responsibilities.
Enforcement
While providers of high-risk AI systems are responsible for compliance with all stipulations under the Proposal, enforcement is in the hands of Member States.
The maximum penalty for infringement is an administrative fine up to 30 million EUR, or 6% of the total worldwide annual turnover of the preceding financial year, whichever is higher.
Next steps
Nothing will happen immediately, as the Proposal could take several years to move through the European Union policymaking process. However, as with the GDPR, the debate around this Proposal (and the subsequent final regulation itself) will very likely become the benchmark for countries developing similar legal frameworks.
For more information about Dentons’ data expertise and how we can help, please see our Transformative Technologies and Data Strategy page and our unique Dentons Data suite of data solutions for every business, including enterprise privacy audits, privacy and technology program reviews and implementation, and training in respect of personal information. Subscribe and stay updated.
[1] February 2, 2021, “Joint investigation of Clearview AI, Inc. by the Office of the Privacy Commissioner of Canada, the Commission d’accès à l’information du Québec, the Information and Privacy Commissioner for British Columbia, and the Information Privacy Commissioner of Alberta”
[2] Preamble, para 41.
[3] This is particularly relevant to AI systems that ‘learn’ as they operate, and generate new functions accordingly.
[4] Article 3 Section 1(2) states that “provider of an AI system” means a natural or legal person, public authority, agency or other body who develops an AI system or has it developed and places it on the market under its own name or trademark or puts it into service under its own name or trademark or for its own use, whether for payment or free of charge
[5] Unless the system is already covered by relevant legislation that provides for registration.
[6] In accordance with EU product legislation.
[7] Article 3 Section 1(30).
[8] Article 42, Section 3(a).