Proposal for a Horizontal, Risk-Based Regulation of Artificial Intelligence (AI) in the EU
Artificial intelligence (AI) is rapidly gaining a foothold. Each of us already encounters AI in numerous applications – even if we are not always aware of it – and it is credited with high innovative potential for our society. With a proposal for a regulation of the European Parliament and of the Council laying down harmonized rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts, the European Commission presented, at the end of April 2021, the first legal framework for AI worldwide and thus embarked on an ambitious project. It is intended not only to regulate an important technological development of our time, but also to support its acceptance. With this proposal, the Commission seeks to promote the potential benefits of AI for European society, strengthen trust in AI and minimize its risks.
The proposed Artificial Intelligence Act regulates not only the technology itself but also the use of AI. Accordingly, the term "AI system" is defined very broadly, so as to cover as many as possible of the self-learning, self-modifying and human-intelligence-simulating algorithms generally understood as AI.
Depending on the application and use of the AI system, the proposal defines various risk categories and lists corresponding obligations.
Prohibited AI practices (Art. 5)
According to the European Commission, an unacceptable risk exists when AI practices contravene EU values and pose a clear threat to people’s safety, livelihoods and fundamental rights. Thus, the evaluation of social behaviour by authorities (social scoring), the exploitation of vulnerabilities of children or vulnerable groups, the use of techniques that have a significant potential to manipulate persons through subliminal techniques, and most ‘real time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement are to be prohibited.
High-risk AI practices (Art. 6-51)
A significant part of the proposal refers to AI practices involving high risk. These include, for example, the use of AI systems in the management and operation of critical infrastructures, in education and vocational training (e.g. for determining access or for assessment), in safety components of products, e.g. robot-assisted surgery, in the recruitment of employees, in personnel management and decisions on access to self-employment (e.g. assessment of CVs) or in the evaluation of creditworthiness. In addition, the use of AI practices in the field of law enforcement, migration and border control, as well as in the administration of justice and in democratic processes, is considered high-risk.
AI systems for such practices therefore have to fulfil strict requirements prior to being admitted to the market. The draft regulation requires that such systems maintain a high level of robustness, accuracy and security, in particular cybersecurity (Art. 15). Moreover, appropriate risk-evaluation and risk-mitigation systems have to be ensured, as well as high quality of the data sets used, technical documentation of processes and traceability, record-keeping, transparency, the provision of information to users, and human oversight, in order to minimize the risks.
Human oversight, in particular, is an exciting topic that raises many practical and legal questions. The individuals to whom human oversight is assigned have to be able to "fully understand the capacities and limitations of the high-risk AI system and be able to duly monitor its operation" as well as to "be able to intervene on the operation of the high-risk AI system or interrupt the system through a 'stop' button or a similar procedure" (Art. 14 para. 4). In addition to conformity assessment procedures, the system also has to be registered in the relevant EU database.
Medium-risk AI practices (Art. 52)
For medium-risk AI systems, special transparency obligations are proposed. This applies, for instance, to chatbots or “deep fakes”. Here it is important to the Commission that users be made aware that they are communicating and interacting with a machine.
Low-risk AI practices (Art. 69)
AI practices without any particular risk, such as AI-supported video games or spam filters, are to be freely usable. The proposal only provides a framework for the creation of codes of conduct, the aim being to encourage providers of low-risk AI systems to implement such codes on a purely voluntary basis. However, general legal framework conditions, in particular those of the General Data Protection Regulation, of course apply also to low-risk AI practices.
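The four risk tiers described above can be summarized as a simple lookup structure. The sketch below is purely illustrative: the tier names, example lists and the `consequence_for` helper are our own shorthand paraphrasing the blog text, not legal definitions from the proposal.

```python
# Illustrative sketch of the proposal's risk-based approach.
# Tier labels and examples paraphrase the text above; they are not
# legal definitions from the draft AI Act.
RISK_TIERS = {
    "unacceptable": {
        "articles": "Art. 5",
        "consequence": "prohibited",
        "examples": ["social scoring by authorities", "subliminal manipulation"],
    },
    "high": {
        "articles": "Art. 6-51",
        "consequence": "strict requirements before market access",
        "examples": ["critical infrastructure", "recruitment", "creditworthiness"],
    },
    "medium": {
        "articles": "Art. 52",
        "consequence": "transparency obligations",
        "examples": ["chatbots", "deep fakes"],
    },
    "low": {
        "articles": "Art. 69",
        "consequence": "voluntary codes of conduct",
        "examples": ["video games", "spam filters"],
    },
}

def consequence_for(tier: str) -> str:
    """Look up the regulatory consequence attached to a risk tier."""
    return RISK_TIERS[tier]["consequence"]

print(consequence_for("medium"))  # transparency obligations
```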
In principle, this legal framework is to be applicable for both public and private actors inside and outside of the European Union; in this context, the proposal distinguishes between providers and users of AI systems. However, for actors outside of the EU, it is only to be applicable if the AI system is placed on the Union market, its use affects people in the Union, or the output produced by the system is used in the Union.
Similarly to the General Data Protection Regulation, the proposed regulation also provides for substantial fines for infringements. The use of a prohibited AI system is to be subject to fines of up to EUR 30 million or, if the offender is a company, up to 6% of its total worldwide annual turnover for the preceding financial year, whichever is higher; non-compliance with other requirements applicable to AI systems is to be subject to fines of up to EUR 20 million or up to 4% of the total worldwide annual turnover for the preceding financial year. Finally, the supply of incorrect, incomplete or misleading information to competent authorities is to be sanctioned with a fine of up to EUR 10 million or up to 2% of the total worldwide annual turnover for the preceding financial year.
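The fine ceilings above follow a simple pattern: a fixed amount or a percentage of worldwide annual turnover. A minimal sketch of that arithmetic, assuming the "whichever is higher" rule stated for the first tier applies analogously to the other two (the tier names and the `max_fine` helper are our own labels, not terms from the proposal):

```python
# Illustrative sketch of the proposed fine ceilings.
# (fixed cap in EUR, share of worldwide annual turnover)
FINE_TIERS = {
    "prohibited_practice": (30_000_000, 0.06),    # up to EUR 30m or 6%
    "other_non_compliance": (20_000_000, 0.04),   # up to EUR 20m or 4%
    "incorrect_information": (10_000_000, 0.02),  # up to EUR 10m or 2%
}

def max_fine(tier: str, worldwide_annual_turnover: float = 0.0) -> float:
    """Return the maximum possible fine in EUR for a given infringement tier.

    For companies, the ceiling is the fixed amount or the turnover-based
    percentage, whichever is higher (assumed here for all three tiers).
    """
    fixed_cap, turnover_share = FINE_TIERS[tier]
    return max(fixed_cap, turnover_share * worldwide_annual_turnover)

# A company with EUR 1 billion worldwide annual turnover using a prohibited
# AI system: 6% of turnover (EUR 60m) exceeds the EUR 30m fixed cap.
print(max_fine("prohibited_practice", 1_000_000_000))  # 60000000.0
```

For smaller companies, the fixed cap dominates: with EUR 100 million turnover, 2% is EUR 2 million, so the ceiling for supplying incorrect information remains EUR 10 million.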
It remains to be seen what the reactions to this proposal for an EU regulation will be. We can expect interesting and exciting discussions.
Please note: This blog merely provides general information and does not constitute legal advice of any kind from Binder Grösswang Rechtsanwälte GmbH. The blog cannot replace individual legal consultation. Binder Grösswang Rechtsanwälte GmbH assumes no liability whatsoever for the content and correctness of the blog.