A European Regulation on Artificial Intelligence

The European Commission has unveiled its long-awaited proposal for a regulation on Artificial Intelligence. What is in this first-of-its-kind piece of legislation?

For the last two years, the European Commission (EC) has worked toward a European regulatory framework for Artificial Intelligence (AI). The EC’s objective is to frame and supervise AI systems and to prevent the use of AI applications that violate fundamental rights. Researchers, policy makers, industry representatives and human rights experts have therefore debated the potential content of the regulation extensively in recent months.

On 21 April 2021, the EC finally unveiled its Proposal for a Regulation on a European Approach for Artificial Intelligence, as a follow-up to last year’s White Paper on AI (see SwissCore article). The proposal is part of a broader legislative package that will cover several aspects of AI systems. The EC will update the 2018 Coordinated Plan on AI and continue collaborating with Member States on its implementation. It will also present a revision of the Machinery Directive later this year, as well as future initiatives addressing liability issues related to new technologies, including AI systems. The main concepts of the legislative proposal are outlined below.

Firstly, the regulation provides definitions of AI and its risks. It aims to define AI in as technologically neutral a manner as possible. An AI system is thus defined as software developed with one of the technologies listed in Annex I of the regulation that can generate content, predictions, recommendations or decisions influencing the environments it interacts with. The regulation distinguishes four levels of risk for AI systems: (i.) unacceptable risk, for systems that may violate EU fundamental rights, (ii.) high risk, for systems that may affect people’s safety, (iii.) limited risk and (iv.) minimal risk. The regulation will apply to any AI system that affects a person located in the EU, even if the provider is based in a third country. It may therefore have a broad impact, similar to that of the General Data Protection Regulation (GDPR). The regulation also proposes to establish a European Artificial Intelligence Board, which will supervise the application of the regulation and provide recommendations on the lists of prohibited and high-risk AI systems.

The most important part of the legislation is the prohibition of some AI systems and the classification of others as high-risk. AI systems that are considered to violate fundamental rights are strictly prohibited. These include systems that manipulate behaviour or exploit people’s vulnerabilities, as well as social scoring and ‘real-time’ remote biometric identification. Real-time biometric identification may only be used for law enforcement in very specific situations, such as the search for potential victims of crime or the prevention of terrorist attacks. For high-risk AI systems, the risk of an application is defined according to its intended purpose, so that the classification is not tied to a specific technology and remains as broad as possible. High-risk applications include AI systems used as safety components and systems that may significantly impact a person’s life, such as systems used to prioritise the dispatching of emergency first response services. A full list has been established and can be found in Annex III of the proposed regulation; the EC, following a defined procedure involving the European Artificial Intelligence Board, can update this list. For high-risk systems, the regulation details a set of requirements on the quality of data sets, documentation and record keeping, transparency and provision of information to users, human oversight, robustness and accuracy. Further obligations are defined for all actors in the value chain, and these systems will have to undergo conformity assessments. In parallel, the EC will set up and maintain an EU database of high-risk AI applications that have been placed on the market.

An obligation of transparency will also apply to limited-risk AI systems. This includes, for example, notifying persons that they are interacting with an AI system. Minimal-risk AI systems will not be subject to this obligation, but their providers may adopt voluntary codes of conduct or adhere to industry associations’ codes of conduct to ensure that their applications are trustworthy. To foster innovation, researchers and innovators shall be able to experiment under relaxed regulation in controlled environments and under supervision. The European Digital Innovation Hubs (eDIHs) and the Testing and Experimentation Facilities (TEFs) established within the Digital Europe Programme (DEP) may play a role in this process. A set of measures also aims at reducing the regulatory burden for SMEs and start-ups. Enforcement of the regulation is delegated to the Member States, and non-compliance can lead to fines of up to €30 million or up to 6% of the infringing company’s total worldwide annual turnover for the preceding financial year, whichever is higher.
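As a rough illustration of how the penalty cap combines the two thresholds, the short sketch below computes the maximum applicable fine under the “whichever is higher” rule. The €30 million figure and the 6% rate come from the proposal; the function name and the turnover figures are purely hypothetical examples.

```python
def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    """Upper bound of the fine under the proposal: the higher of a fixed
    EUR 30 million cap or 6% of the company's total worldwide annual
    turnover for the preceding financial year."""
    FIXED_CAP = 30_000_000       # EUR 30 million, from the proposal
    TURNOVER_RATE = 0.06         # 6% of worldwide annual turnover
    return max(FIXED_CAP, TURNOVER_RATE * worldwide_annual_turnover_eur)

# Hypothetical examples: a smaller firm is bound by the fixed cap,
# a large firm by the 6% turnover rule.
print(max_fine_eur(100_000_000))    # 30,000,000 -> fixed cap applies
print(max_fine_eur(2_000_000_000))  # 120,000,000 -> 6% of turnover applies
```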

Even though the regulation is still at the proposal stage, it represents an important step for AI technologies. The fact that the EC wants to strictly ban certain AI practices sends a strong message, especially given the wide deployment of mass surveillance and social scoring technologies in countries like China. However, work remains to be done before the regulation is formally adopted and enters into force. The European Parliament (EP) and the Council will each discuss the text, and the standard legislative procedure will unfold. The EP is expected to push for stricter restrictions and fewer exceptions, especially regarding facial recognition, biometric surveillance and predictive policing. On top of that, some Members of the EP have already expressed concerns that some AI systems will only be subject to self-assessment. On the Council side, the proposal is likely to divide Member States, which have already expressed very different opinions on AI regulation. Despite this, the regulation proposal remains the first of its kind worldwide and will most likely have a broad impact, paving the way for international collaboration on AI regulation.