Agreement reached on Artificial Intelligence Act

The provisional agreement on the EU’s AI Act now includes obligations for foundation models, while foreseeing dedicated measures to foster innovation.

The trilogue negotiations on the EU’s harmonised rules on Artificial Intelligence (AI Act) were concluded on 9 December 2023, when the European Parliament negotiators and the Council Presidency reached a provisional political agreement. As a regulation, the new rules will apply uniformly in all Member States. The compromise agreement includes a definition of AI systems, distinguishing them from simpler software systems, which aligns with the approach proposed by the Organisation for Economic Co-operation and Development (OECD).

Following a risk-based approach, most AI systems fall into the minimal-risk category and face no obligations under this regulation. Companies operating such systems may still voluntarily commit to additional codes of conduct.

AI systems considered high-risk, on the other hand, are subject to several requirements, including risk-management systems, high-quality data sets, human oversight, and transparency obligations. High-risk AI systems include, for example, those used in certain critical infrastructures such as water and electricity supply, as well as in medical devices. In education, AI systems used in admission and assessment procedures at educational institutions are considered high-risk.

AI systems posing an unacceptable risk – constituting a clear threat to people’s fundamental rights – will be banned. The list of prohibited AI systems has been extended in the provisional agreement, though it foresees exceptions for certain law-enforcement uses. Further, the AI Act now ensures better protection of fundamental rights by obliging deployers of high-risk AI systems to conduct a fundamental rights impact assessment.

What has changed substantially from the Commission’s initial proposal of the AI Act is that the provisional agreement now also includes rules on general-purpose AI systems and foundation models. The latter term refers to large artificial intelligence models trained on massive datasets that serve as the starting point for a variety of downstream tasks, such as natural language processing. General-purpose AI, on the other hand, is a type of artificial intelligence that can understand, learn, and apply knowledge across a wide range of tasks in a manner comparable to human intelligence. Among others, applications built on the most popular foundation models – such as OpenAI’s ChatGPT – and general-purpose AI systems will be subject to transparency rules under the AI Act. Moreover, general-purpose AI models that potentially pose a systemic risk due to their scale will have to comply with additional obligations, for example, monitoring serious incidents and performing model evaluations.

The final negotiations on regulating general-purpose AI and foundation models were shaped by growing calls from both industry and some Member States to limit the rules for foundation models to transparency requirements for the benefit of innovation. The tech industry’s reaction to the political agreement is rather ambivalent: it partly criticises the ‘last-minute’ attempt to regulate foundation models while also acknowledging the AI Act’s risk-based approach.

The agreement provides that the regulation would not apply to AI systems used for the sole purpose of research and innovation. Further, it includes a list of actions to support small businesses with the additional administrative burden. The AI Act also foresees regulatory sandboxes for developing and testing innovative AI systems in real-world conditions. Furthermore, the European Commission recently published a policy brief on the role and potential of AI in science and innovation. The policy brief builds on existing European research, innovation, and AI policies, most prominently the AI Act.

Work on the AI Act will now continue at the technical level in the coming weeks. The compromise text will then have to be formally adopted by the co-legislators, entering into force 20 days after publication in the EU’s Official Journal. The AI Act would then become applicable two years after it enters into force, with some specific provisions becoming applicable earlier. The European Commission plans to launch an AI Pact to bridge the period before the regulation becomes generally applicable. The AI Pact will convene AI developers who voluntarily commit to implementing key obligations of the AI Act ahead of the legal deadlines.