The European Artificial Intelligence Act takes a step forward

Trilogue negotiations on the AI Act have started, and it seems that Research and Innovation will get a good deal.

Back in April 2021, the European Commission (EC) published its proposal for a new Artificial Intelligence (AI) Act, building on the Impact Assessment of the Regulation on AI. The proposal seeks to enshrine a technology-neutral definition of AI systems in EU law. This should increase the trustworthiness of AI used by Europeans and make Europe a world leader in the field, as the AI Act is the first legislative project of its kind worldwide. The core of the Act is a four-level, risk-based approach that sets different obligations depending on the potential harm an AI system could cause. This horizontal legal framework is intended to facilitate the deployment of trustworthy AI applications, promote investment and innovation, and enhance the governance and effective enforcement of existing law in the context of AI.

On 15 June 2023, the legislative train reached its next stop: with the adoption of its negotiating position, the European Parliament opened the way for trilogue discussions with the Council of the EU, which started the same day. These are expected to be finalised by the end of the year so that the implementation phase can begin in early 2024. As this is a completely new regulatory process, it will probably take two to three years until the AI Act is fully applicable. Until then, a voluntary code of conduct, planned to be developed jointly by the EU and the US, should bridge the regulatory gap.

Both the European Parliament and the Council have brought forward various critiques and adaptations they want to see in the regulation. The most prominent Parliament demand is certainly the full prohibition of predictive policing systems and of biometric identification systems in public spaces, with an exemption for the latter only where law enforcement authorities need them to prosecute serious crimes, subject to judicial authorisation. The Parliament also wants a full ban on emotion recognition systems in law enforcement, border management, workplaces and educational institutions. In addition, it wants AI applications that could harm health, safety, fundamental rights or the environment, including those influencing voters and the outcomes of elections, as well as the recommender systems of social media platforms with more than 45 million users, to be classified as high-risk AI. Furthermore, the Parliament asks for an obligation to disclose all AI-generated output.

The Council of the EU, which adopted its position already in December 2022, also has a set of adaptations it wants to see in the regulation. On the core points, these match the changes the Parliament wants to introduce. While clarifying and narrowing down definitions throughout the proposal, the Council wants to extend some requirements planned only for high-risk AI to general-purpose AI as well. Moreover, the Council wants to simplify the compliance framework and the provisions for market surveillance while strengthening the autonomy of the AI Board.

Both institutions want to keep research and open-source AI outside the scope of the AI Act, thereby safeguarding AI systems developed solely for scientific research and their advancement, so as not to stifle progress in academia or introduce a barrier to development and innovation. For the same reason, the Parliamentarians want to invest more in AI research and foster joint work between AI developers, academics, experts in inequality and non-discrimination research, and wider society. The Council, meanwhile, asks for increased support for innovation and more proportionate caps on administrative fines for SMEs and start-ups. Both the Council and the Parliament support the introduction of so-called regulatory sandboxes, which would allow innovative AI systems to be tested under real-world conditions before they are deployed.

Researchers, and especially innovation actors, do not fully agree on whether the regulation will be beneficial or detrimental. Work on applications in the fields of biometric or emotion recognition that could at some point head for the market would be particularly affected if they are intended for use in public spaces; in such cases, third-party funding for institutions and companies would surely be affected as well. Nevertheless, for applications developed for controlled settings, with the informed consent of the users evaluated by the AI, the regulation seems to introduce no boundaries at all. The added transparency could even lead to more confidence and trust in the systems used.

Overall, the AI Act aims to create a consumer- and society-friendly environment and thereby establish the first regulatory framework for Artificial Intelligence. Opinions on the Act are divided: for some it is not fast or strict enough, for others it is too far-reaching. Bearing in mind the warnings by leading AI experts, including the CEO of OpenAI and the head of Google's DeepMind, and the open letter from scientists and executives in the AI industry calling for a six-month pause in research and a moratorium, the AI Act seems to come at the right time.