Human-Centric Artificial Intelligence for Europe

Several key documents on the future of artificial intelligence (AI) in Europe were published ahead of the Digital Day in Brussels.

A first draft of the ethics guidelines for trustworthy AI, elaborated by the High-Level Expert Group (HLG) on AI, was published in December 2018. The guidelines were then refined on the basis of a first call for feedback, which received more than 500 comments by 18 January 2019. The report acknowledges that AI carries potential risks and negative impacts that are often difficult to anticipate, identify or measure (e.g. impacts on democracy, the rule of law or the human mind itself). Hence, all AI systems that are developed, deployed and used must adhere to basic ethical principles: respect for human autonomy, prevention of harm, fairness and explainability. Seven key requirements were defined that any trustworthy AI system must fulfil:

  • Human agency and oversight
  • Technical robustness and safety
  • Privacy and data governance
  • Transparency
  • Diversity, non-discrimination and fairness
  • Societal and environmental well-being
  • Accountability

An assessment list integrated into the report is intended to help verify whether AI systems fulfil these conditions. A second feedback round on how to further improve the assessment list will follow in a piloting phase. This piloting process will be kicked off in summer 2019, but interested stakeholders can already register their interest to participate. In addition, a forum was set up to exchange best practices. In early 2020, the HLG will review the assessment list based on the input received; this shall feed into the next steps taken by the European Commission (EC).

The HLG on AI also worked on a definition of AI, clarifying aspects of AI as a scientific discipline and as a technology respectively. These clarifications shall facilitate the discussion on the AI ethics guidelines and on other AI policy recommendations. The definition previously put forward by the EC in its Communication on Artificial Intelligence for Europe is limited to AI systems and does not specify how AI as a scientific discipline comprises approaches and techniques such as machine reasoning, machine learning and robotics.

Alongside the reports by the HLG on AI, the EC published a Communication on human-centric AI. The Communication explains why the EC commissioned the HLG to develop ethics guidelines building on the existing regulatory framework. According to the EC, these guidelines shall be applied by all developers, suppliers and users of AI in the internal market. The Communication emphasises that AI is not an end in itself but a tool operating in the service of humanity and the public good, and it regards AI trustworthiness as a prerequisite for people to reap the benefits of the technology. The next steps for the third quarter of 2019 include the launch of a set of networks of AI research excellence centres under Horizon 2020. These centres shall focus, for example, on major scientific or technological challenges such as explainability and advanced human-machine interaction. Upcoming calls will fund Digital Innovation Hubs focusing on AI in manufacturing and on Big Data. Furthermore, the EC will begin preparatory discussions on developing and implementing a model for data sharing and on making best use of common data spaces. These common data spaces shall play a major role in the framework of the Digital Europe programme under the next multiannual financial framework (MFF).