The challenge of regulating AI in an ethical way

The discussion on ethical AI is focussing on regulation. A briefing by the EP Think Tank and a study by ETH Zurich highlight challenges and priorities.

The High-Level Expert Group on Artificial Intelligence (AI) of the European Union (EU) published its first Ethics Guidelines for Trustworthy AI in April 2019 (see SwissCore article). For these guidelines, the group chose a ‘human-centric’ approach that respects EU values and principles. Since then, the discussion on AI and its societal impact has increasingly focussed on the need for regulation. Recently, European Commission (EC) President-elect Ursula von der Leyen announced further EC legislative proposals for a coordinated European approach to the human and ethical implications of AI.

On 19 September, the European Parliament (EP) Think Tank released a new briefing on the context and implementation of EU guidelines on ethics in AI. The paper explains the steps taken during the past two years and sheds light on the ethical rules recommended for designing, developing, deploying, implementing or using AI in the EU. Further, the authors identify the challenges associated with implementing the guidelines published in April, such as the need for clarification, the lack of regulatory oversight and the necessity of coordinating actions with related activities in the member states. The Think Tank also presents further EU actions, ranging from legal guidance to standardisation in specific sectors or for particular applications. As a first step, stakeholders are invited to provide feedback by the end of 2019 on the practical implementation of the guideline requirements published in April. Finally, the briefing gives an overview of ethical frameworks for AI in the United States and China.

The Think Tank briefing is not the only recent publication on the topic; the discussion surrounding AI regulation has international dimensions, but is increasingly dominated by richer countries. A recent study by ETH Zurich in Switzerland counted 84 groups suggesting ethical AI principles, most of them from Europe, the United States and Japan, but none from the global South (Science Business reports). The findings hint at an imbalance and a lack of integration of different cultural backgrounds in AI treaties and regulations. Some countries and their attitudes towards AI may thus simply be left out if they do not contribute their own suggestions.

The study, published in Nature Machine Intelligence on 2 September, shows that most guidelines are vague and contain no definite agreements. However, the Group of the 20 largest industrialised nations (G20) was at least able to agree on a set of non-binding principles, and some governments are now pushing for an international expert panel on AI analogous to the Intergovernmental Panel on Climate Change (IPCC). To make AI regulations more inclusive, the researchers conclude that the process of formulating them should be more bottom-up.

As a final step, the study distilled five general principles for how AI should work: transparency (understanding how AI works); justice, fairness and equity; non-maleficence in the application of AI; responsibility and accountability (who takes responsibility when something goes wrong); and, lastly, privacy. However, the researchers also point out that very few of the guidelines they analysed mentioned sustainability as an important criterion. In the light of the UN Sustainable Development Goals (SDGs), this may have to change.