Following the wave of disinformation that accompanied the COVID-19 pandemic, the EU is taking action to prevent the spread of misleading content.
Disinformation and misinformation have been a rising concern for scientists and policy-makers in recent years. The COVID-19 pandemic, however, brought a new dimension to their harmful impact with the so-called infodemic of spring 2020. The term describes the excess of information, including false and misleading information, that accompanied the COVID-19 outbreak. In response, the EU has deployed considerable efforts to understand and counter misinformation practices.
First, a little bit of vocabulary. Although the two terms are often used interchangeably, misinformation and disinformation describe two different practices. Misinformation is false information that is spread without the intention to mislead. It travels rapidly and easily through social media, as it is often sensational and users can share it without checking its accuracy. Disinformation is misinformation that is intentionally created and spread to mislead. It spreads as quickly as, if not more quickly than, plain misinformation and can be very destructive if it targets a government, for example. On top of that, authors of misleading content may misuse or distort scientific results to fit their narrative, blurring the line between fact and speculation. This happens especially in times of crisis such as the COVID-19 pandemic, when the scientific community cannot yet speak with a unified voice because established facts are still lacking, creating uncertainty. Despite the difference between the two, measures that prevent misinformation should also be effective against disinformation.
As a first step to address misinformation, the EU promotes research on the phenomenon through projects such as Enlightenment 2.0 by the Joint Research Centre. Several associations, such as the EU Disinfo Lab, also work on monitoring and understanding misleading practices. Three main approaches may help prevent the spread of misinformation: (i) finding and punishing authors, (ii) monitoring and taking down misleading content, and (iii) understanding human behaviour online and promoting digital skills accordingly. The first approach is almost impossible to follow in practice, as authors of disinformation rarely use their real names online and may be located anywhere in the world. The European Commission's efforts therefore focus on the other two approaches.
To monitor and take down misleading content, the EC needs to involve digital platforms. To that end, the new Digital Services Act (DSA) includes obligations for platforms to put in place measures combating misinformation, with heavy fines for non-compliance. These measures should build on the Code of Practice on Disinformation, which sets out self-regulatory standards signed by several major platforms, including Facebook, Twitter and TikTok. Several stakeholders, however, have criticised the obligations in the DSA as insufficient to make a significant difference.

Another key issue for content moderation is the ambivalent role of AI recommendation systems. Most recommendation algorithms are currently designed to keep users online and generate clicks, so misinformation, which generates both attention and clicks, is likely to be amplified by them. At the same time, algorithms are essential to track harmful content, because human content moderation cannot scale. Transparency of AI algorithms is therefore essential to guarantee adequate monitoring. The obligations on providers and users of AI systems in the AI Act may help ensure the quality of recommendation algorithms, provided such systems fall under one of the risk categories defined in the proposed regulation.
Tackling misinformation also comes down to understanding human behaviour online and equipping users with the appropriate skills. Cognitive behaviour online is inherently different from offline behaviour, which makes digital education crucial: users must learn how to detect misinformation, how to verify a piece of information and how to distinguish fact from speculation. To that end, the Digital Education Action Plan (DEAP) identified digital literacy as one of the most important digital skills of the 21st century. As part of the DEAP’s actions, an ‘Expert Group on Tackling Disinformation and Promoting Digital Literacy through Education and Training’ will support the creation of comprehensive guidelines for teachers and educators on tackling disinformation and promoting digital literacy in schools and training centres.
The expert group’s main tasks will be to assist the EC in preparing legislative proposals and policy initiatives, defining digital literacy, collecting resources on existing recommendations, and developing ways to improve didactic knowledge about disinformation.
With these various initiatives, the EC hopes to curb the proliferation of misleading content online while respecting freedom of speech and opinion. A further objective is to prevent future infodemics in times of crisis and to promote greater digital literacy across the European population.