
Tackling the infodemic threatening modern democracies

10 October 2024

“Fake news spreads faster and more easily than the virus, and is just as dangerous.”

This sentence, spoken in 2020 during the COVID-19 pandemic by the Director-General of the World Health Organization (WHO)1, remains relevant today. It applies to the Mpox epidemic (declared a public health emergency of international concern), which has become a target of fake news widely spread online.

While the inherent dangers of an infodemic2 are particularly highlighted during times of crisis, these concerns tend to fade when the crisis subsides. The spread of health-related fake news occurs every day on social media platforms, whose very structure promotes the visibility of harmful content.

FROM HEALTH MISINFORMATION TO DEMOCRATIC THREAT

The spread of health-related fake news is significantly amplified in the digital era. Although it is not a new phenomenon, it is now estimated that fake news spreads six times faster than reliable information3.

The internet and social media have drastically altered our relationship with information, creating what is now referred to as the “contemporary information chaos”4. The impact of harmful information can extend beyond the virtual realm: what is problematic online has the potential to become extremely damaging offline (as illustrated by the U.S. Capitol attack in January 2021, which followed the massive spread of fake news on social networks).

Health-related fake news can have consequences not only for individual health or public health (e.g., the Wakefield case), but also for democracies’ health.

Our democracies in particular suffered immensely during the COVID-19 crisis from the spread of fake news, with scientific discourse and government health decisions being called into question and providing the fodder for many online debates. Virtual hostility translated into real-world disturbances: vaccination centers were vandalized, and “violent actions, (…) planned by members of a far-right conspiracy movement against the health minister (…)”5. As exposed by the Parliamentary Office for the Evaluation of Scientific and Technological Choices, the infodemic is “responsible for increased public distrust and a weakening of social cohesion”6. Disregard for the legitimacy of state institutions and their decisions is a particularly tangible danger to democracy, one heightened during health crises.

However, the end of a health crisis means neither the end of fake news nor the disappearance of its creators or its spreading power. A report7 shows, for example, that conspiracist and extremist circles that were active during the COVID-19 pandemic remain active on X (formerly Twitter) among climate denialists. The spread of health-related fake news continues, fueling various conspiracy theories (the most popular of which are health-related8) that will regain influence during the next crisis – which is why we must protect our society against online health-related fake news every day, without waiting for the next severe infodemic to occur.

The dual challenge brought about by health-related fake news – protecting both individuals’ and democracies’ health – highlights the urgency of effectively implementing regulation before the next health crisis.

HOW TO TACKLE THE ROOT OF THE PROBLEM: THE TOXIC DYNAMICS OF SOCIAL MEDIA

The fight against fake news is becoming increasingly complex: beyond the legal considerations related to freedom of expression, we are facing major challenges due to the business model of social media platforms.

Health-related fake news spreads rapidly online due to the very architecture of social media platforms: their business model relies on promoting controversial, hostile, shocking, and toxic content9, which includes fake news. Social media algorithms are the arbitrator of visibility.

Platform algorithms are designed to promote the most profitable content, meaning content that generates the most reactions (clicks, shares, likes, comments); such content is often toxic because it taps into our emotions. Yet, according to some experts, it is this very content that weakens democracy. David Chavalarias, director of the Institute for Complex Systems, stated that “we must choose between the current business model of platforms and democracy”10, adding that “modifying a single line of code on the central server of a private company could literally change the lives of billions of people.”

Social media platforms are not solely responsible for the problem, but their economic model exacerbates the spread of fake news (and consequently, its impact) to such an extent that initiatives like fact-checking or educational content dissemination are insufficient to erase the harmful effects of fake news. The rise of far-right ideologies adds urgency to the problem as their supporters constitute the main source of fake news circulation. Regulation is, de facto, more necessary than ever. It is in this context that the European Union has undertaken work to hold platforms accountable with regulations such as the Digital Services Act (DSA).

REGULATION VERSUS FAKE NEWS

Several texts in the French legal framework can serve as a legal basis for sanctioning the spread of fake news. For example, the penal code condemns the dissemination of false alarms11, while the press freedom law punishes the publication of false news12 and defamation13. The recent “anti-fake news” law, applicable during French election periods, sanctions misinformation likely to impact elections, explicitly mentioning online content (a clarification not present in the aforementioned provisions). There are also health-related provisions established by the law on cults14 and certain articles of the public health code15.

However, these legal provisions face several obstacles that make their enforcement difficult. The plurality of sources, the conditions required for the legal qualification of the offense, the balance to be struck between freedom of expression (which includes the freedom to lie16) and the protection of public order, and the particular nature of online media all contribute to these difficulties.

To tackle online fake news effectively, defining what constitutes punishable content is insufficient. The structure of the platforms where fake news spreads must also be regulated, because it is the very dynamics of online platforms that exacerbate the dissemination of harmful content.

This is the ambition of the Digital Services Act (DSA), a European regulation that holds platforms accountable for their services and the experience they provide to online users. Facilitating reporting, complaint mechanisms, transparency requirements for moderation tools, limiting targeted advertising, and allowing users to configure their recommendation systems are all necessary requirements for a healthy online environment. Through these various provisions, the DSA requires online platforms to improve their dynamics and pushes for more effective moderation of all types of harmful content (harassment, degrading comments, discriminatory statements, or misinformation) on any topic.

The challenge now is to ensure that these internet giants comply with the requirements set by the regulations and that their business model no longer promotes harmful content. The European Commission has exclusive competence in monitoring very large online platforms (VLOPs) such as X, Facebook, Instagram, or TikTok. The role of national digital services coordinators (ARCOM in France) is therefore more limited but no less important. It is absolutely essential that ARCOM be sufficiently equipped to monitor the national platforms under its responsibility and to collaborate effectively with the Commission regarding VLOPs, including in the production of codes of conduct or crisis protocols that reinforce the DSA’s requirements. France must thus allocate the necessary funding, technical capacities, and powers to this independent public authority so that it can establish itself as a strong force in the fight against the spread of fake news and, more broadly, in the regulation of online platforms.

Global Health Advocates strongly encourages making health a cornerstone in the fight against online disinformation. Tackling disinformation must become a top priority of public health policies and it is essential for public authorities to assert their authority in regulating online platforms in order to protect the interests of every citizen.


1 Munich Security Conference, February 15, 2020, full speech available here

2 Overabundance of information, some of which may be misleading or even harmful (WHO press release, “Working together to tackle the infodemic,” June 29, 2020, available here)

3 Salma Benchekroun. Fake news in the health field in the digital age. Pharmaceutical Sciences. 2021. dumas-03426283, citing “The spread of true and false news online,” Science (in French)

4 Report of the Commission on Enlightenment in the Digital Age, January 2022, available here (in French)

5 Report of the Commission on Enlightenment in the Digital Age, January 2022, available here (in French)

6 Report of the Parliamentary Office for the Evaluation of Scientific and Technological Choices, on the side effects of vaccines and the latest developments in scientific knowledge on COVID-19, May 30, 2024, available here (in French)

7 David Chavalarias, Paul Bouchaud, Victor Chomel, Maziyar Panahi. The new fronts of denialism and climate skepticism: Two years of Twitter exchanges under the macroscopes. 2023. ⟨hal-03986798v2⟩ (in French)

8 43% of French people believe that the Ministry of Health is colluding with the pharmaceutical industry to hide the reality of the harmfulness of vaccines from the public: Salma Benchekroun. Fake news in the health field in the digital age. Pharmaceutical Sciences. 2021. dumas-03426283 (in French)

9 Munn, L. Angry by design: toxic communication and technical architectures. Humanit Soc Sci Commun 7, 53 (2020). https://doi.org/10.1057/s41599-020-00550-7 / Bouchaud, P., Chavalarias, D. & Panahi, M. Crowdsourced audit of Twitter’s recommender systems. Sci Rep 13, 16815 (2023). https://doi.org/10.1038/s41598-023-43980-4

10 David Chavalarias (ISC-PIF), Roundtable on foreign influences in the digital space, Inquiry Commission on public policies in response to foreign influence operations, June 4, 2024 (in French)

11 Article 322-14 of the Penal Code (in French)

12 Article 27, Law of July 29, 1881 on press freedom (in French)

13 Article 29, Law of July 29, 1881 on press freedom (in French)

14 Incitement to forego/abstain from medical care, encouragement to adopt risky health practices, illegal practice of medicine or misleading commercial practices committed via the internet

15 Articles R4127-13, R4127-14, R4127-31 and R4127-39 of the Public Health Code (in French)

16 Cour de Cassation, Civ. 1st, April 10, 2013, ruling no. 12-10177