Can’t trust this
Common European action needed to challenge disinformation
With under 80 days to go until the European Elections, disinformation and the integrity of electoral democracy seem to be on everyone’s lips. This week saw the Washington DC-based Atlantic Council bring its #DisinfoWeek conference to Brussels, while the Martens Centre and the Antall József Knowledge Centre co-hosted an event on information security in Europe. On top of this, the European Commission is currently gearing up for its 2019 Media Literacy Week later this month, attempting to raise awareness and promote existing national initiatives before the elections in May. The problem has been identified and clearly documented, but how to respond to the challenge posed by malevolent disinformation campaigns remains an open question.
The Commission defines disinformation as ‘verifiably false or misleading information that is created, presented and disseminated for economic gain or to intentionally deceive the public, and may cause public harm’. Whether disinformation can be disseminated purely for political gain is left unanswered by the Commission. Currently, the EU’s proposed Action Plan against Disinformation consists of four pillars: strengthening information security capabilities, coordinating joint responses, mobilising private sector actors, and improving media literacy.
The unprecedented speed with which modern (dis-)information spreads creates a need for governments, civil society organisations, and companies to collaborate in an effort to improve the public’s resilience in the face of disinformation campaigns. In addition, the cross-border nature of digital information platforms calls for a common European approach to this challenge, even though the organisation and protection of elections remain national competencies within the EU. And a response is certainly needed: according to a Eurobarometer survey published in March 2018, 83% of Europeans think that “fake news” is a threat to democracy. It is therefore crucial for the EU to strengthen its cybersecurity capacities and enable the sharing of best practice between member state authorities in responding to disinformation campaigns.
It is important, however, to understand that much disinformation is not simply false information from shady sources so much as an intentional misrepresentation of information from credible sources, as explained by Chloe Colliver of the Institute for Strategic Dialogue at #DisinfoWeek Brussels 2019. The issue is highly complex: often the information being shared is not purposefully false but merely framed to fit a specific, problematic narrative. To make matters more complicated, those who share this information are mostly domestic actors, ordinary citizens, rather than extremists seeking to cause societal harm. Thus, the questions of who and what to regulate become increasingly difficult the better we understand the disinformation sphere.
Furthermore, not all disinformation is equally harmful or disruptive. Whereas some conspiracy theories (e.g. flat earth theories) are harmless to the world around them, many other disinformation campaigns carry the potential to incite violence (e.g. malevolent anti-refugee narratives) or pose serious risks to public safety (e.g. anti-vaccination campaigns). Keeping this in mind, regulators can sidestep many of the accusations of silencing free speech by regulating not offensive content, but content that is harmful to the safety and well-being of others. Curbing liberal values and the diversity inherent in democratic debate is not being advocated here; indeed, that is precisely what the malicious actors fuelling disinformation campaigns want.
Then we have the aims of improving media literacy and fact-checking systems. Often posited as alternatives to regulating social media platforms, the former can unfairly place an overwhelming share of responsibility on individuals in an increasingly complicated field, while the latter can be almost invisible in a world of social media algorithms that favour sensationalism and echo chambers. Fact-checking is not a viable option when those within our digital communities reinforce opinions originally based on disinformation. Simply put, facts don’t change our minds. Media literacy, on the other hand, is absolutely crucial and must be robustly introduced into school curricula across Europe. At the same time, however, improving media literacy is a long-term and expensive solution, and it requires government regulation, NGO campaigns, and corporate responsibility to accompany its growing effect.
We cannot expect a strengthened communication effort launched less than three months before the European Elections to adequately safeguard the quality of public debate and the integrity of the electoral process, when the seeds of the trust-corroding anti-EU message have been sown consistently over years. Raising awareness through fact-checking and building media literacy through education are long-term projects that demand constant focus, not merely periodic attention during election campaigns. The current election-to-election focus risks losing sight of the long-term dangers posed by disinformation and can inhibit us from making the necessary, sustainable action plans.
All in all, the issue of disinformation is here to stay and requires long-term efforts to be adequately challenged. Elections provide fertile ground for fake news and intentionally misrepresented content to flourish, but to tackle the dangers of disinformation most effectively, the EU must commit to improving public debate and safeguarding democracy outside of campaign season as well.