Tag: Disinformation

Online wars

  • August 2020
  • Ioanna Karagianni


Disinformation and the need for unity

Source: The Conversation

The widespread use of the Internet and of online platforms has unavoidably created a whole new reality, facilitating the free exchange of ideas and widespread information sharing and helping democracies to flourish. At the same time, however, this very free and uncontrolled flow of information has given rise to adverse phenomena such as online disinformation and fake news, which the current Covid-19 pandemic has only exacerbated.


The new wave of massive disinformation is unfolding in a global contest between authoritarianism and liberal democracy. For authoritarians, information is a tool of control and manipulation; for democrats, its free and open exchange is cherished and critical to a fully functioning democracy.


An online war has started, with the Internet as the battlefield and information as the weapon. As in any war, it is not only about the end goal (i.e. the content transmitted) but about the winner who takes it all (i.e. whoever gains the greater control over those who transmit the information). It is also about who will lead (i.e. who will govern information), and this is the biggest fight of all: between securing trustworthy information that can save our democracies and allowing authoritarians to set the rules and obscure them.


This reflects states’ anxiety to prove who is best. As the EU’s High Representative has put it, we are currently experiencing ‘a global battle of narratives’. The pandemic has not only set the global powers contesting over who has the best political and healthcare systems, but has also made them vie for the leader’s title using tactics such as news manipulation and the spreading of online disinformation (Russia and China, for example, have been torchbearers here, aggressively seeking to demonstrate their superiority in handling Covid-19).


A prominent example of this strategy has been China’s response to Serbia’s praise for its medical assistance during the Covid-19 crisis. The Serbian President called his Chinese counterpart a true friend of the Serbian people and even kissed the Chinese flag, while accusing the EU of hypocrisy in matters of assistance. Targeting Western audiences, Chinese state media immediately and widely disseminated this, without any reference to the EU’s investment in the Serbian health sector. EU officials have characterized this as a ‘politics of generosity’ and have repeatedly warned against Beijing’s propaganda campaigns.


Global phenomena call for multilateral action


On this fragile and contested geopolitical stage, democratic allies need to stand together to protect their citizens and uphold democracy. The prime focus must be maintaining truth and transparency in messaging, so it is vital to detect and analyze the underlying frameworks and business models that enable this surge of disinformation. Even though online platforms tend to avoid explicit content moderation on the grounds that they are not publishers, some moderation of extreme and polarizing content, such as white supremacist material or fake news stories, is needed – consumers themselves have in fact shown their desire for it.


A strong information strategy needs to be adopted urgently. It should encourage the dissemination of quality information from trusted sources, preventing the creation of information voids and stopping adverse strategies from taking the lead. As privatizing content moderation without a meaningful transparency mechanism can lead to censorship, a broad range of stakeholders (e.g. journalists, civil society, academia, and politicians) should be actively engaged in regulating content distribution. Furthermore, science and reliable media sources need to be promoted more strongly to rebuild trust at both the national and the EU level.


At the EU level specifically, the Union needs to avoid being technocratic in its approach and become more political, understanding in full its power and potential as a global actor. The new European Commission needs to stick to its plan for the Digital Services Act, while the Member States and the EU could, for instance, provide more funds for maintaining and verifying trustworthy online material (following the example of the online encyclopedias in Norway and Latvia). The Centre for International Governance Innovation has proposed a further interesting approach: instead of trying to build people’s online ‘immunity’ with broad-based educational initiatives, “inoculating key points (people) within a network is likely more effective and cheaper. Targeted engagement with individuals at the centre of networks (high network centrality scores, in social network analysis terms) could help promote immunity of the herd and reduce receptivity to fake content”. A lot can be done, but we need unity, strong support, and trust in the experts and in the EU.


This battle of narratives is, in essence, a battle for power: disinformation is nothing new, but the way information is weaponized today to create division and uncertainty is, and it demands novel ways of handling. At the same time, unity among democratic states and their multilateral action are urgently needed to combat these phenomena and to protect citizens as they reap the benefits of the online world. It is high time we enjoyed the perks of digital technologies safely, while revitalizing our democracies.


Can’t trust this

  • March 2019
  • Otto Ilveskero



Common European action needed to challenge disinformation


Source: Pixabay


With under 80 days to go until the European Elections, disinformation and the integrity of electoral democracy seem to be on everyone’s lips. This week saw the Washington, DC-based Atlantic Council bring its #DisinfoWeek conference to Brussels, while the Martens Centre and the Antall József Knowledge Centre co-hosted an event on information security in Europe. On top of this, the European Commission is currently gearing up for its 2019 Media Literacy Week later this month, attempting to raise awareness and promote existing national initiatives before the elections in May. The problem has been identified and clearly documented, but how to respond to the challenge posed by malevolent disinformation campaigns remains an open question.


The Commission defines disinformation as ‘verifiably false or misleading information that is created, presented and disseminated for economic gain or to intentionally deceive the public, and may cause public harm’. Whether disinformation can be disseminated purely for political gain is left unanswered by the Commission. Currently, the EU’s proposed Action Plan against Disinformation rests on four pillars: strengthening information security capabilities, coordinating joint responses, mobilising private sector actors, and improving media literacy.


The unprecedented speed with which modern (dis)information spreads creates a need for governments, civil society organisations, and companies to collaborate to improve the public’s resilience in the face of disinformation campaigns. The cross-border nature of digital information platforms also calls for a common European approach to this challenge, even though the organisation and protection of elections remain national competencies within the EU. And a response is certainly needed: according to a Eurobarometer survey published in March 2018, 83% of Europeans consider “fake news” a threat to democracy. It is therefore crucial for the EU to strengthen its cybersecurity capacities and enable the sharing of best practice between member state authorities responding to disinformation campaigns.


It is important, however, to understand that much of disinformation is not simply false information from shady sources so much as the intentional misrepresentation of information from credible sources, as Chloe Colliver of the Institute for Strategic Dialogue explained at #DisinfoWeek Brussels 2019. The issue is highly complex: often the information being shared is not purposefully false but merely framed to fit a specific, problematic narrative. To make matters more complicated, those who share this information are mostly domestic actors – ordinary citizens – rather than extremists seeking to cause societal harm. Thus, the questions of who and what to regulate become more difficult the better we understand the disinformation sphere.


Furthermore, not all disinformation is equally harmful or disruptive. Whereas some conspiracy theories (e.g. flat earth theories) are harmless to the world around them, many other disinformation campaigns carry the potential to incite violence (e.g. malevolent anti-refugee narratives) or pose serious risks to public safety (e.g. anti-vaccination campaigns). Keeping this in mind, regulators can sidestep many of the accusations of silencing free speech (which is not being advocated here – curbing liberal values and the diversity inherent in democratic debate is exactly what the malicious actors fuelling disinformation campaigns want) by regulating not offensive content, but content that is harmful to the safety and well-being of others.


Then we have the aims of improving media literacy and fact-checking systems. Often posited as alternatives to regulating social media platforms, the former can unfairly place an overwhelming share of the responsibility on individuals in an increasingly complicated field, while the latter can be almost invisible in a world of social media algorithms that favour sensationalism and echo chambers. Fact-checking is not a viable option when those within our digital communities reinforce opinions originally based on disinformation. Simply put, facts don’t change our minds. Media literacy, on the other hand, is absolutely crucial and must be robustly introduced into school curricula across Europe. At the same time, however, improving media literacy is a long-term, expensive solution and requires government regulation, NGO campaigns, and corporate responsibility to accompany its growing effect.


We cannot expect a strengthened communication effort launched less than three months before the European Elections to adequately safeguard the quality of public debate and the integrity of the electoral process, when the seeds of the trust-corroding anti-EU message have been sown consistently over the years. Raising awareness through fact-checking and building media literacy through education are long-term projects which we must pursue constantly, not just periodically in times of election campaigns. The current election-to-election focus risks losing sight of the long-term dangers posed by disinformation and can keep us from making the necessary, sustainable action plans.


All in all, the issue of disinformation is here to stay and requires long-term efforts to be adequately challenged. Elections provide fertile ground for fake news and intentionally misrepresented content to flourish, but to tackle the dangers of disinformation most effectively, the EU must commit to improving public debate and safeguarding democracy outside the campaign season as well.


EU vs. fake news

  • December 2018
  • Otto Ilveskero


The Commission’s action plan to combat disinformation is not enough


Source: Nick Youngson | The Blue Diamond Gallery

Did you know that Poland is leaving the EU? Or that there are 20,000 armed migrants getting ready to attack the EU? According to one source, the EU is planning to turn Ukraine into a new Afghanistan for Russia.


This year alone, the European External Action Service’s (EEAS) EU vs Disinformation campaign has recorded (at the time of writing) 971 individual cases of pro-Kremlin disinformation. In total, the campaign has recorded 459 pages of disinformation cases on its website since January 2015. As defined by the European Commission, the term describes ‘verifiably false or misleading information that is created, presented and disseminated for economic gain or to intentionally deceive the public’. It is distinguished from propaganda in that it does not attempt to convince us of something but to deceive us into believing it, obscuring the truth through the relentless promotion of falsehoods. Its function is to muddle our perception of truth and lies, fact and fiction. Disinformation lies at the heart of post-truth discourse.


In the 1980s, according to the (“failing” – yet another disinformation trope) New York Times’s brilliant Operation Infektion video series, the United States attempted to monitor and respond to Soviet disinformation with the Active Measures Working Group, consisting of part-time employees. Meanwhile, the Soviet Union and its KGB-centred security apparatus employed around 15,000 people, with a multimillion-dollar budget, working on “active measures” (covert operations) – the most significant of which was creating and spreading false information. Every KGB agent was reportedly required to spend 25% of their time inventing fake news. Over the years the Soviet-conceived conspiracies included such gems as the CIA killing John F. Kennedy, the US Government creating AIDS, and rich Americans buying poor children from Latin America to harvest their organs, all of which were spread to global news sources. And when the US “truth squad” was established in 1981, it lacked the budget and time to sufficiently challenge the staggering amount of dangerous nonsense churned out in the cellars of Lubyanka.


And yet we have learned surprisingly little over the years. The Internet Research Agency (IRA), also known as the “troll factory”, is thought to employ around 800 people and to conduct its operations from the Lakhta-2 business centre in north-east St. Petersburg, to which it is said to have moved much of its operations earlier this year from 55 Savushkina Street. The IRA runs its operations 24 hours a day, rotating workers in two 12-hour shifts. According to interviews with former employees, they are expected to post at least 50 comments on news articles a day, maintain six Facebook accounts publishing at least three times a day alongside discussing news in groups, and operate 10 Twitter accounts with a total of 50 tweets a day. In addition, websites such as the IRA-affiliated USA Really publish on average around 10 English-language articles a day. On top of this, we have the more traditional pro-Kremlin news sources such as the RT TV network and the Sputnik news agency, which are clearly separate from the IRA but nonetheless crucial elements of the fake news machine.


At a time when distributing (false) information has become easier than ever before, Western democracies continue to lag significantly behind in modern information operations and responses to disinformation. And we have all seen the division and mistrust that online foreign influence operations can sow in free elections. Recently, for example, disinformation operations have run rampant around the “gilets jaunes” demonstrations in Paris and the Kerch Strait confrontation between Ukraine and Russia.


Making its move, the European Commission presented its new 12-page disinformation action plan on Wednesday (5 December). Unveiled by Commissioners Andrus Ansip, Věra Jourová, Julian King, and Mariya Gabriel, the plan relies on four pillars: improved capabilities, enhanced reporting, private sector commitments, and societal resilience (e.g. media literacy). To strengthen these foundations, the Commissioners proposed a €3.1 million increase to the EEAS’s current €1.9 million allocation for tackling disinformation. Yet even with this hefty bonus, the EU would be challenging false stories in 2019 with an allowance representing only around 0.5% of Russia’s estimated €1.1 billion pro-Kremlin media budget. Unsurprisingly, Mr Ansip admitted just a day later that the plan is not enough.


Alongside increased funding and calls for a unified member state response, the plan relies strongly on the cooperation of social media companies. But the self-regulatory Code of Practice to tackle fake accounts, monitor disinformation, and make political advertising more transparent is simply not enough. Companies such as Facebook and Twitter have so far been slow to introduce new measures to tackle the intentional distribution of disinformation and improve ad transparency. They have also been unwilling to share their data with independent fact-checkers and media experts who could monitor and scrutinise fake news campaigns. In addition, the difficulty of distinguishing between misinformed free speech and corrupt disinformation operations intended to deceive has made combating online falsehoods harder still. After all, the EU’s response to fake news cannot impinge on civil liberties and liberal values.


Time is running out for the 10-step action plan to be operational ahead of the 2019 European Elections, which the proposal indicates as its first test. European leaders will discuss the plan at next week’s European Council summit, and although the current plan is hopelessly inadequate for its Herculean task, it is the only one we have at the moment. It is time we recognised our own vulnerabilities.


Walking a Fine Line

  • June 2018
  • Natalia Domingo


Combating disinformation and upholding freedom of speech in France

Nowadays, as I scroll through my Facebook newsfeed on the tram to work, I can almost always expect to come across the most ridiculous (yet attention-grabbing) headlines, from “Outbreak of Nuclear War” to “Russian hackers used Tumblr to spread ‘fake news’ during US elections.” Even the comment sections on these Facebook posts are entertainment in themselves, given the amount of ludicrous ideas people share on these platforms. Sometimes I’m tempted to waste that 30-minute ride reading through the comments, curious about what people have to say these days. Other times, I know better than to do something that will trigger the desire to debate with them. The reality is that, 99% of the time, that debate will go absolutely nowhere, regardless of how wrong they are. Some people use the internet to exchange ideas with others in an attempt to enhance their knowledge. Others use it close-mindedly, hoping to impose their ideas on whoever is willing to agree.


Citizens’ growing use of technology for sharing and finding information has given the internet an important role as a forum for political debate. The internet has granted us the ability to spread ideas quickly to a mass audience, and it is reshaping the way citizens choose to participate in democratic societies. However, while the internet, and social media specifically, have allowed the various voices in society to be heard, they have also enabled the spread of disinformation. This is especially true during election campaign season, when persuading the public to believe one opinion over another is ever more crucial.


The fabrication of disinformation aimed at democratic society and the electoral system increased during the United States’ 2016 presidential election, when foreign actors created fake social media accounts to sway public opinion, and France’s 2017 presidential election, when rumors were fabricated about Macron possessing offshore bank accounts. As a result, President Emmanuel Macron has embarked upon legislation to combat the spread of “fake news” during elections by issuing license suspensions for foreign-sponsored media outlets, requiring the disclosure of sponsors’ identities, and restricting the amount of money these sponsors can contribute. Additionally, the new law would allow an individual to refer cases to a judge more easily.


While it is important for the public to have access to accurate information, it is also well known that governments have in the past used censorship to silence media outlets and journalists who took oppositional stances against officials. I’m sure we all remember Trump’s accusations of “fake news” against media companies, such as CNN, every time he was criticized. But citizens have a right to criticize their government, and the ambiguity in how national legislation defines “fake news” poses a potential threat to freedom of speech and freedom of the media. Macron is therefore unlikely to be able to implement this sort of legislation without some resistance.


In order to prosecute someone for spreading disinformation, it must be proven that the producer knowingly developed and shared inaccurate information with malevolent intent. But allowing the government to decide whether someone is exercising their freedom of speech or attempting to disturb the public peace is a tricky task, and one easily abused. Even if a government can sufficiently draw the line between fake news and free speech, the reality is that there is only so much it can do to control the information a person is exposed to. Many producers of fake news are technologically clever and can find cracks in the system to avoid punishment, for instance by publishing under anonymous accounts, since the government cannot prosecute an individual whose identity is unknown.


Disinformation therefore cannot be resolved with a legal solution alone; it requires an inclusive effort between governments, corporations, and citizens. Combining the efforts of these three major players can ensure diligence and enhance transparency. The majority of social media platforms hire staff tasked with reviewing content deemed inappropriate. These teams hold a responsibility to the public, similar to that of the government, for ensuring a safe online environment by filtering disinformation from credible information. Their participation can also generate more accountability in censorship efforts, because third parties are assumed to be impartial toward the government.


We should not, however, be naïve about the fact that in some cases these third parties may fail to fulfill this role, as recently seen with Facebook. Thus we, the citizens, have the most significant role: while it is our right to express freely how we feel, it is also our duty to use these outlets in a way that is respectful of others’ opinions, as well as within the confines of the law. Most importantly, we have a responsibility to ourselves to separate credible information from disinformation for the sake of our intellectual growth, which requires access to accurate information.


The European Commission’s Code of Practice to combat disinformation emphasizes media literacy, which can teach Europeans to assess information more critically and to identify disinformation and unreliable news outlets. Media literacy seems generally high among most internet users: a Reuters study found that people in France spent 10 million minutes per month on fake news websites, whereas 178 million minutes were spent on Le Monde, a more reliable news outlet. While interactions with fake news and credible news on social media are less clear-cut, it is evident that people spend more time on the latter than the former.


Disinformation is definitely something to be wary of in a world where anyone can post anything their imagination can construct. However, President Trump’s excessive use of the term “fake news” has made the threat seem greater than it really is. Despite the increase of disinformation online, a majority of people still seem to resort to trustworthy sources. This means disinformation’s reach is less widespread than governments perceive, and new legislation may be unnecessary. But that does not mean we should scrap the effort as a whole. Some people remain vulnerable to disinformation, so media literacy should be an ongoing effort, rather than an area of concern primarily during election season. If France were to pursue longer-lasting efforts such as increased media literacy, rather than resorting to censorship laws, it could avoid walking the fine line of defining freedom of speech versus disinformation.
