
Tag: Fake News

Online wars

  • August 2020
  • Ioanna Karagianni


Disinformation and the need for unity

Source: The conversation

The widespread use of the Internet and of online platforms has inevitably created a whole new reality: it has facilitated the free exchange of ideas and widespread information sharing, helping democracies to flourish. At the same time, however, this same free and uncontrollable flow of information has produced adverse phenomena such as online disinformation and fake news, and the current Covid-19 pandemic in particular has exacerbated them.

 

The new wave of mass disinformation is unfolding in a global contest between authoritarianism and liberal democracy. For authoritarians, information is a tool of control and manipulation, whereas for democrats the free and open exchange of information is cherished and critical to a fully functional democracy.

 

An online war has started, with the Internet as the battlefield and information as the weapon. As in any war, it is not only about the end goal (the content transmitted) but about who wins, and the winner takes it all (gaining control over those who transmit the information). It is also about who will lead, that is, who will govern information. This is the biggest fight: between securing trustworthy information that can save our democracies, and allowing authoritarians to set the rules and undermine them.

 

This reflects states' struggle to prove who is best. As the EU's High Representative has put it, we are currently experiencing 'a global battle of narratives'. The pandemic has not only set the global powers competing over who has the best political and healthcare system, but has also made them vie for the leader's title using tactics such as news manipulation and the spread of online disinformation (Russia and China, for example, have been torchbearers of this, aggressively seeking to demonstrate their superiority in handling Covid).

 

A prominent example of this strategy was China's response to Serbia's praise for Chinese medical assistance in responding to Covid-19. The Serbian President called his Chinese counterpart a true friend of his people and even kissed the Chinese flag, accusing the EU of hypocrisy in matters of assistance. Targeting Western audiences, Chinese state media immediately and widely disseminated this, without any reference to the EU's investment in the Serbian health sector. EU officials have characterized this as a 'politics of generosity' and have repeatedly warned against Beijing's propaganda campaigning.

 

Global phenomena call for multilateral action

 

In this fragile and contested geopolitical landscape, democratic allies need to stand together to protect their citizens and uphold democracy. The prime focus must be maintaining truth and transparency in messaging, so it is vital to detect and analyze the underlying frameworks and business models that enable this surge of disinformation. Even though online platforms tend to avoid explicit content moderation on the grounds that they are not publishers, some moderation of extreme and polarizing content, such as white supremacist material or fake news stories, is needed – consumers themselves have actually shown their desire for it.

 

A strong information strategy needs to be adopted urgently. It should encourage the dissemination of quality information from trusted sources, to prevent the creation of information voids before adverse strategies take the lead. As privatizing content moderation without a meaningful transparency mechanism can lead to censorship, a broad range of stakeholders (e.g. journalists, civil society, academia, and policymakers) should be actively engaged in regulating content distribution. Furthermore, science and reliable media sources need to be promoted more strongly to rebuild trust at both the national and the EU level.

 

At the EU level specifically, the Union needs to stop being technocratic in its approach and become more political, understanding in full its power and potential as a global actor. The new European Commission needs to stick to its plan for the Digital Services Act, while the EU and its Member States could, for instance, provide more funds for maintaining and verifying trustworthy online material (following the example of the online encyclopedias in Norway and Latvia). The Centre for International Governance Innovation has proposed another interesting approach: instead of trying to build people's online 'immunity' with broad-based educational initiatives, "inoculating key points (people) within a network is likely more effective and cheaper. Targeted engagement with individuals at the centre of networks (high network centrality scores, in social network analysis terms) could help promote immunity of the herd and reduce receptivity to fake content". A lot can be done, but we need unity, strong support, and trust in the experts and in the EU.
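The CIGI proposal can be sketched in a few lines: rank users by a simple centrality measure and engage the highest-scoring ones first. The graph and names below are invented purely for illustration, and degree centrality stands in for whatever measure an analyst would actually choose.

```python
# Illustrative sketch of "inoculating key points (people) within a network":
# rank users by degree centrality and target the most central first.
# The follower graph below is entirely made up for demonstration.

def degree_centrality(graph):
    """Normalized degree centrality: neighbours / (n - 1)."""
    n = len(graph)
    return {node: len(neigh) / (n - 1) for node, neigh in graph.items()}

# Toy undirected follower graph as adjacency sets.
graph = {
    "ana": {"ben", "cal", "dia", "eva"},
    "ben": {"ana", "cal"},
    "cal": {"ana", "ben"},
    "dia": {"ana"},
    "eva": {"ana"},
}

scores = degree_centrality(graph)
targets = sorted(scores, key=scores.get, reverse=True)
print(targets[0])  # prints "ana": she connects to everyone in this toy network
```

In a real setting the graph would come from platform data and a richer measure (betweenness, eigenvector centrality) might be preferred, but the selection logic is the same.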

 

This battle of narratives is, in essence, a battle for power: disinformation is nothing new, but the way information is weaponized today to create division and uncertainty is new, and it demands novel ways of handling. At the same time, unity among democratic states and their multilateral action are urgently needed to combat these phenomena and to protect citizens as they reap the benefits of the online world. It is high time we enjoyed the perks of digital technologies safely, while revitalizing our democracies.


Create, Connect, Engage: Digital Campaigning & Cybersecurity

  • May 2019
  • Daniela Floris


More than ever before, social media and digital tools will impact the 2019 EP elections.

David Timis, on the left, and William Echikson, on the right, with a participant. Photo: Joseph Cochran

 

As the European Parliamentary Elections approach, candidates and political strategists have less than a month left to engage with citizens, mobilize their base and appeal to swing voters. Internet hacks and disinformation represent concrete threats in the run-up to elections, with “fake news” seen as a problem for democracy by 83% of Europeans.

In fact, according to the Parliament’s latest Eurobarometer survey, one third of Europeans of voting age are reportedly not planning to vote, believing that their vote “won’t change anything”. Much institutional and political communication is run on social media, but turning online interaction into active participation remains challenging. Initiatives to mobilize voters, such as #ThisTimeImVoting and #EUandMe, launched by the European Parliament to present information on the election process and to promote the achievements of the EU, have been received positively by users and more traditional media. However, we will have to wait until ballots are cast to assess their impact on voter turnout, which was only 42.6% in 2014.

How to run a successful digital election campaign was the subject of a training session organized by the Centre for European Policy Studies (CEPS), a Brussels-based think tank for EU affairs. The main speaker was David Timis, Google’s EU civic outreach fellow and co-founder of European Heroes, a platform for the civic engagement of young Europeans. The event was moderated by William Echikson, CEPS’ Head of Digital Forum, the Wall Street Journal’s Europe correspondent for decades and a former Senior Manager at Google. During the workshop, Timis shared a few precious tips on how to establish a digital audience and counter cyberattacks. According to Timis, effective digital media campaigns build a strong brand with clear-cut messages, attractive storytelling around the actors involved and a straightforward mission statement. The goal is not only to attract views, but to connect with potential voters or activists and, ultimately, call them to action.

 

Mobile Friendliness

Surfing the internet while on the move is an established habit for most of us. A Eurostat poll shows that 65% of Europeans aged 16-75 use mobile devices to access the internet, proving that TV is no longer the main medium for audiovisual material. YouTube, the video streaming platform acquired by Google in 2006 for USD 1.65 billion, has over 1.8 billion monthly users. Streaming and sharing videos have become increasingly popular, with features like “live” and “stories” incorporated into various social networking platforms. Establishing a YouTube channel is now recommended for businesses and organizations, especially if they wish to attract a young viewership.

Contrary to popular belief, Timis pointed out, producing viral videos does not necessarily require a big budget: filming with a smartphone gives the public a glimpse of spontaneity and casualness. Fast-paced storylines and visual close-ups mimic real-time human interaction, giving audiences the perception of being participants rather than mere viewers. Successful campaigns, such as that of Alexandria Ocasio-Cortez, now a Member of the US House of Representatives, prove that multimillion-dollar funding is not indispensable: support from a tech-savvy team with a sharp social media strategy can lead to outstanding results. Charisma, creativity and a deep understanding of your target audience all pay off in terms of popularity.

Other formats, including “behind the scenes” content, videos featuring supporters or volunteers, and endorsements by influencers, are also important in a communication strategy.

 

Cybersecurity

As the distribution of media has moved increasingly to the internet, the likelihood of sabotage through cyberattacks has grown. Political campaigns are particularly exposed to such threats, with the Brexit referendum and the 2016 U.S. Presidential Election being the most notorious cases. Hacking and “phishing” attempts (fraudulent attempts to steal information while posing as a trustworthy entity) have become the most common attacks but, according to Timis, they can be prevented. The Cybersecurity Campaign Playbook, published by the Harvard Kennedy School and quoted by Timis, suggests that raising awareness is paramount to countering malicious attacks. Simple measures, such as using long, distinct passwords for different accounts, storing sensitive information in cloud services and communicating through encrypted apps, can make the difference. Other risk management tools include using a password manager and two-factor authentication (2FA), in which the second log-in step relies on dedicated apps, safer than text messaging, or on security keys. Several of these tools are free or low-cost, making media literacy a fundamental asset for campaigners and social media managers seeking to prevent data leaks and privacy breaches.
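The first of these measures, a long and distinct password for every account, is easy to automate. The sketch below uses Python's standard `secrets` module for cryptographically strong randomness; the account names are invented for illustration.

```python
# Minimal sketch: generate a long, unique, random password per account,
# so that one leaked credential does not compromise the others.
import secrets
import string

ALPHABET = string.ascii_letters + string.digits  # 62 symbols

def make_password(length=24):
    """24 chars drawn from 62 symbols gives roughly 143 bits of entropy."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

# Hypothetical campaign accounts, each with its own credential.
passwords = {account: make_password() for account in ("mail", "twitter", "cms")}

# Sanity check: every account got a distinct password.
assert len(set(passwords.values())) == len(passwords)
```

In practice a password manager would generate and store these for you; the point of the sketch is only that per-account, high-entropy credentials cost nothing to produce.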

 

A digital future

When asked what the next big thing for the digitalization of Europe was going to be, Timis had no doubt: “They are already here: mobile and video have taken over the way we communicate and connect to each other, bringing parties, organizations and movements closer to the people they interact with.” William Echikson, on the other hand, by virtue of his decades-long experience in reporting on EU affairs and policy research, was more cautious: “for 30 years the Internet has had a sort of free pass. Recent scandals have shown that a more organic approach to regulations and policymaking needs to be implemented. It is a long way to go and it won’t be easy, but we are getting there”.

 

 


Can’t trust this

  • March 2019
  • Otto Ilveskero


 

Common European action needed to challenge disinformation

 

Source: Pixabay

 

With under 80 days to go until the European Elections, disinformation and the integrity of electoral democracy seem to be on everyone’s lips. This week saw the Washington, DC-based Atlantic Council bring its #DisinfoWeek conference to Brussels, while the Martens Centre and the Antall József Knowledge Centre co-hosted an event on information security in Europe. On top of this, the European Commission is currently gearing up for its 2019 Media Literacy Week later this month, attempting to raise awareness and promote existing national initiatives before the elections in May. The problem has been identified and clearly documented, but how to respond to the challenge posed by malevolent disinformation campaigns remains an open question.

 

The Commission defines disinformation as ‘verifiably false or misleading information that is created, presented and disseminated for economic gain or to intentionally deceive the public, and may cause public harm’. Whether disinformation can be disseminated purely for political gain is left unanswered by the Commission. Currently, the EU’s proposed Action Plan against Disinformation consists of four pillars: strengthening information security capabilities, coordinating joint responses, mobilising private sector actors, and improving media literacy.

 

The unprecedented speed with which modern (dis-)information spreads creates a need for governments, civil society organisations, and companies to collaborate in an effort to improve the public’s resilience in the face of disinformation campaigns. In addition, the cross-border nature of digital information platforms calls for a common European approach to this challenge, even though the organisation and protection of elections remain national competencies within the EU. And a response is certainly needed: according to a Eurobarometer survey published in March 2018, 83% of Europeans think that “fake news” is a threat to democracy. It is therefore crucial for the EU to strengthen its cybersecurity capacities and enable the sharing of best practice between member state authorities in responding to disinformation campaigns.

 

It is important, however, to understand that much disinformation is not simply false information from shady sources so much as the intentional misrepresentation of information from credible sources, as explained by Chloe Colliver from the Institute for Strategic Dialogue at #DisinfoWeek Brussels 2019. The issue is highly complex: often the information being shared may not be purposefully false but merely framed to fit a specific, problematic narrative. To make matters more complicated, those who share this information are mostly domestic actors, ordinary citizens, rather than extremists seeking to cause societal harm. Thus, the questions of who and what to regulate become increasingly difficult the better we understand the disinformation sphere.

 

Furthermore, not all disinformation is equally harmful or disruptive. Whereas some conspiracy theories (e.g. flat earth theories) are harmless to the world around them, many other disinformation campaigns carry the potential to incite violence (e.g. malevolent anti-refugee narratives) or pose serious risks to public safety (e.g. anti-vaccination campaigns). Keeping this in mind, regulators can sidestep many of the accusations of silencing free speech (which is not being advocated here – curbing liberal values and the diversity inherent in democratic debate is exactly what the malicious actors fuelling disinformation campaigns want) by regulating not offensive content, but content that is harmful to the safety and well-being of others.

 

Then there are the efforts to improve media literacy and fact-checking systems. Often posited as alternatives to regulating social media platforms, the former can unfairly place the overwhelming majority of responsibility on individuals in an increasingly complicated field, while the latter can be almost invisible in a world of social media algorithms that favour sensationalism and echo chambers. Fact-checking is not a viable option when those within our digital communities reinforce opinions originally based on disinformation; simply put, facts don’t change our minds. Media literacy, on the other hand, is absolutely crucial and must be robustly introduced into school curricula across Europe. At the same time, however, improving media literacy is a long-term, expensive solution and requires government regulation, NGO campaigns, and corporate responsibility to accompany its growing effect.

 

We cannot expect that a strengthened communication effort launched less than three months before the European Elections will adequately safeguard the quality of public debate and the integrity of the electoral process, when the seeds of the trust-corroding anti-EU message have been sown consistently over years. Fact-checking and media-literacy education are long-term projects that we must focus on constantly, rather than periodically in times of election campaigns. The current election-to-election focus risks losing sight of the long-term dangers posed by disinformation and can keep us from making the necessary, sustainable action plans.

 

All in all, the issue of disinformation is here to stay and requires long-term efforts to be adequately challenged. Elections provide fertile ground for fake news and intentionally misrepresented content to flourish, but to tackle the dangers of disinformation most effectively, the EU must commit to improving public debate and safeguarding democracy outside the campaign season as well.


EU vs. fake news

  • December 2018
  • Otto Ilveskero


The Commission’s action plan to combat disinformation is not enough

 

Source: Nick Youngson | The Blue Diamond Gallery

Did you know that Poland is leaving the EU? Or that there are 20,000 armed migrants getting ready to attack the EU? According to one source, the EU is planning to turn Ukraine into a new Afghanistan for Russia.

 

This year alone, the European External Action Service’s (EEAS) EU vs Disinformation campaign has recorded (at the time of writing) 971 individual cases of pro-Kremlin disinformation. In total, the campaign has recorded 459 pages of disinformation cases on its website since January 2015. As defined by the European Commission, the term describes ‘verifiably false or misleading information that is created, presented and disseminated for economic gain or to intentionally deceive the public’. Disinformation is distinguished from propaganda in that it does not attempt to convince us of something, but to deceive us into believing it by obscuring the truth through the relentless promotion of falsehoods. Its function is to muddle our perception of truth and lies, fact and fiction. Disinformation lies at the heart of post-truth discourse.

 

In the 1980s, according to the (“failing” – yet another disinformation trope) New York Times’s brilliant Operation Infektion video series, the United States attempted to monitor and respond to Soviet disinformation with the Active Measures Working Group, which consisted of part-time employees. Meanwhile, the Soviet Union and its KGB-centred security apparatus employed around 15,000 people with a multimillion-dollar budget working on “active measures” (covert operations), the most significant of which was creating and spreading false information. Every KGB agent was reportedly required to spend 25% of their time inventing fake news. Over the years, Soviet-conceived conspiracies included such gems as the CIA killing John F. Kennedy, the US Government creating AIDS, and rich Americans buying poor children from Latin America to harvest their organs, all of which were spread to global news sources. And when the US “truth squad” was established in 1981, it lacked the budget and time to sufficiently challenge the impressive amount of dangerous nonsense churned out in the cellars of Lubyanka.

 

And yet we have learned surprisingly little over the years. The Internet Research Agency (IRA), also known as the “troll factory”, is thought to employ around 800 people and conduct its operations from the Lakhta-2 business centre in North East St. Petersburg, where it is said to have moved much of its operations earlier this year from 55 Savushkina Street. The IRA runs its operations 24 hours a day, rotating workers in two 12-hour shifts. According to interviews with former employees, they are expected to post at least 50 comments on news articles a day, maintain six Facebook accounts publishing at least 3 times a day alongside discussing news in groups, and operate 10 Twitter accounts with a total of 50 tweets a day. In addition, websites such as the IRA-affiliated USA Really publish on average around 10 English-language articles a day. Then, on top of this, we have the more traditional pro-Kremlin news sources such as the RT TV network and Sputnik news agency, which are clearly separate from the IRA but nonetheless crucial elements of the fake news machine.
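Taking the reported quotas at face value, a back-of-the-envelope tally (an illustrative calculation, not a figure from the interviews themselves) gives a sense of one worker's minimum daily output:

```python
# Tallying one IRA worker's reported minimum daily quota,
# using only the figures cited from the former-employee interviews.
comments = 50        # at least 50 comments on news articles per day
facebook = 6 * 3     # six Facebook accounts, at least 3 posts each
tweets   = 50        # 50 tweets per day across 10 Twitter accounts

total = comments + facebook + tweets
print(total)  # prints 118: items per worker per day, before group discussions
```

Multiplied across roughly 800 workers in two 12-hour shifts, even this floor implies tens of thousands of posts a day from a single organisation.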

 

At a time when distributing (false) information has become easier than ever before, Western democracies continue to lag significantly behind in modern information operations and responses to disinformation. And we have all seen the division and mistrust online foreign influence operations can achieve in free elections. Recently, for example, disinformation operations have run rampant on the “gilets jaunes” demonstration in Paris and the Kerch strait confrontation between Ukraine and Russia.

 

Making its move, the European Commission presented its new 12-page disinformation action plan on Wednesday (5 December). Unveiled by Commissioners Andrus Ansip, Věra Jourová, Julian King, and Mariya Gabriel, the plan rests on four pillars: improved capabilities, enhanced reporting, private sector commitments, and societal resilience (e.g. media literacy). To strengthen these foundations, the Commissioners proposed a €3.1 million increase to the EEAS’s current €1.9 million allocation for tackling disinformation. Yet even with this hefty bonus, the EU would be challenging false stories in 2019 with an allowance representing only around 0.5% of Russia’s estimated €1.1 billion pro-Kremlin media budget. Unsurprisingly, Mr Ansip admitted just a day later that the plan is not enough.
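A quick check of the article's own figures confirms the rough proportion:

```python
# Verifying the "around 0.5%" claim from the numbers quoted in the article.
eeas_current  = 1.9e6    # current EEAS anti-disinformation allocation, EUR
proposed_rise = 3.1e6    # proposed increase, EUR
kremlin_media = 1.1e9    # estimated pro-Kremlin media budget, EUR

share = (eeas_current + proposed_rise) / kremlin_media
print(f"{share:.2%}")  # prints 0.45%, i.e. roughly the 0.5% the article cites
```

So even after the increase, the EEAS would be fielding €5 million against a budget more than 200 times larger.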

 

Alongside increased funding and calls for unified member state response, the plan relies strongly on the cooperation of social media companies. But the self-regulatory Code of Practice to tackle fake accounts, monitor disinformation, and make political advertising more transparent is simply not enough. The companies, such as Facebook or Twitter, have so far been slow at introducing new measures to tackle intentional distribution of disinformation and improve ad transparency. They have also been unwilling to share their data with independent fact-checkers and media experts to monitor and scrutinise fake news campaigns. In addition, the difficulty of distinguishing between misinformed free speech and corrupt disinformation operations with the intention to deceive has also made combatting online falsehoods harder. After all, the EU’s response to fake news cannot impinge on civil liberties and liberal values.

 

Time is running out for the 10-step action plan to be operational ahead of the 2019 European Elections, which the proposal identifies as its first test. European leaders will discuss the plan during next week’s European Council summit, and although the current plan is hopelessly inadequate for its Herculean task, it is the only one we have at the moment. It is time we recognised our own vulnerabilities.
