Let’s Tweet Again – Fake News and Social Media
Event Report

05/02/2018

The discussion opened by defining fake news and how it should be addressed. It is important to distinguish between different types of fake news. First, there is satire or parody, whose aim is not to cause harm but to satirize real-life events. Then there is false connection, where headlines or visuals do not support the content. Next comes misleading content, which frames an issue or an individual in a deceptive way. There are also false context, imposter content, manipulated content (the Photoshop example) and, finally, fabricated content, which is 100 percent false. False information is spread with the intention of creating confusion and an artificial emotional response, driven by political or economic interests. The unregulated online environment has helped it flourish.

Press publishers were caught unprepared by the transition from analogue to digital. The European Commission has acknowledged this and proposed a reduced VAT rate for e-publications. The cost of producing quality news is much higher than that of producing fake news. The efforts that social media platforms are starting to make to tackle misinformation should be further incentivised by the European legislator. The future legal landscape should be shaped to support quality content and to avoid placing additional burdens on the market. Critical thinking needs to be developed, as it is a cornerstone of democracy. Ideally, tackling fake news through human intervention should be part of the corporate responsibility of online platforms. Considering the resources these companies have, much more can be done.

After that, the discussion moved on to differentiating fake news from disinformation, especially in politics, as the concept of fake news is often misused. Disinformation is defined as the deliberate spread of false information. The debate then turned to Russia, for which the turning point was the war in Georgia in 2008, when Russia launched its disinformation campaign through channels such as Sputnik and RT. They are very adept at tailoring messages to their target audiences within their spheres of influence. The Lisa case in Germany was a telling example of disinformation: a fabricated story about refugees raping a young Russian-speaking girl. A similar story was fabricated in Lithuania as well, but fact-checkers there managed to react quickly.

The discussion then moved on to social media, and specifically to practices and ways of tackling bots and fake news on Twitter and Facebook, Twitter in particular, due to its openness. The French elections were singled out: fake accounts with a Russian background were endorsing certain French presidential candidates while at the same time following RT and Sputnik. A famous example is the Macron Leaks, which surfaced three hours before the close of the French presidential campaign, when an apparently huge scandal was manufactured on Twitter claiming that Macron was funded by Saudi Arabia, backed up with supposedly leaked information and emails. It all originated from a US-based alt-right activist.

The focus of the discussion then moved to Italy, which holds parliamentary elections on 4 March 2018. Russian influence in Italy is not strong, but the country lacks a fact-checking culture. There is currently a rise in Euroscepticism, especially anti-euro sentiment fostered by certain political parties, which fake accounts are trying to amplify.

The moderator then pointed out that attention on this issue is often focused on Russia, but we must not forget that other actors, especially extremists within EU member states or the USA, are also active in the proliferation of fake news. Some of them also profit financially from it, thanks to the increased traffic to their websites and the way online advertising works. So how can we identify them and act effectively against disinformation?

It is not easy to cut off these people's income stream, and in addition many on the extremes actively want to share such news to promote their point of view. Bots aside, the people who share this news do it for the emotions it gives them or the emotions they want to amplify. So what can we do? We are trying to communicate the message to the large audience that does not want to consume this kind of media. Some will say this is wasted effort, since the people who do consume it already want to believe it; nevertheless, it is important to stop it before the minority becomes the majority.

Such news is easy to make: for example, while investigating a Macedonian fake news farm, Channel 4 highlighted how high the operational costs were of finding out who was spreading disinformation, while even a teenager could easily Photoshop the Euronews logo onto anything at little to no cost. It is unlikely that this teenager had any political idealism behind his actions; the accessibility simply made it easy for him to earn some money with little effort. What we can do is keep promoting critical thinking, in schools as well as among adults. Media literacy is something the state should promote, and some countries are indeed starting to implement legislation on this.

The debate then moved to regulation. Germany has recently moved towards a new regulation that will punish platforms that fail to take action against fake news within 24 hours. This would mean hiring permanent staff just to tackle the problem. So can we stop fake news through legislation, or should we tackle it at the micro level? The risk is promoting censorship and limiting free speech, so it is a sensitive issue. The media prefer to remain self-regulated where possible and to limit how much government can dictate the terms of information. Regulation at EU level could be dangerous; it might be better to act at national level, since every member state has a different media situation. On the other hand, national government intervention could be a double-edged sword, since some member states are moving towards a more authoritarian approach, which could mean limiting the power of the opposition by giving the government carte blanche to decide what is hate speech or fake news and what is not. There is in fact existing legislation that we could implement more effectively, all the while defending our values and freedom of speech. There are cases, as in Ukraine, where regulation has become a matter of national security, but in general there can be no blanket solution that applies to everyone.

An interesting contribution from the audience called for more investigation into how algorithms make editorial choices, and emphasized the need for transparency. The commenter praised the German “NetzDG” law because it makes social media sites accountable.

The speakers expressed diverse opinions on the issue. It was said that Facebook makes a lot of money without producing literally anything, while at the moment it has 30 people working on fake news for 2 billion users. Nevertheless, some changes started during the US presidential elections. Twitter has made three public announcements, clarifying the exact number of Russia-related accounts and showing that they retweeted Donald Trump's content 370 thousand times. These releases show that social media sites are able to act if they feel the pressure for more openness. It is not only governments that should push them for change; users and civil society organizations should also advocate for more transparency on fake news and editorial choices.

At the same time, we should not forget that we voluntarily hand over our private lives to these companies, which in turn sell that data to advertisers. When we try to influence them, we should bear in mind that these social media sites want to make money at the end of the day.
