
Tag: Digital Policy

EU Digital Services Act: grasping the opportunity to refresh the resilience of our European democracy

  • June 2020
  • Ioanna Karagianni
EU Digital Services Act: grasping the opportunity to refresh the resilience of our European democracy

Source: Future of Privacy Forum

Online platforms have gained a dominant role in our lives and in our democracies. We do business online, we get informed through online platforms, we buy products, organize events and exchange views on topics of interest. At the same time, there is a need to adapt to this new reality while protecting our privacy, dealing with fake news, and curbing the risks and challenges that our new digital reality generates. Will the EU grasp this as an opportunity to refresh European democracy, or will it let it pass?

 

Along with other flagship initiatives such as the Green Deal, European Commission (EC) President Ursula von der Leyen described in her political guidelines and in the EC Communication ‘Shaping Europe’s Digital Future’ her plans to grasp the opportunities that digital technologies create. She talked about how Artificial Intelligence (AI), the Internet of Things and 5G are fast transforming the world and our everyday lives, and how the General Data Protection Regulation (GDPR) can serve as a success story.

 

The February 2020 EC Communication ‘Shaping Europe’s Digital Future’ set out the Commission’s intention to create the conditions and deploy the capacities to become a leader in ensuring the integrity and resilience of data infrastructure, networks and communications. It expressed the plan for Europe to define its own path while remaining open to collaboration with parties willing to play ‘by European rules and meet European standards’[1] and to shape global interactions.

 

The Digital Services Act package is the concrete plan which “will upgrade our liability and safety rules for digital platforms, services and products, and complete our Digital Single Market.”[2] The February EC Communication announced actions under the heading of the Digital Services Act in two parts. The first deals with online liability and safety and with harmonized rules for the digital single market, building on the logic of the E-Commerce Directive around illegal and harmful content, and is envisaged to result in an EC proposal for new and revised rules by the end of the year. The second part of the package creates ex-ante rules[3] regarding, for instance, contestability issues for some large platforms.

 

The Act aims to upgrade online governance, a much-needed step, as the policies in place date back some 20 years. It sets up common rules to expand the liability of digital platforms and online advertising services in the EU and enacts measures to limit the spread of online hate speech and disinformation. The EC has also recently launched a public consultation on the Act to gather opinions from businesses, individuals and civil society. But are these plans fit to boost the resilience of European democracy against emerging threats, such as disinformation?

 

The current E-Commerce Directive includes standard rules for intermediaries’ liability in content sharing. If the EU aims to create and implement an ambitious plan, revising responsibility for online content should be given greater importance. Private companies that are part of the content dissemination chain could, for instance, be given more incentives and safeguards to work on liability, in co-operation with regulators. Equally important is the need for regulators to adopt clearly defined concepts of illegal information, misinformation and hate speech.

 

Human rights, such as freedom of speech, need to be protected equally offline and online. The reason is plain: since it is easier for platforms to deal with online material by taking it down, current regulations show a marked tendency to accommodate this (e.g. in copyright regulation). At the moment, under the current rules and their definition of liability, a platform is either not responsible at all for the content it hosts or it bears an editing responsibility. This raises issues, as many online platforms tend to boost certain types of content and to hide others. There is therefore a need for regulation that balances rights and liability. Furthermore, to successfully promote a competitive EU in the digital economy, the EU should increase the transparency of online platforms, hold the companies behind the platforms responsible for removing illegal content, and create regulatory consistency by harmonizing EU rules.

 

A whole new digital world has been created in recent years, and it is the future. The successful implementation of the Digital Services Act is not only an opportunity to upgrade the policies on online platforms; it will also modernize the European economy and help refresh the resilience of our European democracy vis-à-vis hate speech and misinformation.

 

[1] European Commission, Communication ‘Shaping Europe’s Digital Future’, February 2020, <https://ec.europa.eu/info/sites/info/files/communication-shaping-europes-digital-future-feb2020_en_4.pdf>

[2] European Commission, A Union that Strives for More: My Agenda for Europe, p. 13, <https://ec.europa.eu/info/sites/info/files/political-guidelines-next-commission_en_0.pdf>, accessed online on 10 June 2020

[3] European Commission, Communication ‘Shaping Europe’s Digital Future’, February 2020, <https://ec.europa.eu/info/sites/info/files/communication-shaping-europes-digital-future-feb2020_en_4.pdf>

 


Why can’t you be more like Estonia?: Creeping Illiberalism on the European Internet

  • February 2020
  • Hannah Bettsworth

Why can’t you be more like Estonia?

 

Source: Pixabay

Creeping Illiberalism on the European Internet

 

At the recent Masters of Digital conference, one of the panels looked ahead to the future of the digital world in 2040. One of the topics arising was the future of online freedoms 20 years from now. The Internet Society’s Regional Vice President for Europe, Frederic Donck, stressed that we need to respect internet freedoms when addressing internet policies; we should not discard them while tackling other issues. In English, we would call this ‘not throwing the baby out with the bathwater’: don’t throw out something good while attempting to get rid of a problem.

 

The UK is not paying much attention to this concept of digital freedoms. In a previous CFEP blog, Otto Ilveskero told a cautionary tale about overstepping the mark on regulating fake news: “regulators can sidestep much of the accusations regarding silencing the freedom of speech[…] by regulating not offensive content, but content that is harmful to the safety and wellbeing of others.” With its Online Harms White Paper, the UK has used the same language about protecting the safety and wellbeing of others, but as a defence of giving the state more power to decide what is harmful ‘enough’ to require regulation.

 

It seeks to create a new regulator to handle disinformation, trolling, the sale of illegal items, non-consensual pornography, hate speech, harassment, the promotion of self-harm, and content uploaded by people in prison. Said regulator would be able to issue fines or hold senior management liable for breaches of its codes of practice. Beyond the sheer length and variety of that list, there is another, similarly idiomatic, problem: the road to hell is paved with good intentions.

 

The ‘promotion of self-harm’ category sums up the delicate balance between freedom and protection at play online. Instagram has increased its efforts to remove material relating to self-harm and suicide following the death of a British teenager. That had an unintended effect on other young women worldwide, who found photographs of themselves removed because their scars were visible. For them, that was a value judgment that their appearance was unacceptable and potentially damaging to others. Ethical debates can be had to weigh up these kinds of issues, but when we talk about online content filters, that nuance is gone. All an algorithm can tell you is that it has detected a picture containing scars.

 

It is not just Britain that has rushed to solve social ills without considering the nuances and the potential impact on the online freedoms we have all come to cherish. France has enacted a law to tackle ‘fake news’. In short, it allows judges to order the removal of articles within three months of an election. Upon receiving a report from a public prosecutor, political party, or individual, the judge has 48 hours to decide whether the content constitutes ‘fake news’. To qualify, it must be obviously false, deliberately spread on a large scale, and likely to disturb the peace or undermine the fairness of an election.

 

Sounds reasonable? Imagine such a law in the hands of a country where the judiciary has been captured by the ruling party. Where the leader has a tendency to deem any inconvenient story to be manifestly untrue, to be spreading because of the media and the political opposition, and to suggest that these actions are fuelling violence. When moderate politicians grant themselves additional regulatory powers, they should consider whether they are truly comfortable with what those powers would look like in the hands of populists.

 

On an entirely unrelated note, Hungary has also been known to use otherwise innocuous measures in negative ways. Privacy rights are more important than ever in a digital age, where everything we do can be filmed and everything we say can be recorded and held against us. Viktor Orbán’s son-in-law, István Tiborcz, may have had a similar thought when the Supreme Court ruled that news site 444.hu violated his privacy rights – as a non-public figure – by publishing a video interview of him without his explicit consent. Tiborcz’s company had contracts with local governments that 444 and other investigative media outlets found suspicious. As a man facing corruption allegations, he may have been all the more relieved to limit his exposure to public scrutiny.

 

Even Germany is not immune to this trend of creeping illiberalism cloaked in defending rights or resolving social problems. Its domestic law, the Network Enforcement Act (NetzDG), is arguably the precursor to EU efforts in tightening up content removal rules. The NetzDG requires removal of ‘obviously illegal’ content in under 24 hours, and within seven days for non-obviously illegal content. An entire satirical magazine lost its Twitter account, albeit temporarily, to NetzDG enforcement. Companies have an incentive to remove first and ask questions later: they might get fined up to €50m for non-removal, and incorrect removals have no real consequences.

 

We see the same policies creeping on to the EU stage with the Regulation on Preventing the Dissemination of Terrorist Content Online (TERREG) and the Copyright Directive. TERREG goes hand-in-hand with online filters, with the same problems regarding algorithms and context. A key example relates to the wrongful deletion of YouTube videos that served as evidence of war crimes in Syria. The Copyright Directive also raises the spectre of algorithms detecting violations and is out of touch with a new wave of young voters who want to see better from their European Union.

There is, according to Freedom House, a country getting it right. Estonia, keeping up its good reputation for all things digital, has protected its people’s online freedoms. It punishes truly harmful actions: anyone who incites hatred, violence or discrimination on the basis of particular characteristics can be fined, and if those actions lead to death, negative health impacts, or other serious results, they could find themselves spending up to three years in an Estonian jail. Any blocks that do exist mostly revolve around unlicensed gambling sites. Estonia opposed the Copyright Directive and is using its technological reputation to lead on cybersecurity efforts.

Overall, the Internet brings both strife and success; the two are fundamentally intertwined. As long as there are debates online, there will be bad-faith participants. As long as e-commerce exists, e-criminals will too. If governments do not weigh freedoms strongly enough in the balance, they will throw out the heart of the internet and may be left holding merely a bucket of straggling criminals.


Counter-Terrorism: We Need To Talk About The Far-Right

  • August 2019
  • Hannah Bettsworth

Counter-Terrorism: We Need To Talk About The Far-Right

Western countries must start taking the threat from far-right terrorism as seriously as they do that of Islamist terrorism

Source: Pixabay

Defining terrorism is, perhaps surprisingly, complex and fraught with disagreements. We could assume we know terrorism when we see it, and so it appears clear that the recent gun attacks in New Zealand, the USA and Norway were terrorist attacks. The Global Terrorism Database defines a terrorist act as an intentional act or threat of violence by a non-state actor that meets at least two of the following three criteria: the act aimed to further a political, economic, social or religious goal; there was evidence of intent to coerce, intimidate or convey some other message to an audience beyond the immediate victims; and the act fell outside International Humanitarian Law.

Clearly, the recent attacks fit this definition. These kinds of attackers have often been classed as ‘lone wolves’: self-radicalised criminals who do not form part of a wider movement. Classifying them in such a way risks underestimating the threat. For example, in Europol’s 2017 figures, only 3% of failed, foiled or completed terrorist attacks were attributed to the extreme right. It is not the biggest threat to European security: the 2019 edition of the same Europol report confirms that ethno-nationalist and separatist terrorism continues to be the most common source of attack attempts. However, the way it is recorded likely underestimates the prevalence of far-right terrorism. In some Western countries, it is easier to prosecute under criminal law – particularly hate crime legislation – than under terrorism law, and so far-right violence which would otherwise qualify as terrorism is not always recorded as such. This can also make its victims feel that they are not taken as seriously as other terror victims.

It is understandable that they could feel that way. Technical measures, such as those used to near-eradicate Daesh propaganda from mainstream social media, have unintended consequences because of the limitations of artificial intelligence and machine learning, and so they need human oversight to correct false positives.

 

There has not been the same action against white nationalist propaganda online – even though the above-mentioned attacks stem from this ideological background. Its use of memes and misdirection alongside extremist rhetoric makes it hard for digital tools to detect, and the similarity of Trump’s rhetoric to that of white supremacists (particularly when discussing migrant ‘invasions’) means that the false positives may affect alt-right media personalities and conservative American politicians.

That, in itself, is a sign of a wider ideological current underlying these attacks. Apportioning direct blame for a particular violent act may be impossible and unwise, but the wider political discourse can and does contribute to radicalisation. It is not limited to the United States, either. The key theory which links these attackers is that of the ‘Great Replacement’, which initially came from a French author and is predominantly discussed online in French. It argues that there is a concerted effort on the part of elites – often linked by the far-right to the common anti-Semitic conspiracy theory that a Jewish elite controls the world – to replace white people through immigration and abortion policies. This often involves a ‘crisis narrative’ in which migration is described as a threat to the very existence of white people. It is an abhorrent theory in itself, but it is that rhetoric in particular which lends itself to radicalisation and terrorist acts in the name of a ‘race war’. In this regard, Islamist and far-right terrorism lend succour to each other.

What, then, can be done about it? Although alt-right activists like to hide behind a free speech defence (and would do so loudly if they became collateral damage of a social network crackdown on white supremacist ideology), there are indeed civil liberties concerns bound up in the use of AI and machine learning in content removal. As the CEO of Cloudflare signalled in his statement about ending their services to 8chan, being the arbiter of social boundaries is an uncomfortable role for technology companies and is also not sufficient to stop radicalisation and terror attacks. That requires the political will to investigate and to combat far-right terrorism with the same vigour as Islamist terrorism, both through law enforcement and through strong opposition to its ideas.

Donald Trump is a particular obstacle in this regard. He is content to permit white supremacists to operate at arm’s length as part of his base, and continues to use language that incites racism and violence while failing to effectively tackle the threat with counter-terror policies. Europe fares somewhat better, with raids against a Generation Identity group which took a donation from the Christchurch attacker, and the UK’s criminalisation of membership and support of a neo-Nazi group.

 

However, leaders who have deployed far-right rhetoric must consider their responsibilities as public figures. They should be wary of reinforcing theories, such as the ‘Great Replacement’, which have the potential to inspire copycat terrorist attacks. They should also fully participate in the battle of ideas against extremism, making it very clear to their supporters that battles must be won at the ballot box and not with weapons.

 

Countering Violent Extremism policies should not be perceived to focus solely on Islamism but should provide for deradicalising people from various ideological backgrounds. Taking far-right terror as seriously as Islamist terror requires difficult discussions about the role of anti-immigration rhetoric in radicalisation pathways. If political expediency is used to maintain the status quo, as Bellingcat’s Robert Evans states, “[t]here will be more killers, more gleeful celebration of body counts […], and more bloody attempts to beat the last killer’s ‘high score’”.


Create, Connect, Engage: Digital Campaigning & Cybersecurity

  • May 2019
  • Daniela Floris

Create, Connect, Engage: Digital Campaigning & Cybersecurity

More than ever before, social media and digital tools will impact the 2019 EP elections.

David Timis, on the left, and William Echikson, on the right, with a participant. Photo: Joseph Cochran

 

As the European Parliamentary elections approach, candidates and political strategists have less than a month left to engage with citizens, mobilize their base and appeal to swing voters. Internet hacks and disinformation represent concrete threats in the run-up to elections, with “fake news” seen as a problem for democracy by 83% of Europeans.

In fact, according to the Parliament’s latest Eurobarometer survey, one third of Europeans of voting age reportedly do not plan to vote, believing that their vote “won’t change anything”. Much institutional and political communication is run on social media, but turning online interaction into active participation remains challenging. Initiatives to mobilize voters, such as #ThisTimeImVoting and #EUandMe, launched by the European Parliament to present information on the election process and to promote the achievements of the EU, have been received positively by users and more traditional media. However, we will have to wait until ballots are cast for an impact assessment on voter turnout, which was only 42.6% in 2014.

How to run a successful digital election campaign was the subject of a training session organized by the Centre for European Policy Studies (CEPS), a Brussels-based think tank for EU affairs. The main speaker was David Timis, Google’s EU civic outreach fellow and co-founder of European Heroes, a platform for the civic engagement of young Europeans. The event was moderated by William Echikson, CEPS’ Head of Digital Forum, the Wall Street Journal’s Europe correspondent for decades and a former Senior Manager at Google. During the workshop, Timis shared a few valuable tips on how to establish a digital audience and counter cyberattacks. According to Timis, effective digital media campaigns build a strong brand with clear-cut messages, attractive storytelling around the actors involved and a straightforward mission statement. The goal is not only to attract views, but to connect with potential voters or activists and, ultimately, call them to action.

 

Mobile Friendliness

Surfing the internet on the move is an established habit for most of us. A Eurostat poll shows that 65% of Europeans aged 16-75 use mobile devices to access the internet, a sign that TV is no longer the main medium for audiovisual material. YouTube, the video streaming platform acquired by Google in 2006 for USD 1.65 billion, has over 1.8 billion monthly users. Streaming and sharing videos have become increasingly popular, with features like “live” and “stories” incorporated into various social networking platforms. Establishing a YouTube channel is now recommended for businesses and organizations, especially if they wish to attract a young viewership.

Contrary to popular belief, Timis pointed out, producing viral videos does not necessarily require a big budget: filming with a smartphone gives the public a glimpse of spontaneity and casualness. Fast-paced storylines and visual close-ups mimic real-time human interaction, giving audiences the sense of being participants rather than just viewers. Successful campaigns, such as that of Alexandria Ocasio-Cortez, now a Member of the House of Representatives in the US, prove that multimillion-dollar funding is not indispensable: support from a tech-savvy team with a sharp social media strategy can lead to outstanding results. Charisma, creativity and a deep understanding of your target audience all pay off in terms of popularity.

Other formats, including “behind the scenes” content, videos featuring supporters or volunteers, and endorsements by influencers, are also important in the communication strategy.

 

Cybersecurity

As the distribution of media has become increasingly tied to the internet, the likelihood of sabotage through cyberattacks has grown. Political campaigns are particularly exposed to such threats, with the Brexit referendum and the 2016 U.S. Presidential election being the most notorious cases. Hacking and “phishing” attempts (fraudulent efforts to steal information while posing as a trustworthy entity) have become the most common attacks, but, according to Timis, they can be prevented. The Cybersecurity Campaign Playbook, published by the Harvard Kennedy School and quoted by Timis, suggests that raising awareness is paramount to countering malicious attacks. Simple measures, such as using long, distinct passwords for different accounts, storing sensitive information in web clouds and communicating through encrypted apps, could make the difference. Other risk-management tools include using a password manager and two-factor authentication (2FA), in which the second log-in step relies on dedicated apps, which are safer than text messaging, or on security keys. Several of these tools are free or low-cost, making media literacy a fundamental asset for campaigners and social media managers seeking to prevent data leaks and privacy breaches.
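As a purely illustrative aside (not taken from the Playbook or from Timis), the short Python sketch below shows how little machinery sits behind two of the measures mentioned above: generating a long, unique password for each account and computing a time-based 2FA code of the kind authenticator apps produce (RFC 6238 TOTP). The secret "JBSWY3DPEHPK3PXP" is a demonstration placeholder, not a real credential.

```python
import base64
import hashlib
import hmac
import secrets
import string
import struct
import time


def generate_password(length: int = 24) -> str:
    """Return a long, random password; campaigns should use a different one per account."""
    alphabet = string.ascii_letters + string.digits + "-_"
    return "".join(secrets.choice(alphabet) for _ in range(length))


def totp_code(base32_secret: str, interval: int = 30, digits: int = 6) -> str:
    """Compute the current time-based one-time password (RFC 6238 TOTP),
    the same scheme authenticator apps use for the second log-in step."""
    key = base64.b32decode(base32_secret.upper())
    counter = struct.pack(">Q", int(time.time()) // interval)  # 30-second time step
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per the RFC
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)


if __name__ == "__main__":
    print("example password:", generate_password())
    print("current 2FA code:", totp_code("JBSWY3DPEHPK3PXP"))  # placeholder secret for demo only
```

In practice a campaign team would rely on an established password manager and authenticator app rather than rolling its own; the point of the sketch is simply that the underlying scheme is cheap, simple and well understood.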

 

A digital future

When asked what the next big thing for the digitalization of Europe was going to be, Timis had no doubt: “They are already here: mobile and video have taken over the way we communicate and connect to each other, bringing parties, organizations and movements closer to the people they interact with.” William Echikson, on the other hand, by virtue of his decades-long experience in reporting on EU affairs and in policy research, was more cautious: “For 30 years the Internet has had a sort of free pass. Recent scandals have shown that a more organic approach to regulations and policymaking needs to be implemented. It is a long way to go and it won’t be easy, but we are getting there.”

 

 
