(November 6, 2024 – Brussels)
Speakers:
- Laura Lazaro Cabrera, Counsel and Director of the Equity and Data Programme, CDT Europe
- Joe Elborn, Executive Director, Evens Foundation
- Nanna-Louise Linde, Vice-President European Government Affairs, Microsoft
Moderator:
- Shada Islam, Journalist and EU Commentator
Shada Islam opened the panel by asking how concerned we should be about Artificial Intelligence, and what its biggest risk might be. Instead of answering right away, Joe Elborn first asked the audience whether they consider AI more of an opportunity or a threat to democracy. Elborn clarified that he is not an expert, but that he has insight into many different processes. He highlighted two things: first, that there is a difference between a tool and an agent (the latter has some degree of autonomy), and second, that in his view AI is not a tool but an agent. He even argued that the most interesting interaction is not between humans and AI, but between AI and AI, that is, the agent-to-agent interface.
Laura Lazaro Cabrera named AI as a tool for disinformation, and deepfakes, among the traditional fears. Cabrera's biggest concern is that AI can generate targeted information and profile people's preferences, which can have a harmful impact. On the other hand, AI makes information more accessible, which is a major benefit. But instead of overstating the benefits, she argued, we should focus on research and improvement.
Nanna-Louise Linde shared the same fears as Cabrera, but underlined that there are reasons to be optimistic. AI can be used as a weapon or a tool; the important thing is to keep it under human control. Interference and disinformation are not new, but AI can accelerate them. Linde's biggest concerns are deepfakes and their influence on voters and elections. She gave a recent example: days before the 2023 Slovak parliamentary elections, the pro-European candidate appeared in a recording to be bragging about rigging the elections. The recording was not real, yet the pro-European candidate still lost, which may make it the first election swung by a deepfake. Linde also mentioned some benefits of AI, regarding climate change and new discoveries.
Islam asked how we can restore public trust amid deepfakes. Elborn said the antidote is belonging to a community: research has shown that strong communities are more resilient. He mentioned the idea of libraries as new social hubs, reimagining and recreating communities. He stressed that we should involve everyone in discussions about AI and its effects: the corporate community, the political community and NGOs alike.
Cabrera highlighted that communication is key, as it should translate into transparency. She argued that political parties should disclose the AI tools they are using. There are obviously AI tools that people like, such as chatbots, but also some they resent, like overly targeted advertising. According to Cabrera, companies should provide base rules for the use of AI. Islam concluded that it comes down to taking responsibility.
Linde stated that tech companies are part of the problem and part of the solution. She talked about the tech accord between 27 companies on the use of AI in elections, citing its three main points: protecting content authenticity; detecting disinformation and responding to it; and promoting awareness and resilience. Although implementing the accord is up to the tech companies themselves, Linde takes it seriously, as she was part of the team that trained political parties on how candidates can recognize deepfakes and act on them.
Elborn underlined that a thriving civil society is one of the foundations of democracy. According to him, social media 'messed up' civil society in the past, but people have paid more attention to not repeating that with AI. There is still huge scepticism: people want to embrace AI even though it is risky. Elborn thinks it is crucial not to polarize ourselves and, as a result, turn against each other, which is the real risk.
Cabrera stated that the rise of AI safety groups is welcome, but civil society is still reckoning with the effects of the axing of many safety teams within big tech companies. The European Union's Digital Services Act will hopefully help the situation. Linde believes that regulation has an important role, but that awareness-raising and education are needed for genuine insight. She mentioned Microsoft's white paper on the protection of women, children and seniors online and emphasized the obligations of tech companies.
Elborn talked about the asymmetrical problem fact-checkers face: they rely on AI, which comes with biases, yet using it remains the only viable option given the sheer volume of content. He stressed that we have to get the economics of AI right. Cabrera returned to her hopes for the DSA, which applies legislation uniformly: across borders, horizontally and vertically.
During the Q&A, Linde explained in more detail how AI can help with societal challenges, giving the examples of maximizing the use of windmills in Denmark and securing evidence of war crimes in Ukraine through satellite pictures. Energy use is still an issue, though tech companies have pledged to become carbon neutral by 2030 and are investing in additional energy sources. Besides, cloud computing remains a more energy-efficient solution than the alternatives.
Cabrera is convinced that civil society must join forces and become more persuasive. Elborn said that NGOs know well that they punch above their weight, and that having a contest of ideas is a good thing. The key, still, would be accountability and transparency.
Cabrera mentioned Article 50 of the European Union's AI Act, which contains transparency obligations. The Act also sets deadlines, such as for designating fundamental rights authorities, which should help move things forward. Linde noted the European Union's hope that its AI legislation would become a global standard through the so-called Brussels effect, though she is not sure that will happen.
Cabrera underlined that we have to decide how serious we are about preserving our values. According to Elborn, we have the opportunity to set the rules of digital use during elections, and he suggested a pause on targeted advertising in the run-up to elections.
Cabrera's last message was not to lose sight of the objectives that drove us in the first place. Linde stressed that no one can solve AI's challenges alone at home; it requires joint effort and real action. Elborn concluded that what we need is the willingness to work together, paired with the best intentions.