With major elections taking place this year in the US and other countries, there are growing fears that advances in artificial intelligence could be used to significantly influence their outcomes.
"An estimated 2bn people, or around half of the world’s adult population, are expected to head to the polls in 2024, including in the US, the EU, India and the UK." - Financial Times (2024)
In particular, the risks of deliberately spreading misinformation through fake videos, audio messages, and targeted propaganda campaigns are in focus. In this issue of Innovative AI, we will take a closer look at this topic, examine the threats, and outline what is already being done to counter them.
The threat of disinformation in elections is not a novel concern. Historically, elections have been susceptible to various forms of misinformation, from propaganda to outright falsehoods. However, the integration of AI technologies has escalated the potential impact of these threats. AI's ability to generate convincing fake news, deepfakes, and tailored propaganda at an unprecedented scale and speed has raised alarms among experts and policymakers.
Major media organizations, such as the Guardian, have already observed a shift from traditional misinformation tactics to more sophisticated AI-driven methods, with the misinformation-laden 2016 and 2020 US presidential elections serving as a prelude to the more advanced strategies that AI could enable in the future - for further reading, also have a look at the Guardian (2023a).
AI's role in elections encompasses a broad spectrum of activities, from generating fake news and deepfakes to automating social media bots that mimic real voters. Further, AI can create photorealistic images, mimic voice audio, and produce convincingly human text. All of these capabilities not only deceive audiences but also erode the existing information ecosystem, making the job of journalists and fact-checkers increasingly challenging.
One of the most concerning aspects of AI in elections is the democratization and acceleration of propaganda. AI tools have made the creation of misleading content accessible to anyone with basic digital skills, amid limited regulation. This ease of content creation, coupled with the potential for widespread dissemination, creates a fertile ground for misinformation to flourish.
Just imagine an army of actors deliberately spreading misinformation, not on a small scale by going from house to house or posting on their own social media channels, but by generating fake content in a fully automated way and distributing it through swarms of automated bots.
Another aspect that must be taken into account is the increased potential for foreign influence in elections. AI lowers language barriers, making it easier for external actors to produce fluent, credible content in whichever language their target audience speaks.
In fact, a survey conducted by the World Economic Forum in 2023 and 2024, reported in the Financial Times, found that respondents ranked "AI-generated misinformation" as the second biggest threat for 2024.
Addressing the threats posed by AI in elections requires a multifaceted approach, which raises the question of how big tech companies plan to help. Knowing that X, formerly Twitter, is increasingly reducing the possibilities to flag misinformation (thereto also have a look at this Guardian article (2023b)), we want to look at how other major players are addressing this threat.
According to an article in the Financial Times, experts fear that many big tech companies, such as Meta, are less well equipped to tackle disinformation on their platforms because they cut investment in teams dedicated to maintaining safe elections following the tech stock downturn in early 2023. X has also slashed content moderation resources in favor of what Elon Musk calls "free speech" - thereto also have a look at this Guardian article (2023b).
The Financial Times also reports that efforts by US-based tech companies to invest in fact-checking and combating misinformation are even being politicized, “as rightwing US politicians accuse them of colluding with the government and academics to censor conservative views” - Financial Times (2024)
Meta and Google recently announced guidelines that require campaigns to disclose whether their political advertising has been digitally altered. TikTok stipulates that synthetic or manipulated media depicting realistic scenes must be clearly disclosed through a sticker, label, or caption. In addition, media that misuses the likeness of real private or public persons is banned if it is used for deceptive purposes, such as fraud. Even X says it will either flag or remove misleading manipulated media if it causes harm, including uncertainty over political stability - for further reading, also have a look at this article from The Wall Street Journal (2023).
In a recent blog post, OpenAI emphasizes the need for collaboration and for security measures to prevent the misuse of AI technologies. Ensuring the integrity of elections in the face of AI-driven disinformation means disseminating accurate election information, enforcing policies, and improving transparency. Accordingly, OpenAI is actively working to prevent the misuse of its technologies in elections, for example by publishing accurate election information and enforcing policies against the creation of misleading AI-generated content.
Microsoft CEO Satya Nadella also acknowledges the risks of AI, but remains optimistic about developing technical solutions to combat disinformation - Yahoo Finance (2024).
The international community is increasingly recognizing the need to address the challenges that AI poses to democratic processes. The topic is also being discussed in Davos right now, reflecting the growing global consensus on the urgency of this issue. We are confident that we will soon be able to provide updates on the measures being taken to protect elections. Whether these will be far-reaching enough remains to be seen. Nevertheless, they should be implemented quickly, especially in light of the assessment by Bret Schafer, propaganda expert at the Alliance for Securing Democracy: "You can get people to the point of, 'Voting doesn't matter. Everybody's lying to us. This is all being staged. We can't control any outcomes here.' That leads to a significant decline in civic engagement." - Financial Times (2024)
Contact us today to learn how we can bring your ideas to life with our custom-built AI solutions!