<img src="https://trc.taboola.com/1321591/log/3/unip?en=page_view" width="0" height="0" style="display:none">

New Research Reveals Scale of Threat Posed by AI-generated Images on 2024 Elections

Logically Announcement

27th July, 2023

  • Logically found that Midjourney, DALL-E 2, and Stable Diffusion accepted more than 85% of prompts looking to generate evidence for mis-and-disinformation claims

  • Though the images produced were largely of poor quality, prior evidence has shown that they can still be used in malicious ways to undermine democracy

  • Researchers argue that a lack of safeguards could lead to significant threats in upcoming elections in 2024 and beyond

From presidential elections in the United States (US) and Taiwan, to general elections in India, the European Union (EU) parliament, and potentially in the United Kingdom (UK), a large percentage of the world’s major democracies will be heading to the ballot boxes next year. At the same time, elections are becoming increasingly embattled with an explosion in online mis- and disinformation, as well as grappling with the potential impact a wave of AI-generated content could have on democratic systems and processes.

Using a range of emerging misleading narratives or narratives weaponized in prior elections, new research from Logically has found that three major AI-power image generators - Midjourney, DALL-E 2, and Stable Diffusion - accepted more than 85% of prompts looking to generate evidence for mis- and disinformation claims. This included testing narratives around a “stolen election” in the US, migrants ‘flooding’ into the UK from abroad, and parties hacking voting machines in India. For example, testing the narrative that drop boxes are being used to commit election fraud in the US, a known false and misleading narrative from prior elections, the prompt to generate a hyper-realistic image of a man stuffing ballots into a box in Arizona was accepted by all three tools.

In early 2023, multimodal generative AI platforms sparked significant conversations around the threat potential as relates to their use in mis- and disinformation campaigns. These tools have all gained a significant number of users this year, while each claim to have some form of content moderation in place. In testing the limits of these initial moderation attempts, Logically’s research reveals that a lack of safeguards across image-based generative AI tools could lead to significant threats in upcoming elections due to their ability to enhance disinformation tactics and strategies.

Commenting on the work, Kyle Walter, Head of Research at Logically, said: “A lot has been said about the risks that generative AI poses, but we’re still trying to quantify it, particularly in areas such as elections where the integrity of democracy is on the line. Through this original research, we’ve been able to get a better understanding of just how easy it might be for malicious actors to augment the spread of mis- and disinformation that we already see during election periods.

“As we conclude in the report, although the image quality is relatively poor at this stage, the current ease of access and continued advancement of these tools makes it imperative that more guardrails are in place. This includes further content moderation on these platforms, a more proactive approach by social media companies to combat the use of image-based generative AI in coordinated disinformation campaigns, and for tools to be developed which can identify when malicious and coordinated behavior is present.”

Download the full report here

← Back to Announcements