Politics

ChatGPT disrupting global elections?

OpenAI has disrupted more than 20 foreign-influence groups that were using its AI to influence opinion ahead of elections

Martin Crowley
October 11, 2024

In its latest 54-page quarterly threat report, OpenAI disclosed that, since the beginning of the year, it has discovered and disrupted more than 20 campaigns run by foreign actors that were using its AI technology (namely ChatGPT and DALL-E) to influence public opinion ahead of global elections.

These groups (which mainly came from countries including China, Russia, Iran, Rwanda, and Vietnam) were using ChatGPT and DALL-E to generate websites, articles, and social media profiles, respond to messages, and carry out other similar tasks in support of their efforts to sway public opinion.

But while OpenAI acknowledged that these bad actors were able to “experiment” with its AI technology, they were unable to successfully spread their influence: 

“Threat actors continue to evolve and experiment with our models, but we have not seen evidence of this leading to meaningful breakthroughs in their ability to create substantially new malware or build viral audiences.”

OpenAI’s findings align with statements made earlier this year by US officials, who warned that “foreign actors are using AI to more quickly and convincingly tailor synthetic content,” and who consider AI “a malign influence accelerant, not a revolutionary influence tool.”

Recognizing this, OpenAI has built new AI-powered tools to detect these malicious groups, cutting the time spent on analytical tasks from “days to minutes.”