
OpenAI exposes covert influence operations

OpenAI has stopped five influence operations that were using ChatGPT to spread misinformation and propaganda

Martin Crowley
May 31, 2024

OpenAI has discovered and stopped five covert influence operations (known as IOs) that were using ChatGPT to spread misinformation and propaganda to manipulate public opinion.

The purpose of the IOs

The five IOs, operating from Russia, China, Israel, and Iran, were using ChatGPT to conduct open-source research, debug simple code, and generate propaganda content in a range of languages, which they then posted on social media under fake accounts.

The Chinese group, known as “Spamouflage”, created content in English, Chinese, Japanese, and Korean that attacked critics of the Chinese government, and posted it on the blogging site Medium and the social media platform X (formerly Twitter).

Two Russian groups, known as “Bad Grammar” and “Doppelganger”, targeted people in Ukraine, Moldova, the Baltic states, and the United States with damaging messages on Telegram.

The IO from Iran, called the “International Union of Virtual Media”, generated articles, translated into English and French, that attacked the US and Israel.

And the malicious actors from Israel, called “STOIC”, created fake social media accounts and generated damaging and misleading posts about the Gaza conflict.

The IOs’ impact

OpenAI reported that none of the malicious campaigns gained traction or reached large audiences, largely thanks to human errors that exposed the content as AI-generated. For example, Bad Grammar accidentally included ChatGPT refusal messages in its posts, so viewers could see the content was fake.

OpenAI reassured the public that the IOs rated only 2 on the Brookings “Breakout Scale” (which measures the impact of these types of manipulation schemes), meaning the “fake content appeared on multiple platforms, with no breakout to authentic audiences.”

(The scale goes up to 6, at which point a campaign “provokes a policy response or violence.”)

While these particular campaigns had little effect, they show how easily malicious actors can exploit AI technology like ChatGPT, highlighting the need for tech companies like OpenAI to put stricter safeguards and protocols in place to restrict misuse.

OpenAI has said that it will continue to release similar reports on these types of IOs and will remove any accounts that violate its policies.