
OpenAI Stops Election Influence Campaign Using ChatGPT

OpenAI has banned a cluster of ChatGPT accounts linked to an Iranian influence operation that was generating content about the U.S. presidential election, according to a blog post on Friday. The company says the operation created AI-generated articles and social media posts, though it does not appear to have reached much of an audience.

OpenAI’s Previous Actions Against State-Affiliated Actors

This is not the first time OpenAI has banned accounts linked to state-affiliated actors using ChatGPT maliciously. In May, the company disrupted five campaigns using ChatGPT to manipulate public opinion.

Similarities to Social Media Influence Tactics

These episodes are reminiscent of state actors using social media platforms like Facebook and Twitter to attempt to influence previous election cycles. Now similar groups (or perhaps the same ones) are using generative AI to flood social channels with misinformation. Similar to social media companies, OpenAI seems to be adopting a whack-a-mole approach, banning accounts associated with these efforts as they come up.

Microsoft Threat Intelligence and Storm-2035

OpenAI says its investigation of this cluster of accounts benefited from a Microsoft Threat Intelligence report published last week, which identified the group (which it calls Storm-2035) as part of a broader campaign to influence U.S. elections operating since 2020.

Storm-2035’s Influence Tactics

Microsoft said Storm-2035 is an Iranian network with multiple sites imitating news outlets and “actively engaging US voter groups on opposing ends of the political spectrum with polarizing messaging on issues such as the US presidential candidates, LGBTQ rights, and the Israel-Hamas conflict.” The playbook, as it has proven to be in other operations, is not necessarily to promote one policy or another but to sow dissent and conflict.

Fake News Outlets and AI-Generated Content

OpenAI identified five website fronts for Storm-2035, presenting as both progressive and conservative news outlets with convincing domain names like “evenpolitics.com.” The group used ChatGPT to draft several long-form articles, including one alleging that “X censors Trump’s tweets,” which Elon Musk’s platform certainly has not done (if anything, Musk is encouraging former president Donald Trump to engage more on X).

Social Media Activity and Limited Impact

On social media, OpenAI identified a dozen X accounts and one Instagram account controlled by this operation. The company says ChatGPT was used to rewrite various political comments, which were then posted on these platforms. One of these tweets falsely, and confusingly, alleged that Kamala Harris attributes “increased immigration costs” to climate change, followed by “#DumpKamala.”

Minimal Influence of the Operation

OpenAI says it did not see evidence that Storm-2035’s articles were shared widely, and noted that a majority of its social media posts received few to no likes, shares, or comments. This is often the case with these operations, which are quick and cheap to spin up using AI tools like ChatGPT. Expect to see many more notices like this as the election approaches and partisan bickering online intensifies.
