OpenAI Shuts Down Election Influence Operation That Used ChatGPT

OpenAI has banned a cluster of ChatGPT accounts tied to an Iranian influence operation that was producing content about the U.S. presidential election, the company said in a recent blog post. The operation used AI to generate articles and social media posts, though it appears to have reached only a limited audience.

This incident marks another instance in which OpenAI has suspended accounts connected to state-affiliated entities misusing ChatGPT for harmful purposes. In May, the company disrupted five separate campaigns aimed at manipulating public opinion through the platform.

These incidents echo the tactics employed by state actors in past election cycles, who resorted to platforms like Facebook and Twitter to exert influence. Now, similar groups—possibly even the same ones—are leveraging generative AI to inundate social media with misinformation. OpenAI's response mirrors that of social media companies, implementing a whack-a-mole strategy to ban accounts as they emerge.

OpenAI's investigation into this particular cluster of accounts was informed by a Microsoft Threat Intelligence report released last week, which labeled the group "Storm-2035." This network is reportedly part of an ongoing effort to sway U.S. elections that has been active since 2020.

According to Microsoft, Storm-2035 is an Iranian network operating multiple sites designed to mimic legitimate news outlets. It actively engages U.S. voter groups across the political spectrum, disseminating polarizing messaging on topics such as presidential candidates, LGBTQ rights, and the Israel-Hamas conflict. The intention appears to be less about advocating specific policies and more about fostering discord and division.

OpenAI identified five websites associated with Storm-2035 that posed as both progressive and conservative news outlets, using plausible-sounding domain names like “evenpolitics.com.” The group used ChatGPT to draft several long-form articles, including one falsely claiming that “X censors Trump’s tweets,” which the platform has not done; if anything, Elon Musk is encouraging the former president to post there more often.

The company also identified a dozen X accounts and one Instagram account run by the operation. ChatGPT was used to rewrite political comments, which were then posted on those platforms. One misleading tweet claimed that Kamala Harris attributed “increased immigration costs” to climate change, accompanied by the hashtag “#DumpKamala.”

OpenAI observed no significant evidence that Storm-2035's articles gained traction, noting that most of its social media posts garnered minimal likes, shares, or comments. Such scenarios are common in these operations, which can be rapidly established with AI tools like ChatGPT. As the 2024 election draws near and online partisan conflict escalates, we can expect to see more announcements like this.
