OpenAI, the maker of ChatGPT, has disrupted five covert influence operations originating from Russia, China, Iran, and an Israeli private company that sought to misuse its models to generate deceptive content and fake profiles.
On May 30, 2024, OpenAI announced that it had disrupted five covert influence campaigns over the preceding three months. These operations originated from Russia, China, Iran, and an Israeli private company. The actors attempted to use OpenAI’s technologies for deceptive purposes, such as generating comments, articles, and social media profiles, and debugging code.
The disrupted campaigns included two Russian operations: “Bad Grammar,” which targeted Ukraine, Moldova, the Baltics, and the U.S., and “Doppelganger,” which generated multi-language content for platforms such as X. China’s “Spamouflage” operation used AI for social media research and website content generation. Iran’s “International Union of Virtual Media” produced articles and headlines for state-linked websites. An Israeli company named STOIC was found creating content across various social media platforms.
OpenAI said the disruptions were made possible by collaboration, intelligence sharing, and safeguards built into its models, and noted that none of the campaigns achieved significant audience engagement.