OpenAI has revealed that it disrupted five covert influence operations, originating from Russia, China, Iran, and Israel, which sought to misuse its AI models to generate deceptive content. Although the operations produced comments, articles, and fake profiles, they failed to significantly boost audience engagement, an outcome OpenAI attributes to its AI safeguards and collaborative approach.

OpenAI, the developer of ChatGPT, reported on May 30, 2024, that it had disrupted five covert influence operations over the preceding three months. These campaigns, originating from Russia, China, Iran, and an Israeli private company, sought to use OpenAI’s artificial intelligence models to generate deceptive content.

The company described the operations in detail: Russia’s “Bad Grammar” campaign targeted Ukraine, Moldova, the Baltics, and the United States, while the well-known “Doppelganger” operation generated comments in multiple languages. China’s “Spamouflage” operation used AI for social media research and website content. An Iranian group, the “International Union of Virtual Media,” used AI to create content for state-linked websites, and a commercial Israeli company named STOIC generated content across major social media platforms.

While these operations used AI to create comments, articles, and fake profiles, and to debug code, OpenAI stated that they did not significantly enhance audience engagement. The company attributes its success in disrupting them to its AI safeguards, intelligence sharing, and collaborative efforts.

Ivan Massow, Senior Editor at AI WEEK. Ivan, a lifelong entrepreneur, has worked at Cambridge University's Judge Business School and the Whittle Lab, nurturing talent and transforming innovative technologies into successful ventures.
