OpenAI has developed a new tool aimed at detecting deepfake images generated by its AI system, DALL-E. The tool, which can accurately identify 98.8% of images created on DALL-E 3, is part of efforts to mitigate the potential impact of artificial intelligence on elections and society. The technology was unveiled less than a year after OpenAI CEO Sam Altman expressed concerns about AI’s influence on democratic processes.
To further enhance the tool’s effectiveness, OpenAI plans to collaborate with disinformation researchers to test and refine the technology in real-world scenarios. However, the tool currently detects only images produced by DALL-E 3 and does not cover other AI image generators such as Midjourney and Stability AI.
Additionally, OpenAI has joined a coalition of major tech firms, including Google, Microsoft, Amazon, Meta, TikTok, and X, to combat misinformation more broadly. The coalition aims to create standards for digital content authenticity, including a ‘nutrition label’ for digital media that indicates how content was created or altered.
These developments come as AI-generated deepfakes have already disrupted elections in several countries and raised widespread concern about the authenticity of digital content. With the next U.S. presidential election approaching, major tech companies and researchers continue to seek ways to make AI-generated media more transparent and trustworthy.