Social media giants like TikTok, Meta, and YouTube are taking steps to help users differentiate between human-generated and AI-generated content as concerns grow over AI's potential impact on democratic processes. OpenAI has made similar moves, rolling out tools to help users identify AI-generated images and launching a fund with Microsoft to combat election-related deepfakes.
As the November elections approach, major social media platforms are implementing measures to help users distinguish human-generated content from content produced by artificial intelligence. TikTok recently announced that it will begin labeling AI-generated content. Meta (which owns Instagram, Threads, and Facebook) revealed similar labeling plans last month, and YouTube now requires creators to disclose when videos are AI-created so the platform can label them appropriately.
OpenAI, the maker of ChatGPT and DALL-E, has announced its own effort to help users identify AI-generated images, alongside a $2 million fund launched with Microsoft to combat election-related deepfakes.
These actions come amid concern over AI-generated imagery's potential to deceive the public and affect democratic processes, underscored by a recent incident in which an AI-created image misled many people into believing Katy Perry had attended the Met Gala. In the absence of federal regulation on the issue, tech companies have been left to manage these challenges on their own.