An AI-generated news presenter has been used to deliver a politically charged message against Taiwan’s outgoing president, Tsai Ing-wen, in an apparent attempt to influence the Taiwanese presidential election. The presenter, created by an unknown source, used an extended metaphor likening Tsai to “water spinach” to disparage her tenure.
The AI news anchor is part of a broader trend in which AI-generated avatars proliferate on social media to distribute state-backed content, raising concerns about their potential impact on public perception and democratic processes. Experts expect the practice to grow as the technology becomes more accessible; Tyler Williams of Graphika notes that these videos do not need to be perfect to be effective on platforms such as TikTok.
China has been experimenting with AI-generated news presenters since the state news agency Xinhua introduced Qiu Hao in 2018. While Qiu Hao did not gain much traction, China continues to deploy similar technology for propaganda purposes. A report by Microsoft identified AI-generated disinformation targeting Taiwan’s elections that was created with CapCut, a video-editing tool from ByteDance, the company behind TikTok.
Other instances of AI-generated anchors include a video produced by Storm-1376, a Chinese state-backed influence operation, as well as efforts by Iranian state-backed hackers and the Islamic State to use similar technology for propaganda. Ukraine has also experimented with AI in its official communications, launching an AI-generated spokesperson named Victoria Shi.
Despite their growing sophistication, these AI-generated videos often remain unconvincing, with robotic voices and stiff movements. Experts such as Macrina Wang of NewsGuard suggest that while current AI anchors may not deceive many viewers, the sheer volume of such content and rapid improvements in the technology could pose risks in the future.
Microsoft’s Clint Watts suggests that manipulating footage of real news anchors could prove a more effective tactic than creating fully synthetic figures. Although AI-generated news content has yet to achieve significant impact, the technology’s continued evolution could yield more potent disinformation tools.