The rise of AI-generated deepfakes threatens public discourse, prompting calls for new laws and stronger regulation to combat misinformation on social media platforms during major events like the COVID-19 pandemic and U.S. presidential elections.
With the rise of generative artificial intelligence, the public is increasingly vulnerable to disinformation and misinformation on social media platforms. The problem is particularly pronounced during major events, as the COVID-19 pandemic and the 2016 and 2020 U.S. presidential elections demonstrated.
AI-generated deepfakes amplify the threat, complicating efforts to tackle false information. Experts suggest that new laws or stronger self-regulation by tech companies are needed to address the problem. Barbara McQuade, a former U.S. attorney and University of Michigan law professor, advocates federal legislation, arguing that the technology is too new for existing laws to address it adequately.
A recent assessment by the U.S. Department of Homeland Security warns that AI poses a significant threat to the 2024 presidential election, giving bad actors new ways to exploit fast-moving events and interfere with election processes. Social media companies, however, are currently shielded from civil liability for content their users post under Section 230 of the Communications Decency Act of 1996, which has fueled debate over whether the law should be updated or amended.
Legal experts like Gautam Hans, a law professor at Cornell University, raise First Amendment concerns and point to the difficulty of crafting effective legislation against disinformation. Both McQuade and Hans suggest that tech companies' own internal policies and practices may offer better long-term solutions than legislative measures.
Social media giants such as Meta, TikTok, and X (formerly Twitter) maintain their own misinformation policies, which include removing harmful content, requiring disclosure of AI-generated material, and labeling misleading media. Yet incidents such as the viral spread of sexually explicit AI-generated images of Taylor Swift on X suggest that existing policies are not always sufficient.
The debate over Section 230 continues: Republicans view it as a shield for Big Tech censorship, while Democrats argue it does too little to combat hate speech. The U.S. Supreme Court's recent rulings in favor of Twitter and Google in lawsuits related to terrorist attacks left the law intact and the debate unresolved. Both sides demand greater accountability but differ on how to achieve it.