Current and former employees of top tech firms warn of the dangers of unregulated AI, citing the potential for ‘human extinction’ from artificial general intelligence (AGI). Their open letter calls for stronger whistleblower protections and greater public and policymaker engagement amid growing concerns about AI’s impact.
A group of current and former employees from leading Silicon Valley firms, such as OpenAI, Anthropic, and Google’s DeepMind, has issued an open letter raising alarms about the potential existential risks posed by artificial intelligence (AI). The letter, signed by 13 mostly former employees, highlights concerns that AI technologies, particularly artificial general intelligence (AGI), could lead to “human extinction” if not adequately regulated and controlled.
The letter calls for stronger worker protections so that employees can freely voice concerns about the implications of new AI advancements, and it seeks broader engagement from the public and policymakers. One signatory, Neel Nanda of DeepMind, emphasized on social media that while he has no specific criticisms of his employer, he believes robust whistleblower protections are necessary given the potential existential risks of AGI.
The employees’ letter comes in the wake of recent controversies at OpenAI, including a requirement that departing staff sign non-disparagement agreements in exchange for vested equity, a policy OpenAI later revoked. OpenAI has faced additional scrutiny after actress Scarlett Johansson claimed her voice was used as the model for one of its AI products without her consent, an allegation the company denies.
OpenAI maintains it is committed to advancing AI safely and responsibly, citing its internal initiatives for employee feedback and regulation support as part of its efforts to manage AI risks. The firm has also recently faced internal upheavals, including the disbanding of a team focused on long-term AI risks and the resignation of several top researchers.
The open letter underscores a pressing debate within the tech industry about balancing the innovative potential of AI with the need for effective oversight to prevent unintended and potentially catastrophic consequences.