Daniel Kokotajlo resigns from OpenAI, criticising the company’s approach to AI safety and calling for industry-wide changes. His departure sheds light on the ongoing debate regarding regulation in the AI industry.

In April, Daniel Kokotajlo resigned from his position as a researcher at OpenAI, the company behind ChatGPT. Kokotajlo, who focused on policy and governance at the organization, left citing disagreements over how OpenAI handles AI safety.

Expressing his concerns on the online forum LessWrong, Kokotajlo criticized the company's culture for prioritizing rapid development over careful consideration of potential risks. Although the company pressed him to sign a non-disparagement agreement by threatening his vested equity, worth $1.7 million, Kokotajlo chose to retain his right to speak publicly.

After the situation came to light, OpenAI CEO Sam Altman apologized on social media, saying he had been unaware of the equity provision and accepting responsibility for it. Altman's apology helped quiet the scrutiny of OpenAI.

Kokotajlo's departure is one of several resignations from OpenAI in which former employees have cited safety concerns about the company's AI technology. Kokotajlo and others are advocating for companies to adopt a "Right to Warn" pledge, which calls for revoking non-disparagement agreements, establishing anonymous channels for reporting safety concerns, and fostering a culture of open criticism.

This situation underscores the continuing debate about the regulation of AI companies, highlighting the potential risks posed by rapid AI development and the perceived lack of oversight in the industry.


Aiden brings a human perspective to AI stories at AI WEEK. As the Insight Editor, he delves into the ways AI is transforming the human experience, fostering understanding and connection in an increasingly tech-driven world.
