Anthropic, a start-up founded by former OpenAI employees, is leading the way in AI safety with the imminent release of Claude 2, its advanced AI chatbot. By prioritizing safety and ethical principles through Constitutional AI, the company aims to set a positive industry standard amid concerns about existential risk and an AI arms race.
Anthropic, an AI safety-focused start-up, is on the cusp of releasing Claude 2, its new AI chatbot. Headquartered in San Francisco with just 160 employees, the team is deeply concerned about the potential existential risks posed by AI. Founded in 2021 by former OpenAI employees, including CEO Dario Amodei and President Daniela Amodei, Anthropic aims to prioritize safety and social responsibility.
Claude 2, like its competitors ChatGPT and Bard, can perform a variety of tasks, from writing poems to drafting business plans. Anthropic's distinctive safety approach, however, is a method called Constitutional AI, in which a model is trained to follow a written set of ethical principles, with a second AI model evaluating its outputs against those principles and correcting them.
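For readers curious how such a critique-and-revision loop might look in practice, the sketch below illustrates the general idea. It is a simplified, hypothetical example, not Anthropic's actual code: the `generate` helper, the model roles, and the sample principles are placeholders standing in for real language-model calls.

```python
# Illustrative sketch of a Constitutional AI-style critique-and-revision loop.
# This is a simplified stand-in, not Anthropic's implementation; generate() is
# a hypothetical placeholder for a call to a language model.

CONSTITUTION = [
    "Choose the response that is least likely to be harmful.",
    "Choose the response that is most honest and helpful.",
]

def generate(role: str, prompt: str) -> str:
    # Placeholder: a real system would query a language model here.
    return f"[{role} output for: {prompt[:40]}...]"

def constitutional_revision(user_prompt: str) -> str:
    # 1. Draft an initial answer with the assistant model.
    answer = generate("assistant", user_prompt)
    for principle in CONSTITUTION:
        # 2. A second pass critiques the answer against one principle.
        critique = generate(
            "critic",
            f"Principle: {principle}\nAnswer: {answer}\n"
            "Identify any way the answer conflicts with the principle.",
        )
        # 3. The answer is revised in light of the critique.
        answer = generate(
            "assistant",
            f"Answer: {answer}\nCritique: {critique}\n"
            "Rewrite the answer so that it satisfies the principle.",
        )
    # In training, revised answers like these become examples the model
    # learns from, so the principles are baked into its behavior.
    return answer

if __name__ == "__main__":
    print(constitutional_revision("Explain how to pick a lock."))
```

The key design point the sketch is meant to convey is that the guidance comes from an explicit, written list of principles rather than solely from human raters scoring individual responses.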
Despite its significant progress, the company remains vigilant about the potential misuse of its technology. Its concerns extend to the prospect of AI reaching artificial general intelligence (AGI) and the catastrophic consequences that could follow.
Anthropic’s founding was heavily influenced by effective altruism, a movement that emphasizes using evidence and reason to do the most good. That influence is visible in its employees and its funding sources, which include significant backing from tech executives and organizations associated with the movement.
While Anthropic faces criticism for seemingly fueling the very AI arms race it warns against, the company maintains that building advanced models itself enables better safety research and more secure AI products for today’s users.
Anthropic’s employees believe that their rigorous focus on safety could set a positive industry standard, even as they acknowledge the challenges and contradictions inherent in their mission.