Artificial intelligence start-up Haize Labs, founded by Leonard Tang, Steve Li, and Richard Liu, has discovered thousands of vulnerabilities in popular generative AI programs, including the video creator Pika, the text generator ChatGPT, the image generator Dall-E, and a code-generating AI system. The start-up aims to set industry-wide safety standards for AI models.
During its tests, Haize Labs found that these tools could produce violent or sexualized content, provide instructions for making chemical and biological weapons, and enable automated cyberattacks. The start-up positions itself as an “independent third-party stress tester” that aims to establish safety ratings for AI models, much as Moody’s assigns credit ratings to bonds.
As more companies build generative AI into their products, safety concerns are growing. Google and Air Canada have both previously faced criticism over harmful or erroneous answers given by their AI tools. Jack Clark, co-founder of the AI research firm Anthropic, emphasized the need for independent organizations to test the capabilities and safety of AI models, and Graham Neubig, Associate Professor of Computer Science at Carnegie Mellon University, highlighted the importance of impartial third-party safety tools.
Haize Labs is publishing its findings on GitHub and has alerted the developers of the affected tools to the vulnerabilities. The start-up is also partnering with Anthropic to stress test an unreleased AI product. Tang noted that automated systems are needed to identify AI vulnerabilities efficiently, since manual testing is time-consuming and exposes human moderators to harmful content.