Google’s latest AI tool, AI Overviews, has come under fire for generating inaccurate and potentially harmful information in its search result summaries. Critics have pointed to bizarre and unsafe advice, raising concerns about user safety and the tool’s credibility. Amid the backlash, Google says it is working to improve accuracy and reliability through swift updates.
Google’s latest AI tool, “AI Overviews,” has encountered significant issues since its rollout this month. The tool, designed to provide AI-generated summaries of search results in Chrome, Firefox, and the Google app’s built-in browser, has been criticized for returning inaccurate and potentially dangerous responses.
Some of the erroneous suggestions include using gasoline to cook spaghetti, eating rocks for health benefits, and adding non-toxic glue to pizza sauce. For example, a search for “how to prevent cheese sliding off pizza” returned a suggestion to mix non-toxic glue into the sauce. Other bizarre responses included claims that doctors recommend smoking during pregnancy and that staring at the sun for 5 to 15 minutes is safe.
Internet analyst Jeremiah Johnson and AI expert Toby Walsh have highlighted these issues, calling the situation a “PR disaster” for Google. The underlying technology, generative AI similar to that behind ChatGPT, does not reliably distinguish between credible and non-credible sources, which leads to these problematic outputs.
In response, Google said it is taking swift action to improve the accuracy of AI Overviews. The company maintains that most AI Overview responses provide high-quality information and that it is addressing the problems through updates and feedback-driven improvements. The tool is currently being rolled out gradually in the US, with plans to make it available to more than 1 billion users worldwide by the end of the year.