It looks like Google’s latest attempt to make people’s lives easier with artificial intelligence (AI) is backfiring.
The tech giant’s new tool, ‘AI Overviews’, gives users AI-powered summaries of search results in Chrome, Firefox and the Google app browser.
But since its introduction this month, people have started noticing that it produces incorrect statements and suggestions – many of which are dangerous.
Among them are claims that you can ‘use gasoline to make a spicy spaghetti dish’, eat rocks and spread glue on your pizza.
In response to the search query “cheese doesn’t stick to pizza,” Google suggests adding “non-toxic glue” to the sauce to make it more sticky.
According to The Verge, this answer originally came from a joke made on Reddit over a decade ago.
Another user who searched for “How Many Rocks Should I Eat” got a response taken from a 2021 article by satirical site The Onion.
AI Overviews says: “According to geologists at UC Berkeley, you should eat at least one small pebble a day.”
It continued: “They say rocks are an essential source of minerals and vitamins important for digestion.”
The Google tool also claims that a dog played in the NBA, astronauts met cats on the moon and former US President James Madison graduated from the University of Wisconsin 21 times.
Toby Walsh, professor of AI at the University of New South Wales (UNSW Sydney), called it a “PR disaster for the search giant”.
As Professor Walsh explains, AI Overviews uses a type of ‘generative AI’ – the same technology that powers rival product ChatGPT – to produce summaries of search results based on data from across the web.
But generative AI tools don’t know what’s true and what’s not – only what’s popular (e.g. the Onion article about eating rocks).
“Ask it ‘how to keep bananas fresh for longer’ and it uses AI to generate a useful summary of tips, such as keeping them in a cool, dark place and away from other fruits like apples,” the academic wrote in The Conversation.
“But ask it a left-field question and the results can be disastrous, or even dangerous.”
In an official statement, Google said it is “taking swift action where necessary” to make the tool’s responses more accurate.
“The vast majority of AI overviews provide high-quality information, with links to dig deeper into the web,” a spokesperson said.
“Many of the examples we saw were unusual queries, and we also saw examples that had been modified or that we couldn’t reproduce.
“We conducted extensive testing before launching this new experience, and as with other features we’ve launched in Search, we appreciate the feedback.
“We are taking swift action where necessary under our content policy and are using these examples to develop wider improvements to our systems, some of which have already been rolled out.”
AI Overviews rolled out first to people in the US, and Google hopes more than 1 billion people worldwide will have access to it by the end of the year.
Announcing the feature in a blog post on May 14, the company said it gives users quick answers and helps those who need information quickly.
Like other tech companies, Google has turned its focus to AI since the success of ChatGPT.
Last year, Google launched its own AI, Gemini, as a rival to ChatGPT, but the chatbot was plagued with problems.
This culminated in Google pausing Gemini’s image generation after it was accused of depicting white historical figures, including Nazi-era soldiers, as people of colour.