Frustration Mounts Over False Results In Google’s ‘AI Overviews’

Frustration with Google’s AI Overviews feature is growing among users and web publishers over incorrect results and fewer links to web content.

Users have begun sharing ways to disable the feature, which was rolled out to all users in the US following Google’s I/O developer conference earlier this month.

“I’m finding the results very repetitive, often wrong, and they don’t at all match what I’m looking for but they take up so much space and feel in the way. I just want them to go away,” one user wrote on Google’s support forums.

Users provided examples such as the AI advising users to add “non-toxic glue” to cheese to make it stick to pizza, a result apparently drawn from a post on Reddit that was intended as a joke.

Image credit: Google

AI Overviews

Another result claimed that geologists recommend people eat at least one rock per day, a claim drawn from an article on the satirical website The Onion.

Google told Silicon UK that such results are “isolated examples”.

The company doesn’t provide a way to turn off AI Overviews, which push conventional results further down the page, but the feature can be bypassed using browser plug-ins or by manually redirecting searches to Google’s stripped-down “web” search option.

The feature is powered by Google’s generative AI, a technology that leapt into the spotlight in late 2022 with the introduction of OpenAI’s ChatGPT, which Google immediately identified as a potential existential threat to its core search product.

As a result Google is highlighting its own generative AI tools to prevent users from going to alternatives from OpenAI, Microsoft or others.


‘Foreseeable effect on society’

It added AI Overviews to searches in the US and the UK earlier this year as an opt-in feature, and is now planning to roll out the tool to all geographical markets.

Industry experts said it was troubling that generative AI was being given such a prominent place in Google’s results, given that it is known to “hallucinate”, meaning it routinely generates plausible-sounding but false information modelled on material it has found online.

“Look, this isn’t about ‘gotchas’, this is about pointing out clearly foreseeable harms. Before–eg–a child dies from this mess,” wrote former Google AI ethics researcher Margaret Mitchell.

“This isn’t about Google, it’s about the foreseeable effect of AI on society.”

Web publishers are also concerned, with Gartner forecasting a 25 percent decline in search engine traffic volume by 2026 due to the use of generative AI.


Search traffic

As a result, some news publishers, such as News Corp, are striking deals with AI companies, while others, such as the New York Times, are suing OpenAI and Microsoft, arguing that the training and operation of ChatGPT violates copyright law.

Google said the incorrect AI summaries highlighted were “generally very uncommon queries, and aren’t representative of most people’s experiences”.

“The vast majority of AI overviews provide high quality information, with links to dig deeper on the web,” the company said.

Matthew Broersma

Matt Broersma is a long-standing freelance technology journalist who has worked for Ziff-Davis, ZDNet and other leading publications.
