'AI Overviews' Is a Mess, and It Seems Like Google Knows It
No, you actually shouldn't add glue to your pizza.
Credit: Google
At its Google I/O keynote earlier this month, Google made big promises about AI in Search, saying that users would soon be able to “Let Google do the Googling for you.” That feature, called AI Overviews, launched on May 14. The result? The search giant spent Memorial Day weekend scrubbing AI answers from the web.
Since Google AI search went live for everyone in the U.S. on May 14, AI Overviews have suggested users put glue in their pizza sauce, eat rocks, and use a “squat plug” while exercising (you can guess what that last one is referring to).
While some examples circulating on social media have clearly been photoshopped for a joke, others were confirmed by the Lifehacker team: Google specifically suggested I use Elmer’s glue in my pizza sauce. Unfortunately, if you try to search for these answers now, you’re likely to see the “an AI overview is not available for this search” disclaimer instead.
Why are Google’s AI Overviews like that?
This isn’t the first time Google’s AI searches have led users astray. When the beta for AI Overviews, known as Search Generative Experience, went live in March, users reported that the AI was sending them to sites known to spread malware and spam.
What's causing these issues? Well, for some answers, it seems like Google’s AI can’t take a joke. Specifically, the AI isn’t capable of discerning a sarcastic post from a genuine one, and it seems to love scanning Reddit for answers. If you’ve ever spent any time on Reddit, you can see what a bad combination that makes.
After some digging, users discovered the source of the AI’s “glue in pizza” advice was an 11-year-old post from a Reddit user who goes by the name “fucksmith.” Similarly, the use of “squat plugs” is an old joke on Reddit’s exercise forums (Lifehacker Senior Health Editor Beth Skwarecki breaks down that particular bit of unintentional misinformation here).
These are just a few examples of problems with AI Overviews. Another, the AI's tendency to cite satirical articles from The Onion as gospel (no, geologists don't actually recommend eating one small rock per day), illustrates the problem particularly well: the internet is littered with jokes that would make for extremely bad advice when repeated deadpan, and that's just what AI Overviews is doing.
Google's AI search results do at least explicitly source most of their claims (though discovering the origin of the glue-in-pizza advice took some digging). But unless you click through to read the complete article, you’ll have to take the AI’s word on their accuracy—which can be problematic if these claims are the first thing you see in Search, at the top of the results page and in big bold text. As you’ll notice in Beth’s examples, like with a bad middle school paper, the words “some say” are doing a lot of heavy lifting in these responses.
Is Google pulling back on AI Overviews?
When AI Overviews get something wrong, they are, for the most part, worth a laugh and nothing more. But when the subject is recipes or medical advice, things can get dangerous. Take this outdated advice on how to survive a rattlesnake bite, or these potentially fatal mushroom identification tips that the search engine also served to Beth.
Credit: Beth Skwarecki
Google has attempted to avoid responsibility for any inaccuracies by tagging the end of its AI Overviews with “Generative AI is experimental” (in noticeably smaller text), although it’s unclear if that will hold up in court should anyone get hurt thanks to an AI Overview suggestion.
There are plenty more examples of AI Overviews messing up circulating around the internet, from Air Bud being confused for a true story to Barack Obama being referred to as Muslim, but suffice it to say that the first thing you see in Google Search is now even less reliable than it was when all you had to worry about was sponsored ads.
Assuming you even see it: Anecdotally, and perhaps in response to the backlash, AI Overviews currently seem to be far less prominent in search results than they were last week. While writing this article, I tried searching for common advice and facts like “how to make banana pudding” or “name the last three U.S. presidents”—things AI Overviews had confidently answered for me on prior searches without error. For about two dozen queries, I saw no overviews at all, an absence that tracks with the email Google representative Meghann Farnsworth sent to The Verge, which indicated the company is “taking swift action” to remove certain offending AI answers.
Google AI Overviews is broken in Search Labs
Perhaps Google is simply showing an abundance of caution, or perhaps the company is paying attention to how popular anti-AI hacks like clicking on Search’s new web filter or appending udm=14 to the end of the search URL have become.
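For the curious, the udm=14 trick is nothing more than a query parameter: tacking `udm=14` onto a Google Search URL requests the stripped-down “Web” view, with no AI Overviews or knowledge panels. A minimal sketch of building such a URL (the function name is mine, and since Google hasn't documented the parameter as a stable interface, its behavior could change at any time):

```python
from urllib.parse import urlencode

def web_only_search_url(query: str) -> str:
    """Build a Google Search URL that requests the plain 'Web' results view.

    The udm=14 parameter asks Search for the classic list of links,
    skipping AI Overviews and knowledge panels. It's an undocumented
    hack, not a supported API, so treat it as best-effort.
    """
    params = urlencode({"q": query, "udm": "14"})
    return f"https://www.google.com/search?{params}"

print(web_only_search_url("how to make banana pudding"))
```

Several third-party sites exist purely to redirect searches through a URL like this, which says something about the demand for AI-free results.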
Whatever the case, it does seem like something has changed. In the top-left (on mobile) or top-right (on desktop) corner of Search in your browser, you should now see what looks like a beaker. Click on it, and you’ll be taken to the Search Labs page, where you’ll see a prominent card advertising AI Overviews (if you don’t see the beaker, sign up for Search Labs at the above link). You can click on that card to see a toggle that can be swapped off, but since the toggle doesn’t actually affect search at large, what we care about is what’s underneath it.
Here, you’ll find a demo for AI Overviews with a big, bright “Try an example” button that will display a few low-stakes answers showing the feature in its best light. Below that button are three more “try” buttons, except two of them no longer lead to AI Overviews. I simply saw a normal page of search results when I clicked on them, with the example prompts added to my search bar but not answered by Gemini.
If even Google itself isn’t confident in its hand-picked AI Overview examples, that’s probably a good indication that they are, at the very least, not the first thing users should see when they ask Google a question.
Detractors might say that AI Overviews are simply the logical next step from the knowledge panels the company already uses, where Search directly quotes media without needing to take users to the sourced webpage—but knowledge panels are not without controversy themselves.
Is AI Feeling Lucky?
On May 14, the same day AI Overviews went live, Google Liaison Danny Sullivan proudly declared his advocacy for the web filter, another new feature that debuted alongside AI Overviews, to much less fanfare. The web filter disables both AI and knowledge panels, and is at the heart of the popular udm=14 hack. It turns out some users just want to see the classic ten blue links.
It’s all reminiscent of a debate from a little over a decade ago, when Google drastically reduced the presence of the “I’m Feeling Lucky” button. The quirky feature worked like a prototype for AI Overviews and knowledge panels: it trusted so deeply that the algorithm’s first search result would be correct that it simply sent users straight to it, rather than letting them check the results themselves.
The opportunities for a search to be co-opted by malware or misinformation were just as prevalent then, but the real factor behind I’m Feeling Lucky’s death was that nobody used it. Accounting for just 1% of searches, the button wasn’t worth the millions of dollars in advertising revenue it was costing Google by directing users away from the search results page before they had a chance to see any ads. (You can still use “I’m Feeling Lucky,” but only on desktop, and only if you scroll down past your autocompleted search suggestions.)
It’s unlikely AI Overviews will go the way of I’m Feeling Lucky any time soon: the company has spent a lot of money on AI, and “I’m Feeling Lucky” took until 2010 to die. But at least for now, it seems to have about as much prominence on the site as Google’s most forgotten feature. That users aren’t responding to these AI-generated options suggests that you don’t really want Google to do the Googling for you.