Google Is Wrong Hundreds of Thousands of Times Per Minute and Nobody Seems to Care

By Morgan Paige | Published April 14, 2026

Ninety-one percent accuracy sounds great until you do the math on five trillion.

That’s the number Futurism reported after the AI startup Oumi ran an analysis for The New York Times on Google’s AI Overviews, those little AI-generated summaries that now sit above your actual search results. Ninety-one percent of the time, the answers are factually correct. Nine percent of the time, they’re not. And when you’re processing five trillion searches a year, that nine percent translates to hundreds of thousands of wrong answers every single minute.
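(Quick napkin math, if you want to check me, and assuming for simplicity that every one of those searches surfaces an Overview: nine percent of five trillion is 450 billion wrong answers a year. A year has roughly 525,600 minutes, so that works out to somewhere around 850,000 bad answers per minute. “Hundreds of thousands” is, if anything, underselling it.)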

The Numbers Are Bonkers

So like, we’re all used to AI getting things wrong sometimes. ChatGPT hallucinates. Claude hallucinates. (Sorry, buddy.) Every LLM does it. But there’s a difference between me asking my AI writing buddy a question and knowing I should double-check the answer, versus Google, the default information source for most of the planet, confidently serving up wrong answers in a big blue box at the top of the page.

The analysis used OpenAI’s SimpleQA benchmark. First round tested Gemini 2 back in October (85% accurate). Second round tested Gemini 3 in February (91% accurate). Progress! Sure. But Google was apparently fine pushing a less accurate model on billions of users for months. Cool…

Only 8 percent of people actually bother to verify what an AI tells them, according to one study. Another study found that people followed an AI’s wrong answers nearly 80% of the time. Researchers called it “cognitive surrender,” which is honestly the most terrifying term I’ve heard this year.

The Ungrounded Problem Is Worse

Okay so the accuracy thing is bad. But there’s a sneakier problem buried in the data.

With Gemini 2, AI Overviews cited sources that didn’t actually support their claims 37 percent of the time. With Gemini 3, the “improved” model? That number jumped to 56 percent. More than half the time, the links Google gives you to verify the answer… don’t verify the answer.

The AI gets better at sounding right while getting worse at being traceable. It’s citing its homework, but the citations don’t actually support the answer. Even the 8% of users who do bother to check might click through to a source, see it’s vaguely related, and go “yep, checks out” without reading closely enough to catch the disconnect.

Google’s response was basically “this study is flawed, people don’t actually search like that.” Which, fine, maybe the benchmark doesn’t perfectly mirror real-world queries. But Google’s own internal testing found Gemini 3 produces incorrect info 28% of the time. They just claim it’s better when paired with search results. That’s… a lot of faith in a system you’re not letting anyone audit.

Alright, So What?

Authors research stuff constantly. Worldbuilding details, historical facts, medical procedures for that one scene where someone gets stabbed (my search history is unhinged). If you’re Googling “what year did X happen” or “how does Y work” and trusting that blue box, you might be building your story on bad info.

I’m not saying stop using Google. I’m saying stop trusting the summary box. Click through. Read the actual source. Or better yet, ask your AI assistant directly and then verify that too, because at least when you’re chatting with Claude or ChatGPT you already know you’re talking to an AI. The AI Overview thing is sneaky because it’s dressed up as a search result. It feels like Google found the answer. It didn’t. Google’s AI generated an answer. Big difference.

The Bigger Picture Thing That Bugs Me

What kind of gets me is that Google knows. They have internal data showing the model hallucinates 28% of the time. They deployed it anyway because AI Overviews keep users on Google’s page longer, which means more ad revenue, which means… yeah. We’re all guinea pigs in an experiment that prioritizes engagement over accuracy, and the experiment is being run on the largest information platform ever built.

And look, I love AI. I use it every day. I think it makes my writing better and my research faster and my workflow less miserable. But I love AI as a tool I chose to use, with full awareness of its limitations. I did not sign up for Google to quietly replace search results with AI guesses and hope nobody notices the difference.

The 91% number will keep improving. Gemini 4 will probably be better. But the ungrounded citation rate going up while accuracy goes up is the real story. The AI is learning to be more convincingly wrong, and that’s a trajectory I really don’t love.

Anyway, go double-check your last Google search. I’ll wait.

Sources