Google’s AI Overviews Under Fire for Providing Misleading Health Information
The Guardian has published an investigation in which health experts identified inaccurate or misleading guidance in some AI Overview responses to medical queries. The investigation tested health-related searches and shared the AI-generated responses with charities, medical experts, and patient information groups. Google disputes the reporting, saying many of the examples were based on incomplete screenshots, and maintains that most AI Overviews are accurate and helpful.
What The Guardian Reported Finding
The Guardian tested a range of health queries and asked health organizations to review the AI-generated summaries. Several reviewers said the summaries included misleading or incorrect guidance. In one example, an AI Overview advised pancreatic cancer patients to avoid high-fat foods, guidance that Anna Jewell, director of support, research, and influencing at Pancreatic Cancer UK, called “completely incorrect.” She warned that following such advice “could be really dangerous and jeopardize a person’s chances of being well enough to have treatment.”
The reporting also highlighted mental health queries. Stephen Buckley, head of information at Mind, said some AI summaries for conditions such as psychosis and eating disorders offered “very dangerous advice” and were “incorrect, harmful, or could lead people to avoid seeking help.” Athena Lamnisos, chief executive of the Eve Appeal cancer charity, said that listing a Pap test as a test for vaginal cancer was “completely wrong information.” Sophie Randall, director of the Patient Information Forum, noted that the examples showed “Google’s AI Overviews can put inaccurate health information at the top of online searches, presenting a risk to people’s health.”
Google’s Response
Google disputed both the examples and the conclusions. A spokesperson told The Guardian that many of the health examples shared were “incomplete screenshots,” but that, from what the company could assess, they linked “to well-known, reputable sources and recommend seeking out expert advice.” Google emphasized that the “vast majority” of AI Overviews are “factual and helpful” and that it “continuously” makes quality improvements. The company also argued that the accuracy of AI Overviews is “on a par” with that of other Search features, including featured snippets.
The Broader Accuracy Context
This investigation comes amid a debate that has been ongoing since AI Overviews expanded in 2024. AI Overviews initially drew attention for bizarre results, including suggestions involving glue on pizza and eating rocks, and Google subsequently announced it would reduce the scope of queries that trigger AI-written summaries and refine how the feature works. More recently, data from Ahrefs suggests that medical YMYL (“Your Money or Your Life”) queries are more likely than average to trigger AI Overviews. In an analysis of 146 million search engine results pages (SERPs), Ahrefs reported that 44.1% of medical YMYL queries triggered an AI Overview, more than double the overall baseline rate in the dataset.
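To make the comparison concrete, here is a minimal back-of-the-envelope sketch in Python. It uses only the two figures quoted above; the “baseline ceiling” is not a number from the Ahrefs report but the upper bound implied by the “more than double” claim.

```python
# Back-of-the-envelope check on the Ahrefs figures quoted above.
# Only the 44.1% rate and the 146M SERP count come from the article;
# the baseline ceiling is derived from the "more than double" claim.

medical_ymyl_rate = 0.441        # medical YMYL queries that triggered an AI Overview
serps_analyzed = 146_000_000     # SERPs in the Ahrefs dataset

# If 44.1% is "more than double" the overall baseline, the baseline
# trigger rate across all queries must be below 44.1% / 2 = 22.05%.
baseline_ceiling = medical_ymyl_rate / 2

print(f"Medical YMYL trigger rate:      {medical_ymyl_rate:.1%}")
print(f"Implied baseline must be below: {baseline_ceiling:.2%}")
print(f"Dataset size:                   {serps_analyzed:,} SERPs")
```

In other words, whatever the exact baseline in the Ahrefs dataset, the reported figure means medical queries sit well above it, which is why the finding matters for health publishers specifically.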
Why This Matters
AI Overviews appear above the ranked results, and when the topic is health, errors carry extra weight. Health publishers have spent years documenting medical expertise to meet the elevated quality bar Google applies to YMYL content; this investigation turns the same spotlight on Google’s own summaries sitting at the top of the results. The Guardian’s reporting also highlights a practical problem: the same query can produce different summaries at different times, which makes it hard to verify a given response by simply rerunning the search.
Looking Ahead
Google has adjusted AI Overviews after viral criticism before. Its response to The Guardian signals that it expects the feature to be judged like other Search features, such as featured snippets, rather than held to a separate standard. Whether health queries warrant that separate standard is now the live question.
Conclusion
The Guardian’s investigation raises serious concerns about the accuracy of Google’s AI Overviews for health-related queries. Google disputes the findings, but the episode underscores the need for ongoing, independent evaluation of AI-generated summaries, especially on topics where bad information can affect treatment decisions.

