Google has removed its AI Overviews feature from certain medical and health-related search queries following growing concerns that the tool was providing misleading or inaccurate information to users.
The decision comes after an investigation by The Guardian, which found that Google’s AI-generated summaries, designed to give quick answers at the top of search results, had in some cases delivered incorrect or potentially harmful health advice. The findings reignited global debate over the risks of deploying generative AI systems in sensitive areas such as medicine and public health.
Google confirmed that it has begun restricting AI Overviews for specific categories of health queries while it reviews the system’s performance. The company said it is applying “additional safeguards” to ensure that users searching for medical information are directed to reliable, authoritative sources instead of AI-generated summaries that could be misinterpreted.

AI Overviews, which were expanded globally in 2025, use large language models to summarise information from across the web. While the feature was promoted as a way to simplify complex topics, health experts have repeatedly warned that AI systems can “hallucinate”, confidently presenting false or oversimplified information, particularly when dealing with nuanced medical conditions, symptoms or treatments.
The Guardian’s investigation highlighted examples where AI Overviews appeared to blur the line between verified medical guidance and unproven claims, raising fears that users might rely on AI summaries instead of consulting qualified healthcare professionals.
In response, Google said health and safety remain priorities for Search, noting that it already enforces strict policies for medical content and works closely with clinicians and public health experts. The latest move suggests Google is taking a more cautious approach as regulators and the public scrutinise how AI tools influence access to health information.

The rollback also reflects broader pressure on Big Tech firms to ensure AI systems meet higher standards of accuracy and accountability. Governments in Europe, the United States and parts of Africa have increasingly warned that unchecked AI use in health and finance could pose real-world risks.
For now, Google says AI Overviews will remain active for non-sensitive topics, while medical searches will rely more heavily on traditional links from trusted institutions such as hospitals, research bodies and public health agencies.