Google AI’s health answers under fire for citing YouTube over medical sources
Google AI is under scrutiny for citing YouTube over medical sources in health searches, raising serious concerns about misinformation and public safety.

A growing controversy surrounds Google’s AI Overviews after investigations revealed that the tool frequently relies on YouTube videos, rather than credible medical sources, when answering health and medical queries. Following reports of inaccurate and potentially dangerous advice, experts are warning that AI-driven search results could pose serious public health risks if left unchecked.
In January 2026, an investigation by The Guardian uncovered a troubling flaw in Google’s AI-powered search summaries. The probe revealed that AI Overviews were presenting misleading medical information, including incorrect interpretations of liver function tests, errors serious enough to potentially delay diagnosis and treatment for critical illnesses.
The findings prompted Google to temporarily remove AI Overviews for several sensitive health-related search queries, such as normal ranges for liver blood tests. While this move was intended to reduce harm, a separate study soon raised fresh alarms: Google’s AI was increasingly citing YouTube videos instead of trusted medical websites when responding to health queries.
A study by SEO platform SE Ranking analysed more than 50,000 health-related searches in Germany. It found that YouTube accounted for 4.43% of AI citations, making it the single most referenced source, ahead of established medical portals such as netdoktor.de and even authoritative references such as the MSD Manuals. Alarmingly, only 34.45% of AI citations came from reliable medical sources, while government health institutions and academic journals together accounted for only about 1%.
The concern lies in YouTube’s nature as a general-purpose platform. While certified doctors and hospitals do share content there, so do influencers and creators without medical training. Researchers highlighted several dangerous examples, including AI advice that wrongly suggested pancreatic cancer patients avoid high-fat foods, guidance that contradicts medical recommendations and could worsen patient outcomes. Similar inaccuracies were found in AI responses related to women’s cancer screenings.
The issue extends beyond Google Search. With AI chatbots increasingly acting as digital doctors, reliance on tools like OpenAI’s ChatGPT is surging. OpenAI estimates that nearly 40 million people globally seek healthcare advice from ChatGPT daily. In Canada, a 2026 survey by the Canadian Medical Association found that nearly half of respondents consult Google AI summaries or chatbots for medical concerns.
However, studies suggest this trust may be misplaced. Research from the University of Waterloo and Harvard University showed that AI systems often provide incorrect or overly agreeable responses, failing to challenge flawed assumptions. Experts warn that this tendency creates a “confident authority” problem, in which AI delivers wrong information with unwarranted certainty.
While AI tools offer quick answers in an overstretched healthcare system, medical professionals stress they should never replace qualified doctors. As AI continues to shape how people seek health information, the growing dependence on non-medical sources raises urgent questions about accountability, accuracy, and public safety.

