The dangers of AI health summaries: What Google’s removal means for South Africans

Yasmine Jacobs

Google has removed some of its AI Overviews after an investigation.

Image: IOL/RON AI

Google has quietly removed some of its artificial intelligence‑generated health summaries from search results after an international investigation found the feature could put users at risk of harm by offering misleading medical information.

The AI‑powered feature, known as Google AI Overviews, is designed to sit at the top of search results and offer digestible “snapshots” of key information when people ask questions, including about health and medical issues. But a Guardian investigation found that in several cases the summaries were inaccurate and potentially dangerous, giving users false reassurance about serious health matters.

In one example, the AI provided incorrect “normal ranges” for liver blood tests without accounting for crucial factors such as age, sex, ethnicity, or nationality. This is particularly relevant for South African patients, whose diverse genetic and environmental backgrounds can affect how such results should be interpreted clinically.

Such an error could lead people with serious liver disease to wrongly believe their results are normal and to delay critical care.

Experts described the situation as “dangerous” and “alarming”, with concerns that such AI summaries could mislead people during moments of anxiety about their health or when trying to interpret complex clinical tests online.

Google’s response and limitations

In response to the findings, Google confirmed it has removed AI Overviews for specific queries such as “what is the normal range for liver blood tests” and “what is the normal range for liver function tests”.

The company said it routinely makes broad improvements to its systems where context is missing and applies internal policies aimed at improving accuracy, but it declined to comment on individual removals.

Google’s statement emphasised that its internal clinical reviewers found many of the examples cited were still supported by “well‑known and reputable sources”.

What does this mean for South Africans?

Google’s dominance of search in South Africa, as in much of the world, means many people turn to the platform first for health information. Yet there are no local safeguards ensuring AI summaries reflect the regional clinical guidelines or population‑specific reference ranges used in South African medical practice. This raises questions for doctors and public‑health advocates about the reliability of AI tools for health queries in local contexts.

South African users are urged not to accept Gemini‑powered AI Overviews at face value, especially for serious topics such as health and medical conditions, due to known instances of AI hallucinations and misleading content. 

International health communication experts say that while improved access to information online is valuable, AI systems must clearly signpost evidence‑based and reputable health sources rather than present summaries that could be mistaken for professional medical advice. 

Despite the specific removals, AI Overviews continue to appear for other health‑related searches, including conditions like cancer and mental health.

In light of this, South Africans are advised to cross‑reference AI answers with trusted local health authorities, such as the National Department of Health, the Health Professions Council of South Africa, and reputable hospital networks, before acting on medical information found online.

IOL