
AI-generated health summaries removed after reports that users’ health could be at risk from misleading information
Google has removed some of its artificial intelligence (AI) summaries after a report that false and misleading information was putting people's health at risk.
An investigation by The Guardian found that some of the AI summaries, which appear at the top of search results, served up inaccurate health information. In particular, Google provided bogus information about crucial liver function tests, which could leave people with serious liver disease wrongly thinking they were healthy.
The report found that typing “what is the normal range for liver blood tests” served up masses of numbers with little context and no account taken of a patient’s nationality, sex, ethnicity or age.
Healthcare experts quoted in the article said the findings were “dangerous and alarming,” as what Google’s AI Overviews said was normal could vary drastically from what was actually considered normal. They warned that the summaries could lead to seriously ill patients wrongly thinking they had a normal test result and then deciding not to attend follow-up healthcare appointments.
Following the report, Google has removed AI Overviews for the highlighted search terms “what is the normal range for liver blood tests” and “what is the normal range for liver function tests”.
A Google spokesperson said: “We do not comment on individual removals within Search. In cases where AI Overviews miss some context, we work to make broad improvements, and we also take action under our policies where appropriate.”
Vanessa Hebditch, Director of Communications and Policy at the British Liver Trust, said: “This is excellent news, and we’re pleased to see the removal of the Google AI Overviews in these instances.
“However, if the question is asked in a different way, a potentially misleading AI Overview may still be given, and we remain concerned other AI-produced health information can be inaccurate and confusing.”
She added: “A liver function test or LFT is a collection of different blood tests. Understanding the results and what to do next is complex and involves a lot more than comparing a set of numbers.
“But the AI Overviews present a list of tests in bold, making it very easy for readers to miss that these numbers might not even be the right ones for their test.
“In addition, the AI Overviews fail to warn that someone can get normal results for these tests when they have serious liver disease and need further medical care. This false reassurance could be very harmful.
“Our bigger concern with all this is that it is nit-picking a single search result, and Google can just shut off the AI Overviews for that, but it’s not tackling the bigger issue of AI Overviews for health.”

Sue Farrington, Chair of the Patient Information Forum, of which DRWF is a member, said: “This is a good result, but it is only the very first step in what is needed to maintain trust in Google’s health-related search results. There are still too many examples out there of Google AI Overviews giving people inaccurate health information.
“It is so important that Google signposts people to robust, researched health information and offers of care from trusted health organisations.”
Sophie Randall, Director of the Patient Information Forum, said the highlighted examples showed “Google’s AI Overviews can put inaccurate health information at the top of online searches, presenting a risk to people’s health”.
The Guardian added that AI Overviews were still available for other health topics highlighted in its investigation, including summaries about cancer and mental health that experts described as “completely wrong” and “really dangerous”.
The news comes as research by Confused.com Life Insurance found that three in ten people in Britain were now self-diagnosing health issues with AI, with a similar proportion saying they were likely to try it in future. The research found that people are turning to tools such as ChatGPT to expand their health knowledge and self-diagnose, and it highlights their most common queries and whether the responses helped.
In addition, a report co-authored by the Nuffield Trust and the Royal College of General Practitioners looked at how many GPs in the UK use AI and what they use the tools for. It found that more than one in four GPs reported using AI tools in their clinical practice, with the majority using them for clinical documentation and note-taking.
Of the GPs who said they worked in more deprived areas, just over one in four said they used AI tools, compared with one in three of those working in more affluent areas.
Nuffield Trust director of research and policy and practising GP Dr Becks Fisher said: "The government is pinning its hopes on the potential of AI to transform the NHS but there is a huge chasm between policy ambitions and the current disorganised reality of how AI is being rolled out and used in general practice."
DRWF is committed to making sure all its information is reliable, evidence-based and accessible.
To make sure its health information meets the highest possible standards, it has signed up for independent assessment under the PIF TICK, the UK quality mark for health and care information.