Study warns AI medical advice could pose “dangerous” health risks

A major new study from the University of Oxford has found that using AI chatbots for medical advice may pose health risks because they often give inaccurate and inconsistent guidance.

The research, published on 9 February 2026, was carried out by the Oxford Internet Institute and the Nuffield Department of Primary Care Health Sciences.

It involved nearly 1,300 participants in the United Kingdom, who were asked to use AI tools to assess medical scenarios.

Participants were presented with 10 medical cases written by doctors, ranging from common illnesses to potentially serious conditions.

They were then asked to identify the likely condition in each case and decide what the next step should be, such as visiting a doctor or seeking emergency care.

The study tested how large language models (LLMs) performed when helping users make real health decisions.

It found that people using AI tools were no more accurate in identifying conditions or choosing an appropriate course of action than those who relied on traditional sources like internet searches or their own judgement. 

One major concern was that users often did not know what information to provide to the chatbots, and the responses they received sometimes mixed accurate and misleading advice, making it difficult to tell what was correct.
Lead researchers said the study highlights a gap between how AI performs on medical exams and how it actually helps people in real-world situations.

They stressed that chatbots should not replace trained healthcare professionals and that patients might be put at risk if they rely too heavily on this technology for medical decisions.

The study comes as companies such as OpenAI and Anthropic have released versions of their chatbots aimed at health-related use. The Oxford researchers warn that stronger safety checks and clearer limits are needed before such tools are widely used for medical guidance.
