Don't ask AI chatbots for medical advice, study warns

Source: Newsweek

Trust your doctor, not a chatbot. That's the sobering conclusion of a new study published in the journal Annals of Internal Medicine, which reveals how easily artificial intelligence (AI) systems can be misused to spread dangerous health misinformation.

Researchers experimented with five leading AI models developed by Anthropic, Google, Meta, OpenAI and X Corp. All five systems are widely used, forming the backbone of the AI-powered chatbots embedded in websites and apps around the world.

Using developer tools not typically accessible to the public, the researchers found that they could easily program instances of the AI systems to respond to health-related questions with incorrect -- and potentially harmful -- information.

Worse, the chatbots were found to dress up their false answers in convincing trappings.

"In total, 88 percent of all responses were false," explained paper author Natansh Modi of the University of South Africa in a statement.
"And yet they were presented with scientific terminology, a formal tone and fabricated references that made the information appear legitimate."

The false claims included debunked myths: that vaccines cause autism, that HIV is an airborne disease and that 5G causes infertility.

Of the five chatbots evaluated, four presented responses that were 100 percent incorrect. Only one model showed some resistance, generating disinformation in 40 percent of cases.

The research didn't stop at theoretical vulnerabilities; Modi and his team went a step further, using OpenAI's GPT Store -- a platform that allows users to build and share customized ChatGPT apps -- to test how easily members of the public could create disinformation tools themselves.

"We successfully created a disinformation chatbot prototype using the platform and we also identified existing public tools on the store that were actively producing health disinformation," said Modi.

He emphasized: "Our study is the first to systematically demonstrate that leading AI systems can be converted into disinformation chatbots using developers' tools, but also tools available to the public."

According to the researchers, the threat posed by manipulated AI chatbots is not hypothetical -- it is real and happening now.

"Artificial intelligence is now deeply embedded in the way health information is accessed and delivered," said Modi.
"Millions of people are turning to AI tools for guidance on health-related questions.
"If these systems can be manipulated to covertly produce false or misleading advice then they can create a powerful new avenue for disinformation that is harder to detect, harder to regulate and more persuasive than anything seen before."

Previous studies have already shown that generative AI can be misused to mass-produce health misinformation -- such as misleading blogs or social media posts -- on topics ranging from antibiotics and fad diets to homeopathy and vaccines.

What sets this new research apart is that it is the first to show how foundational AI systems can be deliberately reprogrammed to act as disinformation engines in real time, responding to everyday users with false claims under the guise of credible advice.

The researchers found that even when the prompts were not explicitly harmful, the chatbots could "self-generate harmful falsehoods."

While one model -- Anthropic's Claude 3.5 Sonnet -- showed some resilience by refusing to answer 60 percent of the misleading queries, researchers say this is not enough. The protections across systems were inconsistent and, in most cases, easy to bypass.

"Some models showed partial resistance, which proves the point that effective safeguards are technically achievable," Modi noted.
"However, the current protections are inconsistent and insufficient. Developers, regulators and public health stakeholders must act decisively, and they must act now."

If left unchecked, the misuse of AI in health contexts could have devastating consequences: misleading patients, undermining doctors, fueling vaccine hesitancy and worsening public health outcomes.

The study's authors call for sweeping reforms -- including stronger technical filters, better transparency about how AI models are trained, fact-checking mechanisms and policy frameworks to hold developers accountable.

They draw comparisons with social media, where false information has been found to spread up to six times faster than the truth, warning that AI systems could supercharge that trend.

"Without immediate action," Modi said, "these systems could be exploited by malicious actors to manipulate public health discourse at scale, particularly during crises such as pandemics or vaccine campaigns."

Newsweek has contacted Anthropic, Google, Meta, OpenAI and X Corp for comment.

Modi, N. D., Menz, B. D., Awaty, A. A., Alex, C. A., Logan, J. M., McKinnon, R. A., Rowland, A., Bacchi, S., Gradon, K., Sorich, M. J., & Hopkins, A. M. (2024). Assessing the system-instruction vulnerabilities of large language models to malicious conversion into health disinformation chatbots. Annals of Internal Medicine. https://doi.org/10.7326/M24-1054