
Tuesday, 10 February 2026 – 11:00 am
A recent study revealed that artificial intelligence systems are more likely to give incorrect medical advice when the misinformation comes from seemingly reliable sources, such as doctors’ notes or hospital documents, than when it circulates on social media.
Medical misinformation fools artificial intelligence
According to Reuters, the study, published in the journal The Lancet Digital Health, showed that researchers subjected 20 large language models, both open- and closed-source, to multiple tests and found that the models were more likely to believe errors embedded in realistic medical discharge notes than health myths circulating on social platforms.
Dr. Eyal Klang of the Icahn School of Medicine at Mount Sinai in New York, one of the study’s leaders, said that current AI systems tend to treat confident medical language as presumptively true, even when it is clearly false, adding: “For these models, the problem is not so much the validity of the claim as how it is formulated.”
Accuracy of artificial intelligence in the medical field
The researchers explained that AI accuracy is a particular challenge in medicine at a time when the number of health applications claiming to use AI to help patients is growing, and when doctors increasingly rely on it for tasks such as medical transcription and data analysis.
During the study, the models were shown three types of content: real hospital discharge notes that included a fabricated medical recommendation, common health myths collected from Reddit, and 300 short clinical scenarios written by doctors. After analyzing the models’ responses to more than a million questions and prompts, the researchers found that the AI treated the fabricated information as true in about 32% of cases.
That figure rose to nearly 47% when the misinformation came in the form of an official medical note from a health care provider, said Dr. Girish Nadkarni, chief artificial intelligence officer at Mount Sinai Health System and co-leader of the study. In contrast, the AI showed greater caution toward social media content: the rate of passing on misinformation fell to 9% when it came from a Reddit post.
The effect of wording on artificial intelligence behavior
The study also showed that the wording and tone of questions strongly affect AI behavior. Models were more likely to accept false information when questions were phrased in a formal, expert style, for example by indicating that the questioner was a senior doctor.
The results indicated that OpenAI’s GPT models were the least error-prone and the best at detecting misleading information, while other models showed error rates of up to 63.6%.
Nadkarni said that artificial intelligence has great potential to support doctors and patients by providing faster insights, but stressed the need for built-in safeguards that verify medical claims before presenting them as facts. He explained that the study exposes weaknesses and suggests ways to strengthen the reliability of these systems before they are widely integrated into healthcare.
In a related context, a recent study published in Nature Medicine found that relying on artificial intelligence to look up medical symptoms did not outperform traditional online searching in helping patients make sound health decisions.