{"id":26722,"date":"2025-06-24T21:17:57","date_gmt":"2025-06-24T15:32:57","guid":{"rendered":"https:\/\/www.revoscience.com\/en\/?p=26722"},"modified":"2025-06-24T21:19:03","modified_gmt":"2025-06-24T15:34:03","slug":"llms-factor-in-unrelated-information-when-recommending-medical-treatments","status":"publish","type":"post","link":"https:\/\/www.revoscience.com\/en\/llms-factor-in-unrelated-information-when-recommending-medical-treatments\/","title":{"rendered":"LLMs factor in unrelated information when recommending medical treatments"},"content":{"rendered":"\n<p><strong><em>Researchers find nonclinical information in patient messages \u2014 like typos, extra white space, and colorful language \u2014 reduces the accuracy of an AI model.<\/em><\/strong><\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img data-dominant-color=\"e3e4e6\" data-has-transparency=\"false\" style=\"--dominant-color: #e3e4e6;\" loading=\"lazy\" decoding=\"async\" width=\"900\" height=\"600\" sizes=\"auto, (max-width: 900px) 100vw, 900px\" src=\"https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2025\/06\/MIT_Medium-Message-01-press_0.webp\" alt=\"\" class=\"wp-image-26723 not-transparent\" title=\"\" srcset=\"https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2025\/06\/MIT_Medium-Message-01-press_0.webp 900w, https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2025\/06\/MIT_Medium-Message-01-press_0-675x450.webp 675w, https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2025\/06\/MIT_Medium-Message-01-press_0-768x512.webp 768w, https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2025\/06\/MIT_Medium-Message-01-press_0-150x100.webp 150w\" \/><\/figure>\n\n\n<div class=\"wp-block-post-author\"><div class=\"wp-block-post-author__content\"><p class=\"wp-block-post-author__name\">Adam Zewe<\/p><\/div><\/div>\n\n\n<p>CAMBRIDGE, MA \u2013 A large language model (LLM) deployed to make treatment recommendations can be tripped up by nonclinical information in patient 
messages, like typos, extra white space, missing gender markers, or the use of uncertain, dramatic, and informal language, according to a study by MIT researchers.<\/p>\n\n\n\n<p>They found that making stylistic or grammatical changes to messages increases the likelihood an LLM will recommend that a patient self-manage their reported health condition rather than come in for an appointment, even when that patient should seek medical care.<\/p>\n\n\n\n<p>Their analysis also revealed that these nonclinical variations in text, which mimic how people really communicate, are more likely to change a model\u2019s treatment recommendations for female patients, resulting in a higher percentage of women who were erroneously advised not to seek medical care, according to human doctors.<\/p>\n\n\n\n<p>This work \u201cis strong evidence that models must be audited before use in health care \u2014 which is a setting where they are already in use,\u201d says Marzyeh Ghassemi, an associate professor in the MIT Department of Electrical Engineering and Computer Science (EECS), a member of the Institute for Medical Engineering and Science and the Laboratory for Information and Decision Systems, and senior author of the study.<\/p>\n\n\n\n<p>These findings indicate that LLMs take nonclinical information into account for clinical decision-making in previously unknown ways. They bring to light the need for more rigorous studies of LLMs before they are deployed for high-stakes applications like making treatment recommendations, the researchers say.<\/p>\n\n\n\n<p>\u201cThese models are often trained and tested on medical exam questions but then used in tasks that are pretty far from that, like evaluating the severity of a clinical case. 
There is still so much about LLMs that we don\u2019t know,\u201d adds Abinitha Gourabathina, an EECS graduate student and lead author of the study.<\/p>\n\n\n\n<p>They are joined on the&nbsp;<a href=\"https:\/\/link.mediaoutreach.meltwater.com\/ls\/click?upn=u001.aGL2w8mpmadAd46sBDLfbHgK4KaZI07PIKTdR0LwUUBLEGruO4tkbmY-2Biy6-2BFCtgu9hBeUDCs5OSAV-2BUyNLNJp8g58FIf5dFUudxMuhhQI0EdFcLmYuxqRsbBsgOd1oUX2Rlg17FBIlM7k15NaqSCX-2BdjGCBCjVM2yB7serX-2F50-3Daz8t_Gmh-2FjktplCfWo1o-2BFbkY3J9eYBJUJc-2BSUmMkHo42Dqe4Z0qTEKCmSFnQfWCe8-2B8jgXgQQcW-2Fb1rLKfKZRu-2BLLGScwMYc-2FOCX9RDmpXEBR4BY9i7y-2BNgpMuREG7n76alZiQKPn1vn3zOQmaaxHMDMVUfRiEGrx3oG1YVtNUGGZHm77qXWRAeBOWvPgkLMmITRyXXOcNwEB-2Bi5MQjzwS6YrNnlQsGmku80BxV-2Fdb9JiR8Iyc1LbgzzqapxUjffFFSM-2Bwl4qqDrqzNPA3q7bMs3kBCpGKpJc6FRUh9xiSmYj9tK-2FYJhvrMLQSo5aDNkbIpOtGcDy069VJLdCWb1gPIAyVbitrCT2vUcvNmjexjft3vCsXhlcMpW-2FjjW2BrfxyS-2F8DfOT-2FHIHyUOXVM-2FpdFJ3A-3D-3D\" rel=\"noreferrer noopener\" target=\"_blank\">paper<\/a>, which will be presented at the ACM Conference on Fairness, Accountability, and Transparency, by graduate student Eileen Pan and postdoc Walter Gerych.<\/p>\n\n\n\n<p><strong>Mixed messages<\/strong><\/p>\n\n\n\n<p>Large language models like OpenAI\u2019s GPT-4 are being used to&nbsp;<a href=\"https:\/\/link.mediaoutreach.meltwater.com\/ls\/click?upn=u001.aGL2w8mpmadAd46sBDLfbEJPxreBl27MC9ENjKGXDUMDgJpFqoS6PjsmlldyLz44Wn6mjdswN5cxO2V6kVLG-2F9HGVyNGv8NOYcWuaUv9R2orBrwk1YsC6bMIw9Se75-2BjOh8dnnfkwtb-2BlDEofqUl0TrxFar6RChwlbe3A6H3hAFcYW-2F0umrwVS1n4rufkNed-fJC_Gmh-2FjktplCfWo1o-2BFbkY3J9eYBJUJc-2BSUmMkHo42Dqe4Z0qTEKCmSFnQfWCe8-2B8jgXgQQcW-2Fb1rLKfKZRu-2BLLGScwMYc-2FOCX9RDmpXEBR4BY9i7y-2BNgpMuREG7n76alZiQKPn1vn3zOQmaaxHMDMVUfRiEGrx3oG1YVtNUGGZHm77qXWRAeBOWvPgkLMmITRyXXOcNwEB-2Bi5MQjzwS6YrNnlQsGmku80BxV-2Fdb9JiR-2B1VAeoxo8fRisD3OhCGwFr-2FSaL46-2BzLo91qmfgOnJ2mHKoNZ5I49BPqVxkGbz0w2aojF55ev7ZTP4RCZaOXe-2FfG2HulIviouCpiFx7G6RLRcP-2B644yaHHMG68byrMTQwQHz9P-2FFMrHF81BuEx7ML6iPQcJcuSuqjNFblsA7GXXag-3D-3D\" rel=\"noreferrer noopener\" 
target=\"_blank\">draft clinical notes and triage patient messages<\/a>&nbsp;in health care facilities around the globe, in an effort to streamline some tasks to help overburdened clinicians.<\/p>\n\n\n\n<p>A growing body of work has explored the clinical reasoning capabilities of LLMs, especially from a fairness point of view, but few studies have evaluated how nonclinical information affects a model\u2019s judgment.<\/p>\n\n\n\n<p>Interested in how gender impacts LLM reasoning, Gourabathina ran experiments where she swapped the gender cues in patient notes. She was surprised that formatting errors in the prompts, like extra white space, caused meaningful changes in the LLM responses.<\/p>\n\n\n\n<p>To explore this problem, the researchers designed a study in which they altered the model\u2019s input data by swapping or removing gender markers, adding colorful or uncertain language, or inserting extra spaces and typos into patient messages.<\/p>\n\n\n\n<p>Each perturbation was designed to mimic text that might be written by someone in a vulnerable patient population, based on psychosocial research into how people communicate with clinicians.<\/p>\n\n\n\n<p>For instance, extra spaces and typos simulate the writing of patients with limited English proficiency or those with less technological aptitude, and the addition of uncertain language represents patients with health anxiety.<\/p>\n\n\n\n<p>\u201cThe medical datasets these models are trained on are usually cleaned and structured, and not a very realistic reflection of the patient population. We wanted to see how these very realistic changes in text could impact downstream use cases,\u201d Gourabathina says.<\/p>\n\n\n\n<p>They used an LLM to create perturbed copies of thousands of patient notes while ensuring the text changes were minimal and preserved all clinical data, such as medications and previous diagnoses. 
Then they evaluated four LLMs, including the large, commercial model GPT-4 and a smaller LLM built specifically for medical settings.<\/p>\n\n\n\n<p>They prompted each LLM with three questions based on the patient note: Should the patient manage at home, should the patient come in for a clinic visit, and should a medical resource be allocated to the patient, like a lab test.<\/p>\n\n\n\n<p>The researchers compared the LLM recommendations to real clinical responses.<\/p>\n\n\n\n<p><strong>Inconsistent recommendations<\/strong><\/p>\n\n\n\n<p>They saw inconsistencies in treatment recommendations and significant disagreement among the LLMs when they were fed perturbed data. Across the board, the LLMs exhibited a 7 to 9 percent increase in self-management suggestions for all nine types of altered patient messages.<\/p>\n\n\n\n<p>This means LLMs were more likely to recommend that patients not seek medical care when messages contained typos or gender-neutral pronouns, for instance. The use of colorful language, like slang or dramatic expressions, had the biggest impact.<\/p>\n\n\n\n<p>They also found that models made about 7 percent more errors for female patients and were more likely to recommend that female patients self-manage at home, even when the researchers removed all gender cues from the clinical context.<\/p>\n\n\n\n<p>Many of the worst results, like patients told to self-manage when they have a serious medical condition, likely wouldn\u2019t be captured by tests that focus on the models\u2019 overall clinical accuracy.<\/p>\n\n\n\n<p>\u201cIn research, we tend to look at aggregated statistics, but there are a lot of things that are lost in translation. 
We need to look at the direction in which these errors are occurring \u2014 not recommending visitation when you should is much more harmful than doing the opposite,\u201d Gourabathina says.<\/p>\n\n\n\n<p>The inconsistencies caused by nonclinical language become even more pronounced in conversational settings where an LLM interacts with a patient, which is a common use case for patient-facing chatbots.<\/p>\n\n\n\n<p>But in&nbsp;<a href=\"https:\/\/link.mediaoutreach.meltwater.com\/ls\/click?upn=u001.aGL2w8mpmadAd46sBDLfbJQfXi-2BgjtsRXhSuJl6mKAjB2ltMwAQSpuk-2F9FEo12iQ1xoA_Gmh-2FjktplCfWo1o-2BFbkY3J9eYBJUJc-2BSUmMkHo42Dqe4Z0qTEKCmSFnQfWCe8-2B8jgXgQQcW-2Fb1rLKfKZRu-2BLLGScwMYc-2FOCX9RDmpXEBR4BY9i7y-2BNgpMuREG7n76alZiQKPn1vn3zOQmaaxHMDMVUfRiEGrx3oG1YVtNUGGZHm77qXWRAeBOWvPgkLMmITRyXXOcNwEB-2Bi5MQjzwS6YrNnlQsGmku80BxV-2Fdb9JiR9dCbpa9-2BUllXgCDPu1H4Z6hAZ4clBkkvcKkxTZPYdXqJhbAi-2FhVQyXB7M5iunPB7ZiXqDk3KqyEqyRqFae-2Ft0vGpytY1JGZ4o5AWZ4JJ88cZb-2FUrCPhfC2CXhkRv-2BjHlsOOyYGWgfLmgDcDsDSXt8wxVItUoJxmwkCvAdIZmioVg-3D-3D\" rel=\"noreferrer noopener\" target=\"_blank\">follow-up work<\/a>, the researchers found that these same changes in patient messages don\u2019t affect the accuracy of human clinicians.<\/p>\n\n\n\n<p>\u201cIn our follow-up work under review, we further find that large language models are fragile to changes that human clinicians are not,\u201d Ghassemi says. \u201cThis is perhaps unsurprising \u2014 LLMs were not designed to prioritize patient medical care. LLMs are flexible and performant enough on average that we might think this is a good use case. But we don\u2019t want to optimize a health care system that only works well for patients in specific groups.\u201d<\/p>\n\n\n\n<p>The researchers want to expand on this work by designing natural language perturbations that capture other vulnerable populations and better mimic real messages. 
They also want to explore how LLMs infer gender from clinical text.<\/p>\n\n\n\n<p><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Researchers find nonclinical information in patient messages \u2014 like typos, extra white space, and colorful language \u2014 reduces the accuracy of an AI model.<\/p>\n","protected":false},"author":2,"featured_media":26723,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[43],"tags":[],"class_list":["post-26722","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-computer-science"],"featured_image_urls":{"full":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2025\/06\/MIT_Medium-Message-01-press_0.webp",900,600,false],"thumbnail":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2025\/06\/MIT_Medium-Message-01-press_0-200x200.webp",200,200,true],"medium":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2025\/06\/MIT_Medium-Message-01-press_0-675x450.webp",675,450,true],"medium_large":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2025\/06\/MIT_Medium-Message-01-press_0-768x512.webp",750,500,true],"large":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2025\/06\/MIT_Medium-Message-01-press_0.webp",750,500,false],"1536x1536":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2025\/06\/MIT_Medium-Message-01-press_0.webp",900,600,false],"2048x2048":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2025\/06\/MIT_Medium-Message-01-press_0.webp",900,600,false],"ultp_layout_landscape_large":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2025\/06\/MIT_Medium-Message-01-press_0.webp",900,600,false],"ultp_layout_landscape":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2025\/06\/MIT_Medium-Message-01-press_0-870x570.webp",870,570,true],"ultp_layout_portrait":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2025\/06\/MIT_Mediu
m-Message-01-press_0-600x600.webp",600,600,true],"ultp_layout_square":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2025\/06\/MIT_Medium-Message-01-press_0-600x600.webp",600,600,true],"newspaper-x-single-post":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2025\/06\/MIT_Medium-Message-01-press_0-760x490.webp",760,490,true],"newspaper-x-recent-post-big":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2025\/06\/MIT_Medium-Message-01-press_0-550x360.webp",550,360,true],"newspaper-x-recent-post-list-image":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2025\/06\/MIT_Medium-Message-01-press_0-95x65.webp",95,65,true],"web-stories-poster-portrait":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2025\/06\/MIT_Medium-Message-01-press_0-640x600.webp",640,600,true],"web-stories-publisher-logo":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2025\/06\/MIT_Medium-Message-01-press_0-96x96.webp",96,96,true],"web-stories-thumbnail":["https:\/\/www.revoscience.com\/en\/wp-content\/uploads\/2025\/06\/MIT_Medium-Message-01-press_0-150x100.webp",150,100,true]},"author_info":{"info":["Adam Zewe"]},"category_info":"<a href=\"https:\/\/www.revoscience.com\/en\/category\/computer-science\/\" rel=\"category tag\">Computer Science<\/a>","tag_info":"Computer 
Science","comment_count":"0","_links":{"self":[{"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/posts\/26722","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/comments?post=26722"}],"version-history":[{"count":1,"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/posts\/26722\/revisions"}],"predecessor-version":[{"id":26724,"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/posts\/26722\/revisions\/26724"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/media\/26723"}],"wp:attachment":[{"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/media?parent=26722"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/categories?post=26722"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.revoscience.com\/en\/wp-json\/wp\/v2\/tags?post=26722"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}