When large language models (LLMs) use stigmatizing language, patients can feel judged, leading to decreased trust in clinicians.

However, the study also reveals that using targeted prompts can significantly decrease the use of stigmatizing language in LLM responses, offering a potential solution to this issue.
Patient-Centered Communication: A Key to Better Health Outcomes
“Using patient-centered language can build trust and improve patient engagement and outcomes. It tells patients we care about them and want to help,” said corresponding author Wei Zhang, MD, PhD, an assistant professor of Medicine in the Division of Gastroenterology at Massachusetts General Hospital, a founding member of the Mass General Brigham healthcare system. “Stigmatizing language, even through LLMs, may make patients feel judged and could cause a loss of trust in clinicians.”
LLMs generate responses from the everyday language they were trained on, which often includes biased or harmful terms for patients.
Prompt engineering is the practice of strategically crafting input instructions to guide model outputs toward non-stigmatizing language; it can steer LLMs to use more inclusive, patient-centered wording without retraining them.
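For readers curious what this looks like in practice, below is a minimal sketch of prompt engineering, assuming the OpenAI Python client; the model name, instruction wording, and helper function are illustrative and are not the specific prompts or models evaluated in the study.

```python
# Minimal sketch: prepend an instruction that steers the model toward
# non-stigmatizing, person-first language. Assumes the OpenAI Python
# client (pip install openai); prompt wording and model are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Instruction prepended to every request to guide the model's output.
PATIENT_CENTERED_INSTRUCTION = (
    "Use person-first, non-stigmatizing language when discussing alcohol "
    "and substance use disorders. For example, say 'person with alcohol "
    "use disorder' rather than 'alcoholic', and 'substance use' rather "
    "than 'substance abuse'."
)

def draft_patient_message(question: str) -> str:
    """Generate a patient-facing reply with the stigma-reducing prompt applied."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[
            {"role": "system", "content": PATIENT_CENTERED_INSTRUCTION},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(draft_patient_message("How should I explain a relapse to my care team?"))
```

The key design point is that the steering happens entirely in the prompt: the same underlying model is used, but the added instruction shifts its word choices toward patient-centered phrasing.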
The study showed that employing prompt engineering reduced the likelihood of stigmatizing language in LLM responses by 88%.
Specifically, 35.4% of responses from LLMs without prompt engineering contained stigmatizing language, compared with 6.3% of responses from LLMs with prompt engineering.
The effect was seen across all 14 models tested, although some models were more likely than others to use stigmatizing terms.
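As a back-of-the-envelope check on how the 88% figure relates to those percentages, the short calculation below treats 88% as a reduction in the odds of stigmatizing language; that interpretation is our assumption, since the study's exact statistical model is not described here.

```python
# Quick arithmetic on the reported figures (35.4% vs 6.3%). Reading the
# 88% figure as an odds-based reduction is an assumption for illustration.
without_prompt = 0.354  # share of stigmatizing responses, no prompt engineering
with_prompt = 0.063     # share with prompt engineering

relative_reduction = 1 - with_prompt / without_prompt
odds_without = without_prompt / (1 - without_prompt)
odds_with = with_prompt / (1 - with_prompt)
odds_reduction = 1 - odds_with / odds_without

print(f"Relative reduction in rate: {relative_reduction:.0%}")  # ~82%
print(f"Reduction in odds:          {odds_reduction:.0%}")      # ~88%
```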
AI Models Less Stigmatizing with Prompt Engineering
Future directions include developing chatbots that avoid stigmatizing language to improve patient engagement and outcomes. The authors advise clinicians to proofread LLM-generated content for stigmatizing language before using it in patient interactions and to offer alternative, patient-centered wording.
The authors note that future research should involve patients and family members with lived experience to refine definitions and lexicons of stigmatizing language, ensuring LLM outputs align with the needs of those most affected.
This study reinforces the need to prioritize language in patient care as LLMs become increasingly used in healthcare communication.
Reference:
- Stigmatizing Language in Large Language Models for Alcohol and Substance Use Disorders: A Multimodel Evaluation and Prompt Engineering Approach - (https://pubmed.ncbi.nlm.nih.gov/40705021/)
Source: Eurekalert