The Power of Words: LLMs' Influence on Substance Use Disorder Stigma

by Dr. Sakshi Singh on Jul 31 2025 10:22 AM

When LLMs (large language models) use stigmatizing language, it can make patients feel judged, leading to decreased trust in clinicians.

As AI becomes more prevalent in healthcare communication, a new study reveals that large language models (LLMs) can reinforce harmful stereotypes, with over 35% of responses on substance use disorders using stigmatizing language (1).
However, the study also reveals that using targeted prompts can significantly decrease the use of stigmatizing language in LLM responses, offering a potential solution to this issue.

Did You Know

Prompting matters: without prompt engineering, 35.4% of LLM responses contained stigmatizing language, compared with just 6.3% when prompt engineering was used.

Patient-Centered Communication: A Key to Better Health Outcomes

“Using patient-centered language can build trust and improve patient engagement and outcomes. It tells patients we care about them and want to help,” said corresponding author Wei Zhang, MD, PhD, an assistant professor of Medicine in the Division of Gastroenterology at Mass General Hospital, a founding member of the Mass General Brigham healthcare system.

“Stigmatizing language, even through LLMs, may make patients feel judged and could cause a loss of trust in clinicians.”

LLM responses are generated from everyday language, which often contains biased or harmful terms for patients.

Prompt engineering is the practice of strategically crafting input instructions to guide model outputs towards non-stigmatizing language, and it can be used to steer LLMs to employ more inclusive language for patients.
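As a rough illustration (not the study's actual prompts or setup), here is a minimal sketch of how a guiding instruction can be prepended to a patient question. It assumes the OpenAI Python client as one example interface; the wording of the system prompt and the model choice are illustrative assumptions.

from openai import OpenAI

client = OpenAI()  # assumes an API key is configured in the environment

# Illustrative guiding instruction; not the prompt wording used in the study.
SYSTEM_PROMPT = (
    "When discussing alcohol or substance use, use person-first, "
    "non-stigmatizing language, for example 'person with alcohol use "
    "disorder' rather than 'alcoholic' and 'substance use' rather than "
    "'substance abuse'."
)

def answer_patient_question(question: str) -> str:
    """Return an LLM answer steered towards patient-centered wording."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice, not one named by the study
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

Without the system message, the same question reaches the model unguided, which is the condition in which the study observed stigmatizing language far more often.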

This study showed that employing prompt engineering within LLMs reduced the likelihood of stigmatizing language by 88%.

For their study, the authors tested 14 LLMs on 60 generated clinically relevant prompts related to alcohol use disorder (AUD), alcohol-associated liver disease (ALD), and substance use disorder (SUD). Mass General Brigham physicians then assessed the responses for stigmatizing language using guidelines from the National Institute on Drug Abuse and the National Institute on Alcohol Abuse and Alcoholism (both organizations’ official names still contain outdated and stigmatizing terminology).
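For readers curious how a multimodel comparison like this might be organised in code, the following is a simplified sketch. The model names, prompts, and keyword screen are placeholders; in the study, physicians rated the responses against NIDA and NIAAA guidance rather than relying on automatic matching.

# Simplified sketch of a multimodel evaluation loop. Model names, prompts and
# the flagged-term list are placeholders; in the study, physicians rated the
# responses against NIDA/NIAAA language guidance.

PROMPTS = [
    "What should I tell a patient newly diagnosed with alcohol-associated liver disease?",
    # ...the study used 60 clinically relevant prompts
]
MODELS = ["model_a", "model_b"]  # the study compared 14 LLMs

FLAGGED_TERMS = ["alcoholic", "addict", "substance abuser", "drug habit"]

def generate(model: str, prompt: str) -> str:
    # Placeholder: replace with a real call to the given model's API.
    return "Example reply; a real implementation would query " + model

def contains_flagged_term(text: str) -> bool:
    """Crude keyword screen, used here only to illustrate the bookkeeping."""
    lowered = text.lower()
    return any(term in lowered for term in FLAGGED_TERMS)

rates = {}
for model in MODELS:
    flagged = sum(contains_flagged_term(generate(model, p)) for p in PROMPTS)
    rates[model] = flagged / len(PROMPTS)

print(rates)  # share of responses flagged per model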

Their results indicated that 35.4% of responses from LLMs without prompt engineering contained stigmatizing language, compared with 6.3% of responses generated with prompt engineering.

Additionally, results indicated that longer responses were associated with a higher likelihood of stigmatizing language than shorter responses.

The effect was seen across all 14 models tested, although some models were more likely than others to use stigmatizing terms.

AI Models Less Stigmatizing with Prompt Engineering

Future directions include developing chatbots that avoid stigmatizing language to improve patient engagement and outcomes.

The authors advise clinicians to proofread LLM-generated content to avoid stigmatizing language before using it in patient interactions and to offer alternative, patient-centered language options.
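By way of illustration, a handful of widely recommended substitutions (drawn from person-first guidance such as NIDA's "Words Matter" resource) can be kept as a simple lookup for that proofreading step. The helper below is a sketch, not a tool described in the study.

# Illustrative lookup of patient-centered alternatives, based on commonly
# recommended person-first phrasing (e.g. NIDA's "Words Matter" guidance).
# The suggest_alternatives() helper is a sketch, not a tool from the study.

PATIENT_CENTERED = {
    "addict": "person with a substance use disorder",
    "alcoholic": "person with alcohol use disorder",
    "substance abuse": "substance use",
    "abuser": "person who uses drugs",
    "relapse": "recurrence of use",
}

def suggest_alternatives(draft: str) -> list[str]:
    """Return proofreading suggestions for stigmatizing terms found in a draft."""
    lowered = draft.lower()
    return [
        f"Consider replacing '{term}' with '{alternative}'."
        for term, alternative in PATIENT_CENTERED.items()
        if term in lowered
    ]

print(suggest_alternatives("The patient is an alcoholic with a history of substance abuse."))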

The authors note that future research should involve patients and family members with lived experience to refine definitions and lexicons of stigmatizing language, ensuring LLM outputs align with the needs of those most affected.

This study reinforces the need to prioritize language in patient care as LLMs become increasingly used in healthcare communication.

Speak with Empathy, Heal with Compassion

Reference:
  1. Stigmatizing Language in Large Language Models for Alcohol and Substance Use Disorders: A Multimodel Evaluation and Prompt Engineering Approach - (https://pubmed.ncbi.nlm.nih.gov/40705021/)


Source-Eurekalert


