The method used to measure the quality of hospital care is prone to bias and is potentially misleading, according to a study published on bmj.com today.
The hospital standardised mortality ratio (SMR) is used to measure the quality and safety of hospital care in the United Kingdom and around the world. The ratio identifies hospitals where more patients die than would be expected on the basis of their case mix ('bad' hospitals) and hospitals with fewer deaths than expected ('good' hospitals).
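The ratio itself is simple arithmetic: observed deaths divided by the deaths a case-mix model predicts, conventionally scaled so that 100 means "as expected". A minimal sketch (all figures invented for illustration; this is not the Dr Foster model itself):

```python
# Minimal sketch of how a standardised mortality ratio (SMR) is formed.
# Expected deaths come from a case-mix model; the numbers here are invented.

def smr(observed_deaths: int, expected_deaths: float) -> float:
    """SMR = observed / expected, scaled to 100 (100 = deaths as expected)."""
    return 100.0 * observed_deaths / expected_deaths

# A hospital with 130 deaths where case mix predicted 100 looks 'bad' (SMR 130);
# one with 90 deaths against 100 expected looks 'good' (SMR 90).
print(smr(130, 100.0))  # 130.0
print(smr(90, 100.0))   # 90.0
```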
The validity of this ratio has been criticised because it may not adequately adjust for case mix or account for measurement errors between hospitals. Yet it continues to be used to compare quality of care.
Prompted by these concerns, researchers at the University of Birmingham analysed the methods used by Dr Foster Intelligence, a private-public partnership company which annually publishes SMR league tables for English hospitals.
They explain that valid case-mix adjustment requires the relationships between case-mix variables (e.g. patient age, diagnosis, emergency admission) and mortality to be constant across all the hospitals being compared. When this requirement is not met, case-mix adjustment is prone to bias and potentially misleading - a phenomenon called the "constant risk fallacy."
They examined seven variables routinely used by Dr Foster in four acute NHS hospitals in the West Midlands with case-mix adjusted SMRs ranging from 88 to 140.
They found that three variables - age, sex and deprivation - were not prone to the constant risk fallacy. However, large, statistically significant interactions were seen for the other four: emergency admission, comorbidity, primary diagnosis, and the number of emergency admissions in the previous year.
For two variables - comorbidity and emergency admission - the researchers found credible evidence to suggest that they were prone to the constant risk fallacy due to differences in clinical coding and admission practices across hospitals. These two variables are therefore unsafe to use in case-mix adjustment because their inclusion may actually increase the very bias that case-mix adjustment aims to reduce, say the authors.
They conclude: "Our findings suggest that the current Dr Foster method is prone to bias and any claims that variations in hospital SMRs reflect differences in quality of care are less than credible."
These findings undermine the credibility of standardised mortality ratios and indicate that their role in labelling hospitals as good or bad is unjustified, says John Wright from the Bradford Institute for Health Research, in an accompanying editorial. Publicly reported quality measures require accuracy and precision to prevent unfair stigmatisation and loss of trust.
Rather than advocating the primacy of one quality metric over another, he suggests we should continue to identify a range of indicators that are appropriate for different clinical contexts. We should then concentrate on how these can support internal learning rather than legitimising unfounded and often counterproductive external judgements, he concludes.