Some kinds of infection data may not be suitable for use as a performance measure and may not be comparable across hospitals, according to a new study.
Published in the leading US policy journal Milbank Quarterly, the research found huge variability in how English hospitals collected, recorded and reported their rates of central line infections to a patient safety programme. The study was funded by the Health Foundation, a major UK charitable foundation aiming to improve quality of care.
"Central line infections occur in tubes used in treating seriously ill patients. These infections are largely preventable, and hospitals need to be able to monitor how well they are doing in controlling them," said study author Professor Mary Dixon-Woods from University of Leicester. "But because hospitals don't use the same methods to generate the data, using their reported rates to produce league tables of performance or to impose financial sanctions, as happens in the US, is probably not appropriate."
"Although hospitals were given clear, standardised definitions to use, many laboratories did not have the tools to make the most definitive assessment of where infections were coming from in the patient's body," said Julian Bion, Professor of Intensive Care Medicine at University of Birmingham. "That meant they had to use clinical judgement to decide whether any infection was due to a central line. Once you are relying on judgement, you will get variation."
The study dismissed 'gaming' as the explanation for the variations the researchers found. "Some previous studies have reported deliberate manipulation of performance data to 'look good'," said Professor Dixon-Woods. "We found very little evidence of this. We did find that doctors were comfortable making a clinical decision that a patient might have a central line infection for the purposes of treating them, but they wanted better evidence if they were going to report an infection externally. That meant that sometimes a patient who was treated as having an infection while in hospital was not counted as having an infection when data were reported. Doctors also varied in how many blood samples they sent off for laboratory analysis."
Professor Bion commented that these findings have important lessons for those involved in improving the quality of patient care. He said: "The study shows that if we are going to use data produced by hospitals to guide quality improvement, data collection systems must be carefully designed and operated, fully integrated with clinical priorities, and impose minimum burden. We need data that clinicians and organizations believe in. And until we have that, we need to be very careful about comparing performance across hospitals. We certainly need to avoid basing pay-for-performance schemes on central line data, because it's now becoming clear that these schemes have many unintended consequences."
Dr Elaine Maxwell, Assistant Director at the Health Foundation, commented: "This research highlights the very real difficulties in measuring safety in healthcare. Adverse events such as infections are increasingly being used in performance management. This study demonstrates that accurate data collection is more complex than may have at first been imagined. The Health Foundation is continuing to support the development of reliable and sensitive safety measures to reflect sustained improvement."