Researchers should search beyond peer-reviewed journal publications for information on the adverse effects of treatments evaluated in clinical trials.
The adverse effects (AEs) of clinical treatments appear to be under-reported in peer-reviewed journal articles documenting the results of clinical trials, according to a new systematic review. "There is strong evidence that much of the information on adverse events remains unpublished and that the number and range of adverse events is higher in unpublished than in published versions of the same study," write Su Golder, PhD, FRSA, from the University of York, United Kingdom, and colleagues.
"The extent of under-reported data prevents researchers, clinicians, and patients from gaining a full understanding of the harms of an intervention, and this may lead to erroneous judgments on its perceived benefit or harm."
The researchers published the results of their study online in PLoS Medicine; the work was supported by the National Institute for Health Research. The authors note that serious concerns have emerged about publication bias and outcome reporting bias in clinical trials, which can lead to overestimation of treatment benefits and under-reporting of negative results.
Previous studies have found significant under-reporting of AEs in published trial data compared with unpublished data from the same trials; however, the overall extent of this under-reporting in peer-reviewed journal articles had not been quantified.
Dr Golder and colleagues conducted a systematic review to quantify the under-reporting of AEs in peer-reviewed publications documenting the results of clinical trials, compared with unpublished sources. The authors also wanted to measure the effect of this under-reporting on systematic reviews of AEs.
The researchers searched several databases, as well as other sources, including hand-searches of key journals, unpublished studies, and the Cochrane Library. They included studies that quantified the reporting of AEs of any medical intervention in both published and unpublished formats.
Dr Golder and colleagues identified 28 studies from 31 publications that met the inclusion criteria.
Eleven studies performed matched comparisons of AEs in published and unpublished documents. "All the studies, without exception, identified a higher number of all or all serious adverse events in the unpublished versions compared to the published version," the authors write.
Specifically, if readers had relied only on published documents to evaluate clinical trials' data, they would have missed 43% to 100% (median, 64%) of AEs associated with medical treatments, including 2% to 100% of serious AEs.
Further, of 24 comparisons of named AEs, such as death or respiratory AEs, 18 showed that unpublished documents reported more of the named AEs than publications did. Two other studies also showed that matched unpublished documents reported substantially more types of AEs than published documents; in one of these two studies, 67.6% of the serious AEs and 93.3% of the fatal AEs reported in the unpublished company trial reports were not included in the published documents.
Dr Golder and colleagues identified several examples of meta-analyses that pooled published AE data both with and without unpublished AE data. In most instances, inclusion of the unpublished data narrowed the 95% confidence interval for the pooled risk estimate of an AE without dramatically changing the direction or magnitude of the risk. In several instances, however, inclusion of the unpublished data made the pooled risk estimate statistically significant, whereas the estimate based on published data alone appeared statistically non-significant.
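To illustrate the statistical mechanism behind that finding (this is not the review's actual data or method), the following is a minimal sketch of a fixed-effect, inverse-variance meta-analysis with invented risk ratios: adding unpublished studies increases the total weight, which narrows the pooled confidence interval and can tip a borderline estimate into statistical significance. All numbers and the pooled_log_rr helper are hypothetical.

```python
import math

def pooled_log_rr(estimates):
    """Fixed-effect inverse-variance pooling of log risk ratios.

    estimates: list of (log_rr, se) pairs, one per study.
    Returns the pooled log risk ratio and its standard error.
    """
    weights = [1.0 / se ** 2 for _, se in estimates]
    pooled = sum(w * lrr for (lrr, _), w in zip(estimates, weights)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

def report(label, estimates):
    lrr, se = pooled_log_rr(estimates)
    lo, hi = lrr - 1.96 * se, lrr + 1.96 * se  # 95% CI on the log scale
    print(f"{label}: RR = {math.exp(lrr):.2f} "
          f"(95% CI {math.exp(lo):.2f} to {math.exp(hi):.2f})")

# Hypothetical published trials: elevated risk, but wide intervals.
published = [(math.log(1.4), 0.30), (math.log(1.2), 0.35)]

# Hypothetical unpublished reports of the same intervention.
unpublished = [(math.log(1.3), 0.20), (math.log(1.5), 0.30)]

report("Published only", published)                          # RR 1.31 (0.84 to 2.05)
report("Published + unpublished", published + unpublished)   # RR 1.34 (1.03 to 1.75)
```

With only the two published studies, the interval crosses a risk ratio of 1 (no effect); adding the unpublished data barely moves the point estimate but shrinks the interval enough that the excess risk becomes statistically significant, mirroring the pattern the review describes.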
The authors acknowledge several limitations of this review and note that the included studies may themselves suffer from publication bias, whereby significant differences between published and unpublished data are more likely to be published.
Nevertheless, these findings suggest that under-reporting of AEs, selective outcome reporting, and publication bias represent significant threats to the validity of systematic reviews and meta-analyses of harms associated with medical treatments.
As a consequence, the authors say researchers should search beyond peer-reviewed journal publications for information on AEs associated with medical treatments. They also highlight the need for the drug industry to release full data on AEs to provide a more complete picture to healthcare providers, policy makers, and patients.
"Our findings suggest that it will not be possible to develop a complete understanding of the harms of an intervention unless urgent steps are taken to facilitate access to unpublished data," they conclude.
Source: Medindia