A new study published in JAMA reveals that estimated treatment outcomes in meta-analyses depend significantly on the analytic strategy used.

Agnes Dechartres, M.D., Ph.D., of the Centre de Recherche Epidemiologie et Statistique, INSERM U1153, Paris, and colleagues compared treatment outcomes estimated by meta-analysis of all trials and several alternative strategies for analysis: single most precise trial (i.e., trial with the narrowest confidence interval), meta-analysis restricted to the 25 percent largest trials, limit meta-analysis (a meta-analysis model adjusted for small-study effect), and meta-analysis restricted to trials at low overall risk of bias. The researchers included 163 meta-analyses published between 2008 and 2010 in high-impact-factor journals and between 2011 and 2013 in the Cochrane Database of Systematic Reviews: 92 (705 RCTs) with subjective outcomes and 71 (535 RCTs) with objective outcomes.
The researchers found that treatment outcome estimates differed depending on the analytic strategy used, with treatment outcomes frequently being larger with meta-analysis of all trials than with the single most precise trial, meta-analysis of the largest trials, or limit meta-analysis. The difference in treatment outcomes between these strategies was substantial in 47 of 92 (51 percent) meta-analyses of subjective outcomes and in 28 of 71 (39 percent) meta-analyses of objective outcomes. The authors did not find any difference in treatment outcomes by overall risk of bias.
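The contrast between the strategies can be sketched in a few lines of code. The following is a minimal illustration, not the authors' method: it pools hypothetical trial estimates with a standard inverse-variance fixed-effect model and compares the result to the single most precise trial (the one with the narrowest confidence interval, i.e., the smallest standard error). The trial data are invented for illustration and are not from the Dechartres et al. study.

```python
import math

# Hypothetical trials as (log odds ratio, standard error) pairs.
# These numbers are made up for illustration only.
trials = [(-0.50, 0.40), (-0.35, 0.25), (-0.10, 0.08), (-0.60, 0.30)]

def fixed_effect_meta(trials):
    """Inverse-variance fixed-effect pooled estimate over all trials."""
    weights = [1 / se**2 for _, se in trials]
    pooled = sum(w * est for (est, _), w in zip(trials, weights)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))
    return pooled, pooled_se

def most_precise_trial(trials):
    """Single trial with the narrowest confidence interval (smallest SE)."""
    return min(trials, key=lambda t: t[1])

pooled, pooled_se = fixed_effect_meta(trials)
single_est, single_se = most_precise_trial(trials)
print(f"Meta-analysis of all trials: {pooled:.3f} (SE {pooled_se:.3f})")
print(f"Single most precise trial:   {single_est:.3f} (SE {single_se:.3f})")
```

With these invented numbers, the pooled estimate is larger in magnitude than the single most precise trial's estimate, mirroring the pattern the study reports: smaller, less precise trials pull the pooled result away from the biggest, tightest trial.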
"In this study, we compared meta-analysis of all trials with several 'bestevidence' alternative strategies and found that estimated treatment outcomes differed depending on the strategy used. We cannot say which strategy is the best because … we cannot know with 100 percent certainty the truth in any research question. Nevertheless, our results raise important questions about meta-analyses and outline the need to rethink certain principles," the researchers write.
"We recommend that authors of meta-analyses systematically assess the robustness of their results by performing sensitivity analyses. We suggest the comparison of the meta-analysis result to the result for the single most precise trial or metaanalysis of the largest trials and careful interpretation of the meta-analysis result if they disagree."(doi:10.1001/jama.2014.8166;
Editor's Note: Please see the article for additional information, including other authors, author contributions and affiliations, financial disclosures, funding and support, etc.
Jesse A. Berlin, Sc.D., of Johnson &amp; Johnson, Titusville, N.J., and Robert M. Golub, M.D., Deputy Editor, JAMA, write in an accompanying editorial that "findings such as those in the study by Dechartres et al reinforce concerns that journals and readers have about meta-analysis as a study design. Those findings deserve consideration not only in the planning of the studies but in the journal peer review and evaluation. They also reinforce the need for circumspection in study interpretation."
Editor's Note: The authors have completed and submitted the ICMJE Form for Disclosure of Potential Conflicts of Interest and none were reported.
Source: EurekAlert