Clinical practice guidelines and systematic reviews of the evidence base for health care services are supposed to offer health care providers, patients, and organizations authoritative guidance on the comparative benefits and harms of various care options, but too often they are of uncertain or poor quality. There are no universally accepted standards for developing systematic reviews and clinical practice guidelines, leading to variability in the handling of conflicts of interest, appraisals of evidence, and the rigor of the evaluations. Two new reports from the Institute of Medicine recommend standards to enhance the quality and reliability of these important tools for informing health care decisions.
Clinical Practice Guidelines We Can Trust recommends eight standards to ensure the objective, transparent development of trustworthy guidelines. Several problems hinder providers' and others' ability to determine which among thousands of sometimes competing guidelines offer reliable clinical recommendations. Finding What Works in Health Care: Standards for Systematic Reviews recommends 21 standards to ensure objective, transparent, and scientifically valid reviews. Poor-quality reviews can lead clinicians to the wrong conclusions and ultimately to inappropriate treatment decisions.
"This report presents the 'gold standard' to which those who conduct systematic reviews should aspire to achieve the most reliable and useful products," said Alfred O. Berg, professor of family medicine, University of Washington School of Medicine, Seattle, and chair of the committee that wrote the report on systematic reviews. "We recognize that it will take an investment of resources and time to achieve such high standards, but they should be adopted to minimize the chances that important health decisions are based on information that may be biased or erroneous."
To prevent actual or perceived conflicts of interest from eroding trust in clinical practice guidelines, members of guideline development groups should, whenever possible, be free of intellectual, institutional, financial, and other forms of conflicts, says the guidelines report. However, if a group cannot perform its work without conflicted individuals, they should make up only a minority of the members. Those who fund guideline development work should have no role in the development process. Similarly, individuals with clear financial conflicts of interest, as well as those with professional or intellectual biases that would lessen an evaluation's credibility, should be excluded from the teams that conduct systematic reviews, the report on reviews says.
Getting input from consumers, health professionals, insurers, and other intended users can boost the quality of reviews and guidelines and make them more relevant. Guideline development groups should include a current or former patient and a patient advocate or representative of a consumer organization. Systematic reviews should include a method to collect information from individuals with relevant perspectives and expertise. Individuals providing input should publicly acknowledge their potential biases and conflicts and be excluded from the process if their participation would diminish the evaluation's credibility.
People expect clinical practice guidelines to provide an accurate, fair account of the potential benefits and harms of various health care options and they expect systematic reviews to provide a complete picture of all that is known about an intervention. Because guideline developers often have to make subjective judgments about evidence, especially when it is low-quality or limited, they should explicitly describe the part that value judgments, theory, or clinical experience played in their recommendations, the guidelines report says. They should explain the reasoning underlying each recommendation they make, including their assessment of the quality, completeness, and consistency of the available evidence. Teams conducting systematic reviews should not limit their evaluations to the published literature or large databases because negative findings sometimes go unpublished and these tools provide only a partial picture of the evidence, the report on reviews says. Reviewers should seek out relevant unpublished information. And they should clearly describe the team's methodology, selection criteria, and assessment of the evidence, including what remains unknown about the topic.