The recent economic stimulus package allocating $1.1 billion to comparative effectiveness research is generating considerable buzz.
But a researcher at the Stanford University School of Medicine is asking policymakers to take a step back and make sure that the plans for comparative effectiveness research go deep enough to make a difference.
"The discussion that has taken place has been quite superficial and hasn't covered the range of changes that are needed for this type of research to be meaningful," said Randall Stafford, MD, PhD, associate professor of medicine at the Stanford Prevention Research Center.
The Obama administration sees comparative effectiveness research as a key strategy for reforming the nation's health-care system. The research would help identify the treatment options that are most effective for a given condition. Many stakeholders, including health-care providers, consumer groups and professional organizations, have also expressed enthusiasm at the prospect of generating new knowledge about how one treatment's effectiveness compares with another's.
Despite this potential, Stafford and collaborator Caleb Alexander, MD, assistant professor of medicine at the University of Chicago, highlight several challenges that must be met if comparative effectiveness research is to be useful in significantly improving the quality and affordability of health care. "This is really a plea to delve into the details, to get beyond the slogan of 'comparative effectiveness' and to not lose the momentum gained to date," Stafford said.
Stafford and Alexander's commentary, which will appear in the June 17 issue of the Journal of the American Medical Association, outlines five ways to put more meat on the bones of the discussions surrounding comparative effectiveness research:
*Generate the data more rapidly. The pain reliever Vioxx is the best-known example of a drug originally aimed at a narrow patient population that became widely prescribed before evidence of harm was discovered. Ultimately, Vioxx was pulled from the market, but not before millions of patients were exposed to its risks without substantial benefit. To prevent similar mishaps, Alexander said that obtaining comparative-effectiveness information earlier in the life of a new drug or device is a priority.
*Link the evidence to strategies proven to modify how physicians practice medicine. Simply making the data available to physicians and patients isn't enough. "Unfortunately, we still want to believe that information alone will change physician practice. Years of research, however, suggest there are more potent influences on physicians, including their local culture of practice," Stafford said.
*Broaden the agenda beyond drugs and devices. "It can't just be a comparison of this drug vs. that drug," Stafford said. "This misses important aspects of practice and ends up exempting high-cost procedures from scrutiny." Researchers should focus on comparisons that include lifestyle modifications, such as diet and exercise, as well as alternative therapies that patients often implement on their own. In addition, research is needed on the most effective ways of delivering care. For instance, some studies show better chronic disease outcomes with nurse case managers compared with physicians working alone.
*Alter the regulatory environment. "Comparing a new drug against placebo doesn't make much sense if our goal is to compare different clinical strategies," said Stafford, noting that placebo-controlled trials are the standard for drug approval by the U.S. Food and Drug Administration. The threshold must be raised for comparative effectiveness to work, he said. Stafford and Alexander suggested that if a new medication isn't tested head-to-head against similar drugs, its labeling could be changed to say, for instance, "This drug has not been found to be superior to the other calcium-channel blockers in the treatment of hypertension." This requirement would provide useful information to patients and physicians, as well as give manufacturers an incentive to perform more drug vs. drug clinical trials.
*Consider the cost implications. This is controversial because many fear that it may lead to restrictions on higher-cost treatments, regardless of the treatment's effectiveness. Some proponents of comparative effectiveness research have suggested not including cost as a factor. But as Stafford and Alexander write in their commentary, "What good is comparative effectiveness if it cannot be used to discern anything about value to clinicians, insurers, patients and society?"
The discussions surrounding how to incorporate comparative effectiveness research into the health-care reform effort are still in the early stages, which is why Stafford and Alexander hope their commentary will prod policymakers to ensure that the discussions are as comprehensive as possible.
Stafford said previous reform attempts, such as the drive to develop clinical guidelines in the 1990s for treating specific illnesses, failed because "our approach to implementing them was simplistic and not sophisticated enough. Unless we get it right with comparative effectiveness, it's at risk of a similar fate."
Stafford and Alexander support efforts to help physicians and patients make better use of research results in determining which drugs, devices and other treatment options are the most effective. "The drive for comparative effectiveness has tremendous appeal. Who could argue against the idea of generating knowledge about what works and what doesn't?" said Alexander. But they say broader changes are needed in the health-care system — including the FDA's process for approving new medications and devices — to yield the right kind of data for such comparisons, and to ensure that patients, physicians and medical organizations make the wisest possible use of their health-care dollars.
"Unless we start spending our resources more efficiently, our health-care system won't survive, let alone fully cover all of the people who are now uninsured or underinsured," Stafford said.