More than one in three group-based cancer trials used flawed statistical methodology to study the effects of an intervention, according to a study published Tuesday that reviewed 75 articles.
The review found that 34 articles, or 45 percent, used "appropriate" methods to analyze the results of group-randomized trials, in which groups rather than individuals are assigned at random, while 26 articles, or 35 percent, used "inappropriate" methodology, the researchers said in the Journal of the National Cancer Institute.
"We cannot say any specific studies are wrong. We can say that the analysis used in many of the papers suggests that some of them probably were overstating the significance of their findings," said David Murray, lead author of the review and chair of epidemiology in the College of Public Health at Ohio State University.
Six articles, or 8 percent, used a combination of appropriate and inappropriate methods, while nine articles, or 12 percent, did not contain enough information to judge whether the methodology was appropriate, the researchers said.
The researchers reviewed 75 articles in 41 medical journals published from 2002 to 2006.
Of the studies that used inappropriate methods, 88 percent reported statistically significant intervention effects that, because of the analysis flaws, could mislead scientists and policy-makers, the reviewers said.
"In science, generally, we allow for being wrong five percent of the time," Murray said in a statement.
"If you use the wrong analysis methods with this kind of study, you might be wrong half the time. We're not going to advance science if we're wrong half the time."
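Murray's "wrong half the time" point can be illustrated with a small simulation (our own sketch, not from the study): when groups, not individuals, are randomized, outcomes within a group are correlated, and an individual-level test that ignores this clustering rejects far more often than its nominal 5 percent rate. All parameters below (group counts, group size, intracluster correlation) are assumptions chosen for illustration.

```python
# Illustrative sketch, not from the article: simulate group-randomized
# trials with NO true intervention effect, then analyze them with an
# individual-level t-test that ignores clustering. Any "significant"
# result is a false positive, so the rejection rate should be ~5%.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def one_trial(groups_per_arm=10, n_per_group=50, icc=0.05):
    # ICC = sigma_b^2 / (sigma_b^2 + sigma_w^2); total variance set to 1.
    sigma_b = np.sqrt(icc)        # between-group standard deviation
    sigma_w = np.sqrt(1.0 - icc)  # within-group standard deviation

    def arm():
        # Each group shares a common random offset (the clustering).
        group_means = rng.normal(0.0, sigma_b, groups_per_arm)
        noise = rng.normal(0.0, sigma_w, (groups_per_arm, n_per_group))
        return (group_means[:, None] + noise).ravel()

    a, b = arm(), arm()
    # "Inappropriate" analysis: treat all individuals as independent.
    return stats.ttest_ind(a, b).pvalue < 0.05

false_positives = np.mean([one_trial() for _ in range(2000)])
print(f"False-positive rate at nominal 5% level: {false_positives:.0%}")
```

Even with a modest intracluster correlation of 0.05, the naive test's false-positive rate lands far above 5 percent, which is the kind of overstated significance the reviewers describe.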
But he emphasized that the researchers behind those studies did not willfully use inappropriate methods or try to skew trial results.
Murray and his colleagues urged researchers to collaborate with statisticians familiar with group-randomized designs, in which groups of individuals, rather than individuals themselves, are randomly assigned.
"Am I surprised by these findings? No, because we have done reviews in other areas and have seen similar patterns," Murray said.
"It's not worse in cancer than anywhere else, but it's also not better. What we're trying to do is simply raise the awareness of the research community that you need to attend to these special problems that we have with this kind of design."