Annals of Questionable Evidence: a new study reveals substantial publication bias in trials of anti-depressants

Part IV of the ongoing Homeopathy series will have to wait a day or two, because it is superseded by a recent, comment-worthy publication. Nevertheless, “H series” fans will find here a bit of grist for that mill, too.

An important role for this blog is to discuss problems of interpreting data from clinical studies. Academic medicine has committed itself, on the whole, to scientific rigor—to the extent that this is possible in messy, clinical (especially human) trials. Several tools have been proposed, and to a varying extent used, to enhance the rigor of clinical research and its reporting. One of those tools is registering clinical trials prior to recruiting subjects. Registration would stipulate a trial’s a priori hypothesis or hypotheses, design, planned endpoints, and planned statistical methods, among other things. This would guard against several problems: publication bias—the tendency for some trials, usually “negative” ones, to go unreported; selective reporting of the results of a trial, if some are pleasing but others are not; and post hoc data analysis—finding data after the fact to suggest a novel hypothesis that will falsely be portrayed as an a priori hypothesis. Publication bias is also known as “selective publication” or the “file drawer problem”; post hoc analysis is also known as “data dredging” or “HARKing” (Hypothesizing After the Results are Known).

An article in the Jan. 17 issue of the New England Journal of Medicine demonstrates the usefulness of a trial registry:

Selective Publication of Antidepressant Trials and Its Influence on Apparent Efficacy

Erick H. Turner, M.D., Annette M. Matthews, M.D., Eftihia Linardatos, B.S., Robert A. Tell, L.C.S.W., and Robert Rosenthal, Ph.D.

For years, the FDA has required Investigational New Drug (IND) applications from the sponsors of trials intended to support FDA-approval to market new drugs or to market old drugs with new indications. The IND application includes the proposed trial’s protocol. When the trial is finished, the sponsor must submit a New Drug Application (NDA) that includes both the raw data from the trial and the investigators’ conclusions. The FDA conducts its own analysis of the data and compares that to the conclusions of the investigators. These applications thus fulfill several of the requirements necessary to accomplish the policing function described above.

In the past, the information that the FDA received was not made public. Recently that has changed, and Turner and co-authors took advantage. They looked at 74 FDA-registered trials of 12 antidepressant agents that eventually won approval, involving 12,564 subjects, conducted between 1987 and 2004. They compared the FDA findings with the published reports of the same trials. What they found was striking: of the 38 studies that the FDA viewed as positive, 37 were published. But of the 36 studies that the FDA viewed as “having negative or questionable results,” 22 were never published and 11 were “published in a way that, in our opinion, conveyed a positive outcome.” All in all, “According to the published literature, it appeared that 94% of the trials conducted were positive. By contrast, the FDA analysis showed that 51% were positive.”
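The headline percentages are easy to verify from the trial counts reported in the NEJM paper itself. A quick back-of-the-envelope check (counts taken from Turner et al. 2008; the variable names are my own):

```python
# Back-of-the-envelope check of the Turner et al. (2008) headline figures.
# Trial counts below are taken from the NEJM paper.
fda_positive = 38        # trials the FDA judged positive
fda_negative = 36        # trials the FDA judged negative or questionable
published_positive = 37  # FDA-positive trials that reached publication
published_spun = 11      # FDA-negative trials "conveying a positive outcome"
published_negative = 3   # FDA-negative trials published as negative

total = fda_positive + fda_negative                                   # 74
published = published_positive + published_spun + published_negative  # 51

# Share of the published literature that appears positive:
apparent = (published_positive + published_spun) / published
# Share of all registered trials the FDA actually judged positive:
actual = fda_positive / total

print(f"apparent success rate: {apparent:.0%}")  # 94%
print(f"FDA success rate:      {actual:.0%}")    # 51%
```

Without the FDA registry as a denominator, only the 94% figure would be visible to readers of the journals.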

The authors could not tell whether the publication bias was due more to investigators not submitting manuscripts or more to journals rejecting those manuscripts. Other work suggests that the former is the predominant reason. The Turner study was also unable to discern whether industry sponsors of the trials may have influenced decisions to submit manuscripts. Other evidence, including the infamous Olivieri and Vioxx cases, suggests that this may have been a significant factor. Some research contracts between academic investigators and their industry sponsors have included “gag clauses” requiring investigators to obtain permission from sponsors before submitting manuscripts.

Non-publication of human research is unethical. The most important reasons are that it exploits human subjects for no purpose, and that by skewing data it may encourage the medical profession and the public to pursue inferior or even dangerous treatments. Many in academic medicine consider it a form of scientific misconduct. Publication bias is only one of many reasons that industry-sponsored research may yield erroneous conclusions. To be fair, another recent study found that this may be less pervasive than some think.

Science and Clinical Research

There are many other pitfalls in clinical research. Among the most important are the nearly universal misuse of “frequentist” statistics and its partner-in-crime, the lack of formal estimates of prior probability. Citing such pitfalls together with publication bias and other biases, a learned author has argued persuasively that “Most Published Research Findings are False.” This by no means supports the Post-Modern conclusion, to which a few readers might be drawn, that nothing can really be “known” in an objective, externally-verifiable way. On the contrary: some published research findings are true, and the scientific weeding process inevitably identifies them. Thus we know many objective, externally verifiable truths about health and disease, most of which we have learned only in the past 150 years or so—probably less than 1% of the time that our species has had enough intelligence to do so.
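The role of prior probability in that argument can be made concrete with a simple positive-predictive-value calculation: given a significance threshold and a study’s power, the chance that a “statistically significant” finding reflects a true effect depends heavily on how plausible the hypothesis was to begin with. The numbers below are illustrative assumptions of mine, not figures from the article:

```python
# How often is a "significant" result actually true? A sketch of the
# positive-predictive-value argument; all inputs are illustrative.

def ppv(prior, alpha=0.05, power=0.80):
    """Probability that a statistically significant finding is true,
    given the prior probability that the hypothesis is correct."""
    true_pos = power * prior          # real effect, and the study detects it
    false_pos = alpha * (1 - prior)   # no effect, but p < alpha by chance
    return true_pos / (true_pos + false_pos)

# A plausible hypothesis (1-in-10 prior) vs. a long shot (1-in-100):
print(f"prior 10%: PPV = {ppv(0.10):.0%}")  # ~64%
print(f"prior  1%: PPV = {ppv(0.01):.0%}")  # ~14%
```

With implausible hypotheses—homeopathy being the obvious example for this blog—even well-conducted trials with p < 0.05 will usually be false positives, and publication bias then amplifies exactly those results.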

We know a lot about pulmonary, cardiac, endocrine, gastrointestinal, renal, and neurophysiology. We know that there is an immune system and we have devised ways of inducing it to protect us from many of the plagues that afflicted our less knowledgeable ancestors. We know how to fashion molecules that can interact with physiologic processes; we already have quite a few that are useful, and we will develop many more. We can even predict with some certainty which conditions will be amenable to future pharmacologic interventions (obesity) and which will not (Down’s syndrome). We know exactly what it is that gives someone sickle cell disease, although we haven’t yet figured out how to cure it. We know that the production of a special glycoprotein by the parietal cells in the gastric mucosa is necessary for the absorption of dietary vitamin B-12. We know that the absence of that protein, if undiagnosed and untreated, can lead to anemia, severe neurologic problems, and dementia, and we know how to reverse those sequelae. We know that malaria is caused by certain protozoa and that mosquitoes are their vectors. We know that quinine cures malaria by killing those protozoa, and we know how quinine does this. We know that many of those protozoa are now resistant to quinine, and we know that those resistant strains have flourished because of selection pressure caused by us. We know that AIDS is caused by an RNA virus, we know how it spreads, and we can suppress it to a considerable extent in most patients, even if we can’t yet eradicate it.

In addition to those few, arbitrary examples, we know vastly more about human biology and medicine. We know enough to fill numerous, thick textbooks, and we are learning at breakneck speed—even if we are still, by any reasonable estimate, in the early stages of discovery. The goal for clinical research, then, is to strive for scientific rigor. The author of the article cited above offers several suggestions to “improve the situation,” and we on this blog will be discussing those and more.

Posted in: Clinical Trials, Medical Academia, Medical Ethics, Science and Medicine


5 thoughts on “Annals of Questionable Evidence: a new study reveals substantial publication bias in trials of anti-depressants”

  1. daedalus2u says:

    I think this is a problem that comes from having journal editors and scientific peers be the only gatekeepers of what is allowed to go into the scientific literature. Keeping crap out of the scientific literature is important, but keeping non-crap out because the editors and peers don’t like it is a bad idea.

    I see the scientific literature as a communication from one scientist to another scientist. Nothing less, and nothing more. Trying to use the scientific literature for other things (such as for marketing drugs) is to misuse it and to turn it into something it isn’t good at.

    I think the emphasis put on “impact factors” by the journals is inappropriate self-promotion. There is a place in the literature for new studies which replicate other work. With electronic searching now, the whole breadth of the literature can be searched.

  2. Simon says:

    daedalus2u, I see what you are saying: it would be interesting to see how a Wiki-style journal would pan out. People could upload their own results and experiments and have them peer reviewed by other registered users. Decisions would have to be made as to who could do such reviews: it could be as strict as an appointed board of reviewers or an invitation system, or as loose as allowing anyone to review the data. An intermediate level of security could require only those with registered qualifications in the subject to be involved.

    This would inevitably invite much chaff but, like other Wikis, some interesting wheat might rise to the top that would have gone ignored in traditional journals. A massive danger would be providing credibility to unscientific research supported by reviewers with vested interests.

Comments are closed.