Being Negative Is Not So Bad

A new study published in PLOS Biology looks at the potential magnitude and effect of publication bias in animal trials. Essentially, the authors conclude that there is a significant file-drawer effect – failure to publish negative studies – with animal studies and this impacts the translation of animal research to human clinical trials.

SBM is greatly concerned with the technology of medical science. On one level, the methods of individual studies need to be closely analyzed for rigor and bias. But we also go to great pains to dispel the myth that individual studies can tell us much about the practice of medicine.

Reliable conclusions come from interpreting the literature as a whole, and not just individual studies. Further, the whole of the literature is greater than the sum of individual studies – there are patterns and effects in the literature itself that need to be considered.

One big effect is the file-drawer effect, or publication bias – the tendency to publish positive studies more than negative studies. A study showing that a treatment works or has potential is often seen as doing more for the reputation of a journal and the careers of the scientists than negative studies. So studies with no measurable effect tend to languish unpublished.

Individual studies looking at an ineffective treatment (if we assume perfect research methodology) should vary around no net effect. If those studies that are positive by chance are more likely to be published than those that are neutral or negative, then any systematic review of the published literature is likely to find a falsely positive effect.

Of course, we do not live in a perfect world and many studies have imperfect methods and even hidden biases. So in reality there is likely to be a positive bias to the studies. This positive bias magnifies the positive publication bias.

There are attempts in the works to mitigate the problem of publication bias in the clinical literature. For example, trials involving human subjects can now be entered into a public registry before the trials are completed and the results known. This way reviewers can have access to all the data – not just the data researchers and journal editors deem worthy.

This new study seeks to explore if publication bias is similarly a problem with animal studies. The issues are similar to human trials. There is an ethical question, as sacrificing animals in research is justified by the data we get in return. If that data is hidden and does not become part of the published record, then the animals were sacrificed for nothing.

Publication bias can also lead to false conclusions. This in turn can, for example, lead to clinical trials of a drug that seems promising in animal studies. This could potentially expose human subjects to a harmful or just worthless drug that would not have made it to human trials if all the negative animal data were published.

The study itself looked at a database of animal models of stroke. They examined 525 publications involving 16 different stroke interventions. There are a few different types of statistical analysis that can be done to infer probable publication bias. Basically, without publication bias there should be a certain distribution of findings in terms of effect sizes. If only positive or larger effect sizes are being published, then the distribution will be skewed.
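The skewing effect of selective publication can be illustrated with a toy simulation. This is a sketch only – the selection rule, effect sizes, and probabilities below are invented for illustration, not taken from the study: we simulate many studies of a treatment with no real effect, "publish" mostly the statistically positive ones, and compare the pooled estimates.

```python
import random
import statistics

random.seed(42)

SE = 0.1  # assumed standard error of every study's effect estimate

def simulate(n_studies=2000, true_effect=0.0):
    """Effect estimates for a treatment with no real effect."""
    return [random.gauss(true_effect, SE) for _ in range(n_studies)]

def published(estimates):
    """Toy file-drawer model: 'positive' results (one-sided p < 0.05)
    are always published; everything else has only a 10% chance."""
    kept = []
    for est in estimates:
        if est / SE > 1.64 or random.random() < 0.10:
            kept.append(est)
    return kept

all_studies = simulate()
pub = published(all_studies)

print(f"mean effect, all studies:    {statistics.mean(all_studies):+.3f}")
print(f"mean effect, published only: {statistics.mean(pub):+.3f}")
print(f"fraction unpublished:        {1 - len(pub) / len(all_studies):.0%}")
```

The published-only mean comes out clearly positive even though the true effect is zero – a reviewer who can see only the published studies would conclude the treatment works.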

This type of analysis provides an estimation only. They found that:

Egger regression and trim-and-fill analysis suggested that publication bias was highly prevalent (present in the literature for 16 and ten interventions, respectively) in animal studies modelling stroke. Trim-and-fill analysis suggested that publication bias might account for around one-third of the efficacy reported in systematic reviews, with reported efficacy falling from 31.3% to 23.8% after adjustment for publication bias. We estimate that a further 214 experiments (in addition to the 1,359 identified through rigorous systematic review; non publication rate 14%) have been conducted but not reported. It is probable that publication bias has an important impact in other animal disease models, and more broadly in the life sciences.

So there was some disagreement between the methods used, but both showed that there is likely to be a significant publication bias. If their analysis is correct, about one-third of the efficacy reported in systematic reviews of animal stroke studies may be due to publication bias rather than a real effect. The authors also speculate that this effect is likely not unique to stroke, and may apply to animal studies generally.

Of course, this is just an individual study, and further analysis using different data sets is needed to confirm these results.


The results of this study are not surprising and are in line with what is known from examining clinical trials. They suggest that similar methods to minimize publication bias are necessary for animal studies in addition to human trials.

Hopefully, this kind of self-critical analysis will lead to improvement in the technology of medical research. It should further lead to more caution in interpreting not only single studies but systematic reviews.

Also, in my opinion, it highlights the need to consider basic science and plausibility in evaluating animal and clinical trials.

Posted in: Science and Medicine


11 thoughts on “Being Negative Is Not So Bad”

  1. daedalus2u says:

    This also shows the actual damage that is occurring to the scientific literature due to the way that science research is funded, via a competition that punishes the publication of negative studies.

  2. Kausik Datta says:

    The paper in PLoS Biology is important indeed, and highlights several genuine concerns regarding publication bias, which exists not only in scholarly journals but in funding agencies as well.

    Yesterday, I went through this paper, having learnt about it via a Nature News report. The actual PLoS Biology study looked at a small subset of animal experimentation – studies on ischemic strokes and their management – and speculated that its findings may well occur in other areas of biology research.

    However, the Nature News journalist who wrote that up chose to sensationalize the title by writing “Animal studies paint misleading picture” – a broad title which has rather unfortunate connotations, and which, in all probability, will become a rallying point for the committed anti-animal experimentation folks. I posted a critique of that report in my blog elsewhere.

  3. baldape says:

    It seems that science journals are doing it exactly backwards. Submissions to journals should be papers which describe an experiment to be done (or perhaps currently underway), and acceptance should be based on a review of methods and analysis to be performed.

    Once a submission is accepted, the journal should print a “teaser”, basically indicating the type of experiment and anticipated timeline for the results; naturally, it should publish the results, positive or negative, when they are available.

  4. Ash says:

    baldape – I like your idea (though I suspect the journals wouldn’t go for it). An added benefit would be peer review of the methodology before the experiment even begins, which might identify any shortcomings in the design in time to actually correct them.

  5. Kausik Datta says:

    Peer review of the methodology is an intriguing concept, but I doubt it will ever be workable, because of valid cross-concerns about being scooped by competing groups.

  6. daedalus2u says:

    Concerns about being “scooped” are due to the pathological devotion of journals, scientists, and (most important) science funding agencies to “priority” so as to introduce ever more competition into science funding.

    There is no shortage of important things to do research on!

    There won’t be a shortage of important things to do research on for centuries at least!

    Why is scientific research treated as if it is a zero-sum game?

    The only reason is because in a zero-sum game there are clear winners and clear losers, and no one wants to be a loser or to fund a loser, or to associate with a loser.

    The problem is we don’t know what is really important until years later, and then it often isn’t what the glossy glamor mags publish with their hyped-up and misleading headlines.

  7. windriven says:


    “Why is scientific research treated as if it is a zero-sum game?”

    Good science doesn’t come cheap. Those who fund research expect to have a reasonable opportunity to benefit from their investment. That is why, for instance, states have patent laws. It encourages individuals and corporations to work on new ideas.

  8. TKW says:

    It occurs to me that getting scooped might be less of a concern if the journal made it clear that once the description and methodology of the research to be carried out was accepted for publication, all subsequent experiment proposals submitted should be viewed in the context of this initial work. Even if someone else approached the problem from a different angle, trying to shortcut the process, they would have to submit their proposed experimental methodology to a journal.

    I’m under the impression that present journals already check submissions for originality with regard to the already published literature. Would it be too much of a stretch to adopt this procedure for methodology submissions?

    Something like, “Okay, you’ve submitted a set of experiments you’re going to do. The reviewers have drawn our attention to the fact that half of what you’re planning to do is being done right now somewhere else and the other group has already published their methodology. If you’d like to go back and rethink your approach from another angle and come up with another set of experiments that’s original, we’ll publish your methodology.

    But as of now, sorry, that’s not new work.”

    Could that work? I mean, it would encourage more lines of evidence and cut down on duplicated work. Right now, with everyone kept in the dark about other people’s ongoing work, intentional scooping may happen infrequently, but I imagine there’s a lot of unintentional duplication going on.

  9. BillyJoe says:

    “There are a few different types of statistical analysis that can be done to infer probable publication bias. Basically, without publication bias there should be a certain distribution of findings in terms of effect sizes. If only positive or larger effect sizes are being published, then the distribution will be skewed.”

    For example, “funnel plots”.

  10. JMB says:

    I think that is what the journal, “Medical Hypotheses”, was supposed to do.

    “Medical Hypotheses” was the subject of another post on this site.

    Perhaps, “Medical Hypotheses” could revise its approach to use peer review to decide if there had been enough prior study to refute a hypothesis submitted for publication. Alternatively, a section of the journal could be devoted to rejected hypotheses with included rebuttal subject to peer review. That way the author of the rebuttal could receive credit for publication in a journal subjected to peer review.

  11. Just thought I’d mention. The other day I was perusing a book on pain in infants, children and adolescents. The book was basically on managing pain, different techniques for discerning the level of pain at different developmental ages, etc.

    One chapter talks about cultural differences, examining examples of how different cultural attitudes toward pain could affect the effectiveness of pain treatment. For example, if one culture tended to under-report pain, that would have an effect.

    They mentioned the file-drawer effect in studies looking for a connection between pain management/reporting and culture.

    Because I read this blog, I actually knew what they were talking about. Thanks Dr. N
