Articles

Mouse Model of Sepsis Challenged

A recent study published in the Proceedings of the National Academy of Sciences calls into question the standard mouse model of sepsis, trauma, and infection. The research is an excellent example of how proper science investigates its own methods.

Mouse and other animal models are essential to biomedical research. The goal is to find a specific animal model of a human disease and then conduct preliminary research on that model in order to determine which lines of research are promising enough to study in humans. There are also non-animal assays and “test tube” type research that are used to screen potential treatments, but scientists still prefer a good animal model.

It is also understood that animal models are imperfect – mice are not humans, after all. Animal research is therefore not a substitute for human research. I and other SBM authors have regularly criticized proponents of dubious treatments who make clinical claims based upon preliminary animal research. Until something is studied in humans, we cannot make any reliable claims about its safety and efficacy in people.

I would also point out that it is highly problematic to discuss the utility of animal models in general. Each animal model needs to be considered by itself, in terms of how predictive it is, and for what. For example, there are SOD1 mice that carry a mutation causing familial ALS (FALS). This mouse model has served as a screen for many potential ALS treatments. It turns out that the SOD1 mouse is an excellent animal model for SOD1 FALS in humans. This is not surprising considering that it is the same mutation. However, the SOD1 mouse model of ALS is much less predictive for sporadic ALS. Most of the drugs that look promising in the SOD1 model have not shown a significant clinical effect in humans with sporadic ALS (only one drug has actually made it through FDA approval: riluzole).

The question investigated by the authors of the current study is this – how good is the current mouse model of sepsis (a severe, systemic inflammatory response to infection), trauma, and other infections? The deeper question is – how similar is the mouse immune system to the human immune system?

The authors first questioned the mouse model after studying the pattern of gene activation in patients suffering from these three conditions, each of which causes inflammatory stress on the body. They found that the pattern of genetic response was similar across all these conditions, pointing to a common inflammatory response to stress. Their research was criticized, however, for not including mouse data to back it up. So they conducted follow-up research looking at the pattern of gene activation in response to these stress conditions in mice.

What they found surprised them – mice had a different pattern of genetic activation in each of the three conditions, and all of them were different from the response found in humans. In other words, the mouse immune system responds differently to various kinds of stress than the human immune system.

The authors were quick to recognize the implications of their findings: the mouse model of inflammatory stress reactions is likely worthless or even misleading. According to their research, it should not be used as a preliminary screen for research into these conditions in humans.

The data seems fairly robust and the conclusions valid. We still need time for the study to be digested by the scientific community, and perhaps for some follow up research to be conducted, but even with this one study it seems that researchers should think carefully about using mice to research potential treatments in these conditions.

Media reporting about this research was generally good, probably because the story is pretty juicy as it is, but as usual it ramped up the drama a notch or two. It is common for researchers themselves to overestimate the importance and impact of their own research. Often they exaggerate prior ignorance and resistance to their findings in order to magnify the apparent effect of their new findings. The media will tend to focus on this aspect of a science news story, ramping up the drama even further.

The New York Times reporting on this study falls into this pattern, in my opinion. The author, Gina Kolata, makes it seem like the authors of the current study are the first to challenge the mouse model of inflammation, and were met with irrational resistance. She offers as evidence the fact that it took the authors a year to get past peer review and that they were turned down by Nature and Science before being published in PNAS.

This is not unusual at all, however. Taking a year to get published is actually pretty good, and no one should cry about being turned down by two top-tier journals before being published in a third. This is not a sign of resistance, but more like standard procedure.

Further, there are previously published articles also expressing skepticism of the mouse model. This 2012 editorial, published in Nature Medicine and not by one of the authors of the current study, has the title: Rodent model of sepsis found shockingly lacking. This examination from 2008 also concluded that “Based on these criteria, the ideal model of sepsis does not exist.” That review found utility in animal models, but also significant weaknesses that need to be understood.

None of this is to imply that the current study is not an advance – the data on differences in genetic activation is very useful and does change the overall assessment of the utility of mouse models of inflammatory stress such as sepsis. I disagree with the impression given in reporting, however, that this notion comes out of the blue, is fundamentally different from prior attitudes, and was met with unwarranted resistance.

Conclusion

Animal models of disease are vital to biomedical research, but they are also highly variable and problematic. This is well known to researchers, however, who seem to spend a sufficient amount of time questioning and studying the animal models themselves.

The current research appears to be a significant blow to the current mouse models of inflammatory stress, but also contains a wealth of information about which genes become active in humans and mice in response to such stress. This potentially can lead to other useful models or biomarkers, and also to new treatments in this very challenging area of medicine.

Posted in: Clinical Trials, Science and Medicine


29 thoughts on “Mouse Model of Sepsis Challenged”

  1. David Gorski says:

    Yeah, I must admit that I laughed when I read the part about the authors whining that their paper didn’t get accepted to Nature or Science. I really did. Poor babies! Their paper didn’t get into one of the two highest tier journals and they had to settle for a second tier but still very excellent and highly respected journal! The injustice of it all!

    As for its taking a year to get published: geez! As Steve said, that’s actually pretty good. I’ve had papers it took me two years to get published (one took over two years and was submitted to four or five different journals before it was accepted), and I still have data that is at least five years old that I haven’t been able to get published yet. This sort of thing is very common in science (and even more common in medicine, where more clinically-oriented journals frequently take a year or so to get a paper published). It doesn’t mean that there’s some sort of conspiracy to keep the data from seeing print or that some sort of dogmatism is preventing the brave mavericks from publishing, as Kolata implied. I also note that we have no way of verifying that the authors’ characterizations of the reviews they got from Nature and Science were as they say they were. I bet there was more there than just, “I don’t believe this.” I’ve never seen a review in which the reviewer says he doesn’t believe a result but doesn’t spell out why.

    The other thing about PNAS: Although it’s not as bad as it was in the past as far as being a dumping ground for odds and ends from the labs of National Academy of Sciences (NAS) members, it’s still the ultimate scientific “old boys” network. It used to be that if you were a member of the NAS you could publish almost anything you wanted in PNAS, and NAS members regularly used it to publish data they couldn’t get published elsewhere. Indeed, Linus Pauling infamously did just that around 30 years ago when he published his crappy studies suggesting high dose vitamin C as a treatment for cancer in PNAS. Peer review for NAS member-submitted manuscripts was almost nonexistent. In fairness, of late PNAS appears to have tightened that up considerably and instituted something resembling real peer review, but even so from what I’ve heard NAS members appear still to be given a lot of deference.

    Maybe someone with more recent experience with PNAS could tell me what the peer-review and publication process is like these days.

  2. windriven says:

    All of which shows the clear superiority of alternative medicine to allopathic claptrap. Mice indeed; mice don’t even have qis*. What’s next, monkeys and pigs as human models?

    Mainstream “scientists” are forever changing their tunes about what works and what doesn’t. But name even one instance where homeopathy has had to change a preparation or reiki has had to change a procedure! It is true that a few chiropractors have strayed from the foundational concept of subluxation, but these are apostates who poison the purity of chiropractic thought.

    * I was going to do a long riff on qi and cheese here but thought better of it. You can express your thanks by making a contribution to JREF.

  3. windriven says:

    @Dr. Gorski

    ” I’ve had papers it took me two years to get published (one took over two years and was submitted to four or five different journals before it was accepted), and I still have data that is at least five years old that I haven’t been able to get published yet.”

    Is this state of publishing desirable or even sustainable as the rate of change in medical knowledge continues to accelerate?

  4. mousethatroared says:

    Ha – for the last week or so I’ve been attending a GoogleU “class”* on complement, immunity and inflammation, particularly C3 and C4 levels. I carefully skipped all the mouse-model articles with the rationale of “hell, I can barely understand it when they are talking about people, what the heck would I do with mouse information?”

    It’s really absurd how vindicated I feel reading this article.

    *otherwise known as googling various terms that seem relevant, clicking on the sources that seem credible, and reading what I can understand.

  5. qetzal says:

    Derek Lowe at the excellent In The Pipeline blog also has a post on this.

  6. Sawyer says:

    @windriven
    “Is this state of publishing desirable or even sustainable as the rate of change in medical knowledge continues to accelerate?”

    It’s not ideal but it sure as hell beats the alternative. Every scientist and doctor will have their own list of ways they think the publication time can be reduced, and anything that can be done to shrink this delay from years to months is probably worth looking into. However, publication lag time is also one of the secret weapons that helps keep out the cranks and quacks. If reviewers are going to rush through your paper in a week and approve it there’s a good chance that they are going to miss the nonsense you’re trying to pass off as real science. Of course they can make the same mistakes in reviewing your work after it sits on their desk for a month, but the delay itself discourages people from submitting poor work.

  7. David Gorski says:

    Tangent, a little red meat for my SBM friends:
    http://thehealthcareblog.com/blog/2013/02/13/choosing-alternative-medicine/

    Crap. I know this guy. He’s an oncologist where I used to work.

  8. cervantes says:

    While Dr. G is correct that it often takes a long time and multiple submissions to get a paper published — even a really good one, as of course all of mine and Dr. G’s are — I wouldn’t agree with the overall tone that seems to say this is okay. Multiple submissions and revisions are inevitable of course — top tier and even medium tier journals get far more submissions than they can publish, and competition for desirable publication space is, at least in principle, an essential form of quality control. Not that reviewers are infallible, of course, and it can be frustratingly hard to get innovative ideas through their concrete skulls. That said, comments can be helpful, either because oh yeah, you’re actually right about that; or oh yeah, I can see how I need to explain this more clearly to dipshits like you. Either way, you have to take it as constructive.

    But . . .

    The turnaround from submission to review and whatever result — revise and resubmit or reject — is often much longer than it ought to be. It really clogs up the works, slows the progress of science, and of course our all-important ability to get the next grant. I’ve had reviews take as long as 6 months. There is no excuse for that.

  9. ConspicuousCarl says:

    Speaking of impatient researchers, here is Barry Marshall griping about not getting his Nobel prize within 2 years of writing a paper, while promoting a legend which many of us had fallen for:
    http://skepticzone.libsyn.com/the-skeptic-zone-139-18-june-2011

    He’s sort of kidding, but by the end of his rant it seems like maybe he isn’t.

    So that’s the time line for science not suppressed by the establishment. Published in one year, million dollar prize in two.

  10. lilady says:

    Here, from Emily Willingham’s Forbes blog (about mouse models and autistic behavior):

    http://www.forbes.com/sites/emilywillingham/2013/01/07/mouse-behavior-from-autism-studies-not-reproducible/

    “If the study in question is about mice, never talk about how the results will lead to a therapy or a cure or write about the mice as though somehow, they are just tiny humans with tails. Mice have misled us before. They are only a way to model what might happen in a mammal sorta kinda related to us. They are not Us, otherwise we’d live in tiny, crowded places, having 10 children at once and ignoring them when they grow fur, and this autism thing wouldn’t be an issue.”

  11. mousethatroared says:

    @lilady – ohhh, I like that Emily Willingham’s blog. gonna bookmark it.

  12. David Gorski says:

    While Dr. G is correct that it often takes a long time and multiple submissions to get a paper published — even a really good one, as of course all of mine and Dr. G’s are — I wouldn’t agree with the overall tone that seems to say this is okay.

    Um, no. Not quite. The overall tone was one of poking fun at Gina Kolata and the researchers for implying that taking a year to get a paper published is so egregiously outside the norm that it’s evidence that Davis’ work must have been so much against scientific “dogma” that top tier journals wouldn’t touch it.

    I did a quickie PubMed search on him and noticed that he hasn’t published in Nature since 2005 or in Science since 2008. (I’d love to have a paper in either journal ever.) Maybe he expected too much. Maybe the paper isn’t as good as he thinks it is. (I haven’t read it yet, but will.) There are lots of reasons journals reject papers. A lot of times all it takes is one reviewer not to like it even if the other reviewers love it.

  13. Ed Whitney says:

    @ all and sundry:
    Young and Ioannidis http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2561077/pdf/pmed.0050201.pdf discussed, from the perspective of economics, how current publication practices may distort science. Pertinent to this thread is their mention of artificial scarcity when journals select articles for publication. The authors make this claim: “The venue of publication provides a valuable status signal. A common excuse for rejection is selectivity based on a limitation ironically irrelevant in the modern age—printed page space. This is essentially an example of artificial scarcity. Artificial scarcity refers to any situation where, even though a commodity exists in abundance, restrictions of access, distribution, or availability make it seem rare, and thus overpriced. Low acceptance rates create an illusion of exclusivity based on merit and more frenzied competition among scientists “selling” manuscripts… One solution to artificial scarcity—digital publication—is obvious and already employed. Digital platforms can facilitate the publication of greater numbers of appropriately peer reviewed manuscripts with reasonable hypotheses and sound methods.”

    The article is one of the few I have seen which considers perspectives which are the bread and butter of economists, but nearly unknown to clinicians and scientists. Printed page space in the pre-digital age was a matter of resource scarcity; perhaps in our time it is a matter of artificial scarcity.

  14. Angora Rabbit says:

    The whinging about Science and Nature made me laugh out loud as well. If I get a paper accepted on the first submission, I just about fall out of my shoes (which hurts since I like high heels). We’ve all got buckets of data that need to be written up or revised and resubmitted. I take the positive view that the rejections may give me useful feedback to improve the resubmission elsewhere. Mind it usually takes me a few days to get over the rejection itself! My most recent paper was accepted at PLoSOne after doing the rounds at Nature, Nat Neurosci and PLosMedicine. I was pretty happy but it took about 2yrs since we had to add some experiments and the grad student had already left the lab. This experience isn’t atypical, and it’s a far better paper for the effort.

    Ed raises some good points. But I don’t think there’s artificial scarcity given the plethora of journal emails I get begging for submissions. (Mind I think most of those are from crap journals like OMICS.) The real issue is everyone wants to be in the same few journals. What we need to ask is Why? In these days of PubMed, we can find pretty much everything published. Does it really matter what journal it is in? Apart from the EgoBoo. In my lab we call Nature “The Journal of Retracted Results” so how much EgoBoo is that?

  15. David Gorski says:

    If I get a paper accepted on the first submission, I just about fall out of my shoes (which hurts since I like high heels).

    I wouldn’t know the feeling, as, sadly, I’ve never had a paper accepted on the first submission. :-(

  16. Angora Rabbit says:

    Well, I wouldn’t say it happens often. :(

    A good friend with many years of editorship experience tells me that publishers encourage editors to reject manuscripts to help boost the journal’s ranking and thus the fees that can be charged to libraries. As he put it, “It’s the Editor’s job to turn manuscripts away.” He didn’t agree with this, and neither do I. A good editor builds good submissions by arranging for good reviewers, by being widely read enough to locate those reviewers, and by actually reading the reviews rather than rubber-stamping them. Sadly, that is the aberration, not the norm.

    I was recently offered an editorship, which I refused because, to be honest, I don’t know how they do the work plus run a lab plus teach. It is a thankless job. Maybe in a decade or two when I close my lab. At least what I can do now is give constructive feedback. But until study sections start looking at the data and not the publication list, I don’t know when this will change.

  17. qetzal says:

    Let’s be fair. The NYT article doesn’t say the authors complained about the simple fact of being rejected by Science and Nature. Their complaint was that reviewers (supposedly) rejected the paper because they were sure “It [had] to be wrong,” even though they could not point to any actual flaws. I don’t think it’s unreasonable of the authors to complain about that at all. (Assuming their characterization is accurate, of course.)

  18. evilrobotxoxo says:

    @David Gorski: When I first got into science in the 90s, people seemed to consider PNAS a low first-tier journal, right below Science/Nature/Cell. Now I don’t even consider them second-tier, more like high third-tier. But I don’t feel sorry for them because they did it to themselves by allowing themselves to become the dumping ground, like you said.

    @Ed Whitney: low acceptance rates at high journals are not artificial scarcity. Page limits on published articles are artificial scarcity, particularly in Nature and Science, which publish only very short papers. However, when Nature or Science decides to publish a paper, they’re essentially putting their stamp of approval on it, and they can only “approve” as many papers as they think people are willing to read. The scarcity is not at the level of publishing space, but of attention spans of the readers. It’s like saying that David Letterman’s Top 10 is artificial scarcity because there’s no reason he couldn’t have a Top 100 or Top 1000.

  19. Narad says:

    A lot of times all it takes is one reviewer not to like it even if the other reviewers love it.

    As, of course, observed by ComradePhysioProf* some time ago. Back when I was following the Pincus ApEn stuff, I saw some truly ghastly mathematical typesetting in PNAS; I don’t know that I want to sort through it for the worst examples at this point.

    * If memory serves correctly; from DrugMonkey.

  20. Narad says:

    A good friend with many years of editorship experience tells me that publishers encourage editors to reject manuscripts to help boost the journal’s ranking and thus the fees that can be charged to libraries.

    This doesn’t make much sense to me. The real expenses that hit libraries are from publishing stables such as Elsevier, which are selling large, basically nonnegotiable packages, washing out individual journals. Individual societies, which are generally nonprofits in the first place, tend to try to keep costs down on both sides. My actual experience is with smaller operations, but one can take a look at the historical pricing summary (PDF) for the American Physical Society journals. These are falling prices with fairly constant acceptance rates.

  21. nybgrus says:

    Y’all should submit to The Journal of Universal Rejection

    The founding principle of the Journal of Universal Rejection (JofUR) is rejection. Universal rejection. That is to say, all submissions, regardless of quality, will be rejected. Despite that apparent drawback, here are a number of reasons you may choose to submit to the JofUR:

    You can send your manuscript here without suffering waves of anxiety regarding the eventual fate of your submission. You know with 100% certainty that it will not be accepted for publication.
    There are no page-fees.
    You may claim to have submitted to the most prestigious journal (judged by acceptance rate).
    The JofUR is one-of-a-kind. Merely submitting work to it may be considered a badge of honor.
    You retain complete rights to your work, and are free to resubmit to other journals even before our review process is complete.
    Decisions are often (though not always) rendered within hours of submission.

  22. nybgrus says:

    I wouldn’t know the feeling, as, sadly, I’ve never had a paper accepted on the first submission.

    I’ll find out in a bit if I can join the ranks of the rabbit and get my first paper published on my first submission.

    At least you are safe in your high heels for now, Dr. Gorski. I’ll wear flats for the next couple of months. :-p

  23. Draal says:

    I am a current postdoc, and my experience with journals is from chemistry and biochemistry journals. Compared to what was described for clinical medicine, I seem to have had a different experience.
    ACS journals and the like now have fast turnaround policies. I would say 1 week to get a reply from one of the editors (either outright rejection or agreement to review), another week for the editor to find willing reviewers, 2-4 weeks for the reviewers to make comments (reject or accept with revisions), one month for the authors to make corrections (if experiments are needed, they must request more time from the editor), and finally publication online. So one month is a hoped-for turnaround, but ~3 months is my anecdotal start-to-finish time frame. One year for the grad student or postdoc, who may have worked for 2-4 years to get enough research data, is excruciatingly long. NIH and NSF grants for postdocs are usually limited to 3 years, often less. Graduating and finding a good job rely on publishing. Even for untenured profs, a one-year delay contributes to publish or perish. Rejections to a manuscript are expected to come quickly so we can move on to a second journal. I have not submitted to Science or Nature; my experience is more likely to be true for PNAS in the fields of chemistry, biochemistry and engineering.
    Anything involving biology just tacks on time. My experimental time ranking: in vivo humans > animals >> in vitro: eukaryotes > prokaryotes > enzymes > organic chemistry.

  24. Narad says:

    i would say 1 week to get a reply from one of the editors (either outright rejection of agreement to review), another week for the editor to find willing reviewers, and 2-4 weeks for the reviewers to make comments (reject or accept with revisions), one month to make corrections by the authors (if experiments are needed, must request more time from editor) and finally publish online. So one month is a hoped for turnaround but ~3 months is my anecdotal start to finish time frame.

    One issue here is that the production side has been squeezed nearly out of existence, but you’re still looking at an additional lead time even having effectively thrown copy editing overboard. Somebody’s got to size that art and so forth (supplementary material can be a nightmare), and you’ve got to turn around the proofs. In a quick look at the latest ApJL, which has a four-page maximum, the shortest turn from acceptance to on-line publication I found was 15 days, and that thing must have been as clean as a whistle (24 days from submission to acceptance).

  25. LabRat94 says:

    windriven: I hope you are being sarcastic…show me the properly controlled studies demonstrating efficacy of any homeopathic remedy, then let’s talk.

    As for the issue of publication delay, it’s something any published scientist experiences. What would greatly improve the situation is for more scientists to say “yes” when asked to review an article in their field. How many of you say no or delete those requests without even considering the impact on the authors trying to publish their work? If the pool of reviewers was bigger, and if those reviewers cooperated in getting their reviews completed on schedule, the system would run much more smoothly.

    Regarding the Seok et al PNAS paper and their conclusions: I am happy to see that others are also critical of this work and the popular exposure it has been given. I frankly think it was very irresponsible of the authors to make such sweeping statements given the limitations of their methodology. They only used a single mouse strain, and only young male mice. The strain they used, C57BL/6, is notoriously bad for the type of study they performed, and most or all of the authors should have known this at the outset. This mouse strain has a biased immune response; additional strains, including outbred strains, should also have been examined.

    Also, think about the differences in the immune systems of mice and humans: the humans in the study have been out in the world, getting sick, building immunity, etc. The mice live in a sterile environment, even though mice have evolved to be more resistant to infection (given their native environment) than modern humans…so the model was fairly flawed from the outset. Further, the authors examined total blood leukocytes without accounting for differences in cell composition. This introduces the potential for enormous error…if you don’t know which cell type you’re looking at, how can you conclude anything about differences in gene expression?

    Many of these authors are at the tops of their field, and have published probably hundreds of papers in aggregate using mouse models of inflammatory diseases…they almost had to have known going into this that they were biasing their analysis to amplify differences. Even their statistical analysis is flawed: who ever heard of a negative R2 value? They likely meant just “R” when describing “Pearson correlations” but repeatedly used the term “R2” (R-squared)…how did the editors at the journal not even pick this up? Finally, the authors didn’t look at any proteins or even try to reproduce their microarray expression results using another RNA-based method. Microarrays have their limitations. Without confirming their expression data using other methods, or without looking to see what’s going on at the protein level, how can they be so confident in their conclusions?
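    A quick way to see the R2 point: Pearson’s r lies between -1 and 1, so its square can never be negative, and a reported “negative R2” almost certainly means r itself was reported. The sketch below uses made-up toy numbers, purely for illustration:

```python
# Pearson's r ranges from -1 to 1, so r**2 (R-squared) is never
# negative. A reported "negative R2" therefore almost certainly
# means r itself was reported. The data here is made up.

def pearson_r(xs, ys):
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

x = [1, 2, 3, 4, 5]
y = [10, 8, 7, 4, 2]  # roughly anti-correlated with x

r = pearson_r(x, y)
print(r)       # negative, close to -1
print(r ** 2)  # the square is necessarily >= 0
```

    (scipy.stats.pearsonr would give the same r; the point is only that squaring removes the sign, so a negative value cannot be an R-squared.)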

    While the animal rights community (and maybe the CAM community too?) are probably salivating over this paper, it was very irresponsible of respected mainstream news organizations, like the NY Times, to permit the extrapolation that billions have been wasted on useless research because of this single article. Show me examples of new therapies or drugs for humans that did NOT involve any animal research! I’m not suggesting that validation of experimental models isn’t critical. However, the value of a good animal model should not be underestimated…it is the lack of these models that limits discovery of new therapeutics. The PNAS paper will certainly impact funding decisions for labs that use animal models, but perhaps it will mean more funding for clinical research…could that be the authors’ main goal all along? To get the bigger bucks it takes to do the clinical studies? Hmmm….
