
Does peer review need fixing?

One of the most important aspects of science is the publication of scientific results in peer-reviewed journals. This publication serves several purposes, the most important of which is to communicate experimental results to other scientists, allowing them to replicate, build on, and in many cases find errors in the results. In the ideal situation, this communication results in the steady progress of science, as dubious results are discovered and sound results replicated and built upon. Of course, scientists being human and all, the actual process is far messier than that. In fact, it’s incredibly messy. Contrary to popular misconceptions about science, it doesn’t progress steadily and inevitably. Rather, it progresses in fits and starts, and most new scientific discoveries go through a varying period of uncertainty, with competing labs reporting conflicting results. Achieving consensus about a new theory can take anywhere from relatively little time (for example, the less than a decade it took for Marshall and Warren’s hypothesis that peptic ulcer disease is largely caused by H. pylori to be accepted, or the relatively rapid acceptance of Einstein’s Theory of Relativity) to much longer.

One of the pillars of science has traditionally been the peer review system. In this system, scientists submit their results to journals for publication in the form of manuscripts. Editors send these manuscripts out to other scientists to review them and decide if the science is sound, if the methods are appropriate, and if the conclusions are justified by the data presented. This step of the process is very important, because if editors don’t choose reviewers with the appropriate expertise, then serious errors in review may occur. Also, if editors choose reviewers with biases so strong that they can’t be fair, then science that challenges such reviewers’ biases may never see print in their journals. The same thing can occur with grant applications. At the NIH, for instance, the scientists running study sections must be even more careful in choosing scientists to sit on their study sections and review grant applications, not to mention in picking which scientists review which grants. Biases in reviewing papers are one thing; biases in reviewing grant applications can result in the denial of funding to worthy projects in favor of less worthy projects that happen to correspond to the biases of the reviewers.

I’ve discussed peer review from time to time, although perhaps not as often as I should. My view tends to be that, to paraphrase Winston Churchill’s invocation of a famous quote about democracy, peer review is the worst way to weed out bad science and promote good science, except for all the others that have been tried. One thing’s for sure: if there’s a sine qua non of an anti-science crank, it’s that he will attack peer review relentlessly, as HIV/AIDS denialist Dean Esmay did. Indeed, in the case of Medical Hypotheses, the lack of peer review let the cranks run free to the point where even Elsevier couldn’t ignore it any more. Peer review may have a lot of defects and blind spots, but lack of peer review is even worse. It’s no wonder that cranks of all stripes loved Medical Hypotheses.

None of this means that the current system of peer review is sacrosanct or that it can’t be improved. In the 25 years or so I’ve been doing science, particularly in the 20 years since I began graduate school, I’ve periodically heard lamentations asking, “Is peer review broken?” or demanding that the peer review system be radically altered or even abolished. Usually they arise every two or three years, circulate in scientific circles for a while, and then fade away, like the odor of a particularly stinky fart. It looks as though it’s time yet again, as demonstrated by a rather amusingly titled article in a recent issue of The Scientist, I Hate Your Paper: Many say the peer review system is broken. Here’s how some journals are trying to fix it:

Twenty years ago, David Kaplan of the Case Western Reserve University had a manuscript rejected, and with it came what he calls a “ridiculous” comment. “The comment was essentially that I should do an x-ray crystallography of the molecule before my study could be published,” he recalls, but the study was not about structure. The x-ray crystallography results, therefore, “had nothing to do with that,” he says. To him, the reviewer was making a completely unreasonable request to find an excuse to reject the paper.

Kaplan says these sorts of manuscript criticisms are a major problem with the current peer review system, particularly as it’s employed by higher-impact journals. Theoretically, peer review should “help [authors] make their manuscript better,” he says, but in reality, the cutthroat attitude that pervades the system results in ludicrous rejections for personal reasons—if the reviewer feels that the paper threatens his or her own research or contradicts his or her beliefs, for example—or simply for convenience, since top journals get too many submissions and it’s easier to just reject a paper than spend the time to improve it. Regardless of the motivation, the result is the same, and it’s a “problem,” Kaplan says, “that can very quickly become censorship.”

I daresay pretty much every scientist has submitted a paper (probably several), only to have outrageously unreasonable reviewer comments similar to those Kaplan describes above returned to them. I myself have experienced this phenomenon on multiple occasions. Most recently, it took me multiple submissions to four different journals to get a manuscript published. It took nearly a year and a half and more hours of writing and rewriting and doing more experiments than I can remember. But “censorship”? I’m half tempted to respond to Dr. Kaplan: Censorship. You keep using that word. I do not think it means what you think it means. In fact, I just did.

No, incompetent or biased peer review is not “censorship.” It’s incompetent or biased peer review, and it’s a problem that needs to be dealt with wherever and whenever possible. As for “rejecting papers for convenience,” perhaps Dr. Kaplan could tell us what a journal editor should do when he or she gets so many submissions that it’s only possible to publish 10 or 20% of them. Peer reviewers aren’t paid; with the proliferation of journals, the appetite of the scientific literature for peer reviewers is insatiable. Moreover, reviewing manuscripts is hard work. That’s why higher impact journals not infrequently use a triage system, in which the editor does a brief review of each submitted manuscript to determine whether it is appropriate for the journal or has any glaring deficiencies and then decides whether to send it out for peer review.

I have the same problem with another complaint in the article, that of Keith Yamamoto:

“It’s become adversarial,” agrees molecular biologist Keith Yamamoto of the University of California, San Francisco, who co-chaired the National Institutes of Health 2008 working group to revamp peer review at the agency. With the competition for shrinking funds and the ever-pervasive “publish or perish” mindset of science, “peer review has slipped into a situation in which reviewers seem to take the attitude that they are police, and if they find [a flaw in the paper], they can reject it from publication.”

He says that as though that were a bad thing. There is no inherent right to publish in the scientific literature, and papers with major flaws should be rejected. How major or numerous the flaws have to be to trigger rejection comes down to the policies of each peer reviewed journal. Don’t get me wrong. I’m not all Pollyannaish, thinking that our current peer review system is the best of all possible worlds. Improvement in the system can only be good for science, if true improvement it is, and there are some good suggestions for improving peer review in the article.

Perhaps the most pernicious problem in peer review is that of reviewers with a bias or an axe to grind. To attack this problem, some journals are trying to eliminate anonymous peer review. The idea is to make everything completely open and transparent, with the peer reviews becoming “part of the record,” so to speak. I can see the appeal of this change. A reviewer is less likely to “be a dick” if he or she knows that the review will be in the public record, for all to see, or at least that the manuscript authors will know who the peer reviewers are. Personally, I have a problem with this, mainly because I think the downsides of getting rid of reviewer anonymity outweigh the potential good. For example, I rather suspect that a lot of reviewers would be reluctant to be too hard on manuscripts submitted by big names in their field if they knew their names would be on the review. You don’t want to piss off the big Kahunas in your field. These are the people who organize conferences, invite outside speakers, and sit on study sections. In general, it’s not a good idea to get on their bad side, particularly if you’re still young and struggling to make a name for yourself in the field. For example, I’m a breast surgeon, and I know I would be reluctant to apply even deserved respectful insolence to a paper by, for example, Monica Morrow or Armando Giuliano (two very big names in the field) if I knew they would know who was reviewing their papers, even if the paper I was reviewing was obviously crap.

Personally, I like the idea expressed here:

Frontiers journals are trying to find a balance by maintaining reviewer anonymity throughout the review process, allowing reviewers to freely voice dissenting opinions, but once the paper is accepted for publication, their names are revealed and published with the article. “[It] adds another layer of quality control,” says cardiovascular physiologist George Billman of The Ohio State University, who serves on the editorial board of Frontiers in Physiology. “Personally, I’d be reluctant to sign off on anything that I did not feel was scientifically sound.”

As would I.

Another idea I’ve proposed before in debates about peer review is to go for full anonymity. In other words, reviewers are anonymous to the authors of manuscripts, and (here’s the change) the authors are anonymous to the reviewers. One advantage of such an approach is that it would tend to alleviate any effect of personal dislikes or even animosity, and it would “take the glow” off of big names submitting papers, hopefully making it less likely that reviewers would give a weak paper a pass because it came from a big name lab. On the other hand, in small fields everyone knows what everyone else is doing, so anonymizing the manuscript authors would often not hide their identity.

The last two problems with peer review discussed by this paper are highly intertwined:

  • Peer review is too slow, affecting public health, grants, and credit for ideas
  • Too many papers to review

The first of the two problems above is largely a function of the second. As I pointed out above, the appetite of journals for peer reviewers is insatiable, and peer reviewers are not paid. They’re expected to do it out of the goodness of their hearts, as service back to the community of science. True, peer review activity counts when it comes time to be considered for promotion and tenure, but it’s a lot of work for very little reward, not to mention the occasional article like the one under discussion, in which seemingly no one can get it right. Oddly enough, there was one suggestion that I didn’t see anywhere in this article, and that’s to pay reviewers for their hard work. Apparently the financial model of journal publishing won’t support it.

Be that as it may, one proposed solution is to go to a model like that of PLoS ONE:

An alternative way to limit the influence of personal biases in peer review is to limit the power of the reviewers to reject a manuscript. “There are certain questions that are best asked before publication, and [then there are] questions that are best asked after publication,” says Binfield. At PLoS ONE, for example, the review process is void of any “subjective questions about impact or scope,” he says. “We’re literally using the peer review process to determine if the work is scientifically sound.” So, as long as the paper is judged to be “rigorous and properly reported,” Binfield says, the journal will accept it, regardless of its potential impact on the field, giving the journal a striking acceptance rate of about 70 percent.

“The peer review that matters is the peer review that happens after publication when the world decides [if] this is something that’s important,” says Smith. “It’s letting the market decide—the market of ideas.”

This approach has also proven successful, with PLoS ONE receiving their first ISI impact factor this June—an impressive 4.4, putting it in the top 25 percent of the Biology category. And with a 6-fold growth in publication volume since 2007, Binfield estimates that “in 2010, we will be the largest journal in the world.” Since its inception in December 2006, the online journal has received more than 12 million clicks and nearly 21,000 citations, according to ISI.

I realize that my experience is anecdotal, but among the worst reviewer experiences I have ever had was submitting a manuscript to PLoS ONE. In my case, at least, the reviewers were every bit as brutal as any I have ever encountered, which I found odd, because after PLoS ONE rejected my manuscript, I turned it around and reformatted it with minor revisions for a different journal. I even got it accepted, after one round of revisions, by a “traditional” journal with an impact factor significantly higher than that of PLoS ONE. Maybe my experience was anomalous, but I don’t accept the argument that PLoS ONE represents the savior of anything, or even that much of an improvement over traditional publication methods.

More intriguing is the concept of letting authors take their peer review with them when they resubmit their manuscript to a different journal after rejection. When I first heard of this concept, I was quite skeptical. After all, if your paper was rejected, chances are that the reviews weren’t that positive or that they were, at best, lukewarm. Personally, I can say unequivocally that after I’ve had a paper rejected from a journal, the last thing I want is to show the next journal to which I submit my manuscript the crappy reviews that I got the first time around. I want a fresh start; that’s why I resubmit the manuscript in the first place! Peer reviews from the journal that rejected my manuscript are not baggage I want to keep attached to the new manuscript as I submit it to another journal.

In the end, peer review is the mainstay of scientific publishing. While it has a great deal of difficulty detecting fraud, it can usually detect gross methodological flaws in the science being reported, which is about all that can be expected. Post-publication evaluation of such results by fellow scientists, after all, is what ultimately decides what science endures and what science ends up being forgotten. No one claims that the current system is perfect or even that it doesn’t have a lot of problems, some of them serious. However, the cries that “peer review is broken” strike me as a perennial complaint without much substance. As scientists, we can and should do whatever is feasible to shore up the peer review process, and we shouldn’t be afraid of trying out new models of peer review, such as some of those described in this article. Just don’t throw the baby out with the bathwater. Peer review may have significant problems, but it works surprisingly well, given its ad hoc nature, and it’s incumbent upon those who would overthrow or radically alter it to show that the systems vying to replace it would result in better science being published.

Posted in: Medical Academia, Science and Medicine


42 thoughts on “Does peer review need fixing?”

  1. tcw says:

    Liked the post. Politicization of science has contributed to the current situation, which I would speculate is just as important as getting research dollars for the pseudoscientist. That is, publish something on goojum root curing brain cancer and some senator just may quote the research on the floor of the senate. As you seem to allude to, this problem will not go away soon. But may I add that this is especially true with lab coat lobbyists (I’m thinking of the embryonic stem cell crowd a few years back, and now I think the money should have gone to the adult stem cell crowd because it works). White coat lobbyists have made their own bed and now have to lie in it.

  2. blue-pen says:

    Well at least we could start with the truth – all peer reviewed science should be required to come with a written warning “peer review currently means next to nothing so take no stock in the fact that this paper has been peer reviewed.” Science *must* tell the world the truth that peer review means nary anything, at least currently anyways, so start with correcting that world perception first, acknowledge that widely and vociferously first, like how a psychiatrist gets a mental patient to acknowledge first they are ill, and then work from there.

  3. Draal says:

    Thank you David for tackling this topic again. There are a few points I think you should have mentioned. Below is a copied from JACS but it’s applicable to the peer-review process as a whole. I’ve highlighted the information that was not addressed in this blog. Perhaps further discussion on the highlighted topics is warranted.

    “Peer Review. The Editors generally seek the advice of experts about manuscripts; however, manuscripts considered by the Editors to be inappropriate for JACS may be declined without review. The recommendations of reviewers are advisory to the Editors, who accept full responsibility for decisions about manuscripts. Final responsibility for acceptance or declination rests with the Editor.
    Authors are urged to suggest in the cover letter a minimum of six to eight persons competent to review the manuscript, at least half of whom must be from North America. An author may request that a certain person not be used as a reviewer. The request will generally be honored by the Editor handling the manuscript, unless the Editor feels this individual’s opinion, in conjunction with the opinions of other reviewers, is vital in the evaluation of the particular manuscript. Reviewer identities are confidential, and the names of reviewers will not be revealed to an author.
    Reviewers are asked to evaluate manuscripts on the scientific value of the work, the level of interest to the broad and diverse readership, the appropriateness of the literature citations, and the clarity and conciseness of the writing.”

    I’ve encountered all the highlighted topics personally in my short publishing career so I assume other authors have too. Those that are not intimately familiar with the peer-review process may not necessarily know of these guidelines.

  4. art malernee dvm says:

    Studies show that, except for technical editing, peer review does not work. I think it’s because of the way the brain has developed in order to survive. Why ask doctors to do something with a tool that studies show cannot accurately measure in the real world? Even the eyes and brain cannot produce a real image. Why not call peer review “scientific review” and publish the math, data, and anything scientifically measured in the real world on the internet, so it is available on your cell phone, and keep the “art” of medicine out of medicine?

  5. art malernee dvm “I think its because of way the brain has developed in order to survive. Why ask doctors to do something with a tool that studies show cannot accurately measure in the real world.”

    Because I’d prefer my surgeon use his brain, flawed as it is?

    Not that I don’t appreciate the existential approach to publishing.

    Before I started commenting, I was concerned that I would be unable to type, what with all those measuring flaws. But I seem to be managing. Of course….It’s just an illusion.

  6. David Gorski “But “censorship”? I’m half tempted ot respond to Dr. Kaplan: Censorship. You keep using that word. I do not think it means what you think it means. In fact, I just did.”

    Thank you, I keep hearing about censorship these days and wonder at what point in history the first amendment went from protecting us from being thrown in jail for our speech or having our presses, newspapers and books confiscated (or burned) to guaranteeing us a publisher, job or air time (plus a pat on the head) regardless of the quality of our speech.

    Interesting article. Thanks for covering it.

  7. Scott says:

    @michele:

    The way I like to put it is, you have a right to speak freely. I, however, also have a right to ignore, heckle, and/or debunk your speech.

  8. art malernee dvm says:

    Because I’d prefer my surgeon use his brain, flawed as it is?>>>

    that’s what produces a scar that denotes an unproved remedy

    “Prepublication peer review is faith based not evidence based”

    see

    Richard Smith: Scrap peer review and beware of “top journals”
    22 Mar, 10 | by julietwalker

    The neurologist and epidemiologist Cathie Sudlow has written a highly readable and important piece in the BMJ exposing Science magazine’s poor reporting of a paper on chronic fatigue syndrome, (1) but she reaches the wrong conclusions on how scientific publishing should change.

    For those of you who have missed the story, Science published a case control study in September that showed a strong link between chronic fatigue syndrome and xenotropic murine leukaemia virus-related virus (XMRV). (2) The study got wide publicity and was very encouraging to the many people who believe passionately that chronic fatigue syndrome has an infectious cause. Unfortunately, as Sudlow describes, the study lacked basic information on the selection of cases and controls, and, worse, Science has failed to publish E-letters from Sudlow and others asking for more information.

    In the meantime, three other studies have not found an association between chronic fatigue syndrome and XMRV. (3-5)

    To avoid such poor reporting in the future Sudlow urges strengthening the status quo—more and better prepublication peer review. Not only is she trying to close the stable door after the horse has bolted she has also failed to recognise the possibilities of the new Web 2.0 world. The time has come to move from a world of “filter then publish” to one of “publish then filter”—and it’s happening.

    Prepublication peer review is faith based not evidence based, and Sudlow’s story shows how it failed badly at Science. Her anecdote joins a mountain of evidence of the failures of peer review: it is slow, expensive, largely a lottery, poor at detecting errors and fraud, anti-innovatory, biased, and prone to abuse. (6 7) As two Cochrane reviews have shown, the upside is hard to demonstrate. (8 9) Yet people like Sudlow who are devotees of evidence persist in belief in peer review. Why?

    The world also seems unaware that it is scientifically dangerous to read only the “top journals”. As Neal Young and others have argued, the “top journals” publish the sexy stuff. (10) The unglamorous is published elsewhere or not at all, and yet the evidence comprises both the glamorous and the unglamorous.

    The naïve concept that the “top journals” publish the important stuff and the lesser journals the unimportant is simply false. People who do systematic reviews know this well. Anybody reading only the “top journals” receives a distorted view of the world—as this Science story illustrates. Unfortunately many people, including most journalists, do pay most attention to the “top journals.”

    So rather than bolster traditional peer review at “top journals,” we should abandon prepublication review and paying excessive attention to “top journals.” Instead, let people publish and let the world decide. This is ultimately what happens anyway in that what is published is digested with some of it absorbed into “what we know” and much of it never being cited and simply disappearing.

    Such a process would have worked better with the story that Sudlow tells. The initial study would have appeared–perhaps to a fanfare of publicity (as happened) or perhaps not. Critics would have immediately asked the questions that Sudlow asks. Instead of hiding behind Science’s skirts as has happened, the authors would have been obliged to provide answers. If they couldn’t, then the wise would disregard their work. Then follow up studies could be published rapidly.

    Unfortunately, unlike physicists, astronomers, and mathematicians, all of whom have long published in this way, biomedical researchers seem reluctant to publish without traditional prepublication peer review. In reality this is probably because of innate conservatism and the grip of the “top journals” who insist on prepublication review, but biomedical researchers often say “But our stuff is different from that of physicists in that it may scare ordinary people. A false story, for example, “Porridge causes cancer” can create havoc.”

    My answer to this objection is that this happens now. Much of what is published in journals is scientifically poor—as the Science article shows. Then, many studies are presented at scientific meetings without peer review, and scientists and their employers are increasingly likely to report their results through the mass media.

    In a world of “publish then filter” we would at least have the full paper to dissect, whereas reports in the media even if derived from scientific meetings, include insufficient information for critical appraisal.

    So I urge Sudlow, a thinking woman, to reflect further and begin to argue for something radical and new rather than more of the same.

    1. Sudlow C. Science, chronic fatigue syndrome, and me. BMJ 2010;340:c1260

    2. Lombardi VC, Ruscetti FW, Das Gupta J, Pfost MA, Hagen KS, Peterson DL, et al. Detection of an infectious retrovirus, XMRV, in blood cells of patients with chronic fatigue syndrome. Science 2009;326:585-9.

    3. Van Kuppeveld FJM, de Jong AS, Lanke KH, Verhaegh GW, Melchers WJG, Swanink CMA, et al. Prevalence of xenotropic murine leukaemia virus-related virus in patients with chronic fatigue syndrome in the Netherlands: retrospective analysis of samples from an established cohort. BMJ 2010;340:c1018.

    4. Erlwein O, Kaye S, McClure MO, Weber J, Willis G, Collier D, et al. Failure to detect the novel retrovirus XMRV in chronic fatigue syndrome. PLoS One 2010;5:e8519.

    5. Groom HC, Boucherit VC, Makinson K, Randal E, Baptista S, Hagan S, et al. Absence of xenotropic murine leukaemia virus-related virus in UK patients with chronic fatigue syndrome. Retrovirology 2010;7:10.

    6. Godlee F, Jefferson T. Peer Review in Health Sciences. 2nd ed. London: BMJ Books; 2003.

    7. Smith R. Peer review: A flawed process at the heart of science and journals. J R Soc Med 2006;99:178-182.

    8. Jefferson T, Rudin M, Brodney Folse S, Davidoff F. Editorial peer review for improving the quality of reports of biomedical studies. Cochrane Database of Systematic Reviews 2007, Issue 1. Art. No.: MR000016. DOI: 10.1002/14651858.MR000016.pub3

    9. Demicheli V, Di Pietrantonj C. Peer review for improving the quality of grant applications. Cochrane Database of Systematic Reviews 2007, Issue 1. Art. No.: MR000003. DOI: 10.1002/14651858.MR000003.pub2

    10. Young NS, Ioannidis JPA, Al-Ubaydli O, 2008 Why Current Publication Practices May Distort Science. PLoS Med 5(10): e201. doi:10.1371/journal.pmed.0050201

    Competing interest: RS is on the board of the Public Library of Science and an enthusiast for open access publishing, but he isn’t paid and doesn’t benefit financially from open access publishing.
    ********
    art malernee dvm on 08 Apr 2010 at 1:40 pm
    Studies show that expert peer review can not be trusted
    What study? How could a study invalidate peer review? That does not make any sense>>>>
    Editorial peer review for improving the quality of reports of biomedical studies
    Jefferson T, Rudin M, Brodney Folse S, Davidoff F
    Summary
    Editorial peer review for improving the quality of reports of biomedical studies
    Editorial peer review is used world-wide as a tool to assess and improve the quality of submissions to paper and electronic biomedical journals. As the information revolution gathers pace, an empirically proven method of quality assurance is of paramount importance. The increasing availability of empirical research on the possible effects of peer review led us to carry out a review of current evidence on the efficacy of editorial peer review. We found few studies of reasonable quality, and most of these were concerned with the effects of blinding reviewers and/or authors to each others’ identity. We could not identify any methodologically convincing studies assessing the core effects of peer review. Major research is urgently needed.
    This is a Cochrane review abstract and plain language summary, prepared and maintained by The Cochrane Collaboration, currently published in The Cochrane Database of Systematic Reviews 2010 Issue 3, Copyright © 2010 The Cochrane Collaboration. Published by John Wiley and Sons, Ltd.. The full text of the review is available in The Cochrane Library (ISSN 1464-780X).
    This record should be cited as: Jefferson T, Rudin M, Brodney Folse S, Davidoff F. Editorial peer review for improving the quality of reports of biomedical studies. Cochrane Database of Systematic Reviews 2007, Issue 2. Art. No.: MR000016. DOI: 10.1002/14651858.MR000016.pub3.
    This version first published online: October 20, 2003
    Last assessed as up-to-date: February 20, 2007
    Abstract
    Background
    Scientific findings must withstand critical review if they are to be accepted as valid, and editorial peer review (critique, effort to disprove) is an essential element of the scientific process. We review the evidence of the editorial peer-review process of original research studies submitted for paper or electronic publication in biomedical journals.
    Objectives
    To estimate the effect of processes in editorial peer review.
    Search strategy
    The following databases were searched to June 2004: CINAHL, Ovid, Cochrane Methodology Register, Dissertation abstracts, EMBASE, Evidence Based Medicine Reviews: ACP Journal Club, MEDLINE, PsycINFO, PubMed.
    Selection criteria
    We included prospective or retrospective comparative studies with two or more comparison groups, generated by random or other appropriate methods, and reporting original research, regardless of publication status. We hoped to find studies identifying good submissions on the basis of: importance of the topic dealt with, relevance of the topic to the journal, usefulness of the topic, soundness of methods, soundness of ethics, completeness and accuracy of reporting.
    Data collection and analysis
    Because of the diversity of study questions, viewpoints, methods, and outcomes, we carried out a descriptive review of included studies grouping them by broad study question.
    Main results
    We included 28 studies. We found no clear-cut evidence of effect of the well-researched practice of reviewer and/or author concealment on the outcome of the quality assessment process (9 studies). Checklists and other standardisation media have some evidence to support their use (2 studies). There is no evidence that referees’ training has any effect on the quality of the outcome (1 study). Different methods of communicating with reviewers and means of dissemination do not appear to have an effect on quality (3 studies). On the basis of one study, little can be said about the ability of the peer-review process to detect bias against unconventional drugs. Validity of peer review was tested by only one small study in a specialist area. Editorial peer review appears to make papers more readable and improve the general quality of reporting (2 studies), but the evidence for this has very limited generalisability.
    Authors’ conclusions
    At present, little empirical evidence is available to support the use of editorial peer review as a mechanism to ensure quality of biomedical research. However, the methodological problems in studying peer review are many and complex. At present, the absence of evidence on efficacy and effectiveness cannot be interpreted as evidence of their absence. A large, well-funded programme of research on the effects of editorial peer review should be urgently launched.

  9. Shelley says:

One difficulty with peer review is that if you work in a particularly narrow area of your field, there are maybe only a few other researchers (aka ‘the competition’) doing similar work. These individuals (your peers) will inevitably be asked by the journal to review your paper.

While this sounds like a great idea (who would be more rigorous than the competition?), these may also be the individuals who would be least happy to see their work challenged in any way. Consequently, you may have reviewers who are psychologically invested in their own pet models and theories, and therefore in rejection, rather than in seeing the field move forward.

Another problem with the review process has to do with the review of the statistical analysis. All things being equal, there are four major components that a good reviewer needs to examine: the introduction (does the research make sense given what we know so far?); the method (was the study designed properly to answer the question?); the analysis (statistics); and the conclusion (do the data support the authors’ conclusions?). My sense is that reviewers generally do the best job on the intro, method, and conclusions and the poorest job on the statistics.

    It has been my experience that both researchers and consumers of journal articles are often poor statisticians, and I’ve known a number of researchers who hand the statistical analysis of their work off to others. This means that when it comes to reviewing others’ work, this is the area that reviewers are often least competent to consider. This is also the section of the article that most consumers skim at best. (When was the last time anyone on SBM criticized the statistics in a published article?)

    This problem could be solved by having reviewers whose sole job is to consider the analysis of the data — but it may make a lengthy process much longer.

    (Several years ago, I submitted a paper that used a new statistical technique in the analysis. The technique was intended to overcome a well-known problem with certain types of data. The article was rejected by all reviewers on statistical grounds and I was asked to re-analyze the data using the older technique, which I knew to be improper. It was finally published with the correct technique but took more than a year longer in an already very, very long process.)

  10. WilliamLawrenceUtridge says:

One problem a friend of mine had: they submitted a successful study to one of the most prestigious journals in the field, and it was rejected. The rejection caused a loooooong delay while they prepared to submit it to another journal. In that time, another senior researcher produced a surprisingly similar paper, with a surprisingly similar method and conclusion.

    My friend was of the opinion that the senior researcher was a peer reviewer of his paper, and essentially stole the idea and rammed it through peer review.

So yeah, theft may also be an issue. It’d be nice if we could come up with a less bad system, but I can’t think of one so far.

  11. Draal says:

    “My friend was of the opinion that the senior researcher was a peer reviewer of his paper, and essentially stole the idea and rammed it through peer review.”
Perhaps, but I think it’s more likely that someone was trying to avoid being scooped by, er, counter-scooping.

12. art malernee dvm,

    “thats what produces a scar that denotes a unproved remedy”

It’s true. I had a professor who was working in his studio with massive pieces of sheet metal. There was an accident, and a piece of sheet metal cut his shoulder, almost amputating his arm. He was taken into surgery soon enough that they were able to save the arm, and he has regained most of its use and strength. But, tragically, he does have a terrible scar. He seldom goes shirtless in class, I hear. Terrible, those unproven remedies.

  13. art malernee dvm says:

    Terrible, those unproven remedies.>>> why not just use the parachute argument rather than an antedote?

    see

    http://www.bmj.com/cgi/content/full/333/7570/701

  14. art malernee dvm,

    If I understand you correctly, you are suggesting that scientific journals be abolished and that scientists should just individually publish their research directly on their blogs where it can be critiqued by the whole world.

    This makes no sense to me, because journals will just reappear. I can’t read all the research in the world every day, so I will look to a scientific research blog aggregator. If I am running such an aggregator, I will want to maintain credibility (not get caught aggregating plans for perpetual motion machines) by asking knowledgeable scientists in the fields to take a look and make sure only plausible research adding something net-new gets into my aggregator. This is different from a journal in that my research can be aggregated by infinitely many aggregators, unlike today’s paper journals where research is published only once (but the article can be referenced in infinitely many review articles). This would reduce the power of any given aggregator and reduce its ability to charge money for what it’s doing — and therefore its ability to pay qualified staff.

    Your comments about “prestigious” journals make little sense either. Most research just adds a little increment in a small field and is not of interest to journalists writing for the general public. The journals that journalists look to are the ones that publish articles that are both credible and of general interest. Sure, it’s great if you can get published in one of the big ones, because the whole world will see your work. But if the whole world doesn’t actually need to see your work, you can publish in a more targeted journal where anyone who wants to see your work can find it.

    How would you go about abolishing scientific journals anyway? Who would have the power to do it? Why would they? If it takes someone with power to make them go away, presumably there are people who find them useful because they use them and pay for their services today.

  15. Always Curious says:

    My experience in the accounting world is that peer review means an accountant from another office physically stopping by the office. As part of the site visit, the record keeping, procedures, and other relevant activities are directly observed to ensure that they are in line with professional standards. The peer review is required every so many years (5?) as a condition of membership in certain professional organizations. Each member is asked to peer review other firms, so everyone has a chance to be both reviewer and reviewed.

    In the realm of science, it would be nice to see some of this happen–maybe in conjunction with publishing, maybe not. I know some labs that would benefit from an outside PI stopping by, observing, and commenting on what is happening. This would improve the standards across the field, and possibly hasten the publication review process for labs that are well known for high internal standards. This would be especially helpful if a double-blind publication review process isn’t implemented.

  16. anecdote or antidote?

    The article you linked is about controversies regarding randomized controlled trials, where a parachute metaphor is a bit more relevant…

    My argument is more in the line of ‘just because the human brain is imperfect and shows flaws in perception, it does not follow that it is a useless tool, or that peer review is useless.’

    Also, I would ask, why the emphasis on ‘allowing scientists to publish their papers and raw data online for everyone to see without peer review’? No one is stopping anyone from putting whatever they want online. Unless you are in China you can access anything.

  17. oops, my last comment was a response to

    art malernee dvm
    “Terrible, those unproven remedies.>>> why not just use the parachute argument rather than an antedote?”

  18. Anti-Social Scientist says:

    I think you may be dismissing paying reviewers too quickly. In some economics journals you pay a bit ($100-$200) to submit a paper. Then reviewers get some of that money, contingent on getting a review in on time. This is on my mind because I sat down and completed a referee report over this weekend in order to get it in by the Monday deadline for payment.

    Even though the amounts aren’t that big, they seem to create pretty good incentives not to sit on a report forever. Of course, this only helps solve the lazy-referee problem, not the malicious or unethical referee problem.

  19. Angora Rabbit says:

    I also read the articles in The Scientist last week and am pleased that the topic is being discussed here. I am in agreement with Keith Yamamoto and several of the other scientists interviewed. Peer-review of manuscripts in biology/medicine has changed significantly in the past 5 yrs and not for the better. I speak as someone with nearly 60 peer-reviewed publications, 20 yrs in the field, and on the editorial board of several journals. Dr. Gorski’s article touches on a few of the points. I’d like to raise several more:

    1. The growing emphasis on publishing in high-impact journals. Although the number of journals has grown, ever more manuscripts are chasing the same small set of high-impact journals. Take a look at the number of submissions at Nature over the past 10 yrs as an example. Little wonder that editors at these journals are feeling overwhelmed, and with good reason.

    2. The increased emphasis on Citation Rankings. These days one is forced to pay attention to the Impact Factor. Reviewers of my major NIH grant (funded, btw) said “Nice work, but why aren’t you publishing in higher-impact journals?” So guess what I’m doing as renewal time approaches – circulating 4 different manuscripts through successive rings of journals with IF>5, even though they’d be snapped up by my specialty journals with IFs in the 3-5 range. And this is not unique to me; my study section’s reviewers want it. This wastes my time and theirs, but the system is now set up to demand it.

    3. Editors are no longer widely read, with a few exceptions. It used to be that an editor could look at a m/s and the reviewer comments and reach an independent decision. Now, because the editors are often narrowly read, they must rely on the reviewers, who may or may not have experience in your research area, and run with their judgment. I am sorry that the editors are overworked, but at the same time *it is their job* to spend thoughtful time with each m/s and make a decision on it.

    4. Sex sells. That is, in order to get into a higher IF journal, one has to “sex up” the claims and importance of the manuscript to the editor. It’s not sufficient anymore to have good science – one has to have a finding and research impact that is “major” and will boost the journal’s IF further. And you wonder why people are overstating their data? The publication system now insists on it. God help you if you are researching a disease with a smaller impact, because the editors won’t consider it. I would argue this wouldn’t be necessary if the Editors were more widely read than they are. Instead, one has to do the legwork for the editor. I spend at least as much time writing the cover letter to get past the editorial filter as I do on the actual science. Sheesh.

    5. I love the idea of submitting manuscripts without the author affiliations. This would level the playing field and require the paper to be reviewed on the basis of the science, rather than “getting a break” because it is from a big-name lab. We all know of cr*p papers in Nature that are only there because they come from So-and-So’s lab.

    6. I also like the idea of publishing on-line without peer-review and letting readers comment on the m/s. The physicists have done it this way for years and it seems to work for them.

    7. A colleague who has been an editor for decades pointed out to me last week, when we were talking about The Scientist articles, that the Editor’s job today is not to publish papers. It is to say “No” to submissions. That’s very sad.

    And if this sounds angry, you’re darn tootin’ it is. It is very sad to see how publication has changed. I fear for the younger generation as yet untenured – I don’t know how they’re going to succeed.

  20. trrll says:

    I also like the idea of paying reviewers. Just pass the cost on to the authors. Most of us are used to paying review fees and page charges anyway, and besides, we can recoup the cost by doing some reviewing ourselves. Of course, it won’t be possible to pay reviewers anything like what their time is really worth, but most of us are used to giving seminars for a pittance of an honorarium anyway, and getting paid something is better than getting paid nothing.

    I think that even a modest payment will encourage reviewing, and will encourage reviewers to take the process more seriously, resulting in higher-quality reviews.

    I think that people are worrying too much about the “high-profile” journals. They publish a small fraction of what gets published. Most people find new publications in their own field by literature search these days, not by reading high-profile journals anyway. We’re all happy to get into a high-profile journal, to be sure, but those journals review by a very different standard. The high-profile journals are a bit like news magazines, favoring work that is new or surprising (and therefore more likely to turn out to be wrong). The real workhorse journals are the second- and third-tier journals.

  21. rork says:

    I very much liked that last comment from Angora Rabbit, except that I see problems with #6 (readers commenting on the paper) for the biological sciences, partly because it is very hard to say “I think you are cheating” and then sign your name.

    I also liked the comment by Shelley, and there was news today where a group of statisticians in biological fields is asking that reviewers and editors start doing their jobs:
    http://journals.lww.com/oncology-times/blog/newestnews/pages/post.aspx?PostID=34

    A common conclusion of reviews I do is “it is impossible to review this paper adequately since the data are not available and the methods are incomprehensible or absent”, and that is a standard complaint about published papers too.

  22. art malernee dvm says:

    6. I also like the idea of publishing on-line without peer-review and letting readers comment on the m/s. The physicists have done it this way for years and it seems to work for them.>>>>>

    Interesting how this issue is in the news today
    see
    Wikipedia Age Challenges Scholars’ Sacred Peer Review
    http://www.nytimes.com/2010/08/24/arts/24peer.html?_r=1&hp

  23. Jurjen S. says:

    Shelley wrote:

    My sense is that reviewers generally do the best job on the intro, method, and conclusions and the poorest job on the statistics.

    Question: if a reviewer sucks at statistics, how is he able to tell whether the data supports the paper’s conclusions? And, by extension, whether the research did what the authors said in the intro they intended to do?

  24. JMB says:

    I like the argument that physicists have advanced the progress of science by changing the publication/peer review process. An additional step that might be wise for medicine is having an online, formalized, open peer review process. Articles could be judged by a list of criteria, and comments could be directed towards the specific criteria. Registered reviewers with documented credentials could vote on each criteria. Statisticians could comment on experimental design and statistical methods, clinical scientists could comment on the assumptions about the disease process inherent in the experimental model, etc. When an electronically published article reached a sufficient number of votes, with a sufficient majority (like a random walk reaching a sufficient positive count), then it would be published in a print journal. The popular press would be cautioned to pay attention to the articles that pass scrutiny, not every article that is electronically published.
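
    A minimal sketch of the vote-counting idea above, in Python. The threshold of 10 and the ±1 vote values are illustrative assumptions of mine, not part of the proposal:

```python
# Illustrative sketch of the proposed open-review tally: registered reviewers
# cast +1 (accept) or -1 (reject) votes on a criterion, and the article passes
# that criterion once the running count first reaches a preset positive
# threshold, like a random walk hitting an absorbing barrier.
# The threshold of 10 is an arbitrary placeholder.

def passes_criterion(votes, threshold=10):
    """Return True if the running sum of +/-1 votes ever reaches +threshold."""
    count = 0
    for vote in votes:
        count += vote
        if count >= threshold:
            return True
    return False

# Twelve accepts against two rejects cross a threshold of 10;
# an even split of accepts and rejects never does.
print(passes_criterion([1] * 12 + [-1] * 2))  # True
print(passes_criterion([1, -1] * 50))         # False
```

    A real system would of course need reviewer credential checks and per-criterion tallies, as the comment describes; this only shows the stopping rule.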

    If there is too much damage done by patients seizing prematurely on electronically published research before it passes scrutiny, then perhaps the internet review process could become more of a virtual private network process. In physics, I don’t think there is any serious damage if someone seizes on the published article that the position and momentum of an electron can be calculated with greater precision than the limit specified by the uncertainty principle. In medicine, there is evidence that premature release of a scientific article may have resulted in excess deaths of patients.

  25. JMB on the future of print journals: “When an electronically published article reached a sufficient number of votes, with a sufficient majority (like a random walk reaching a sufficient positive count), then it would be published in a print journal.”

    Why? Why would a print journal even exist in this scenario? What function is it filling?

  26. JMB says:

    Perhaps, sometime in the future all physicians will be computer savvy, but not yet. To reach the widest audience, a combination of print and electronic media is still needed.

  27. JMB, in your version of peer review a “journal” will exist only to make paper copies for those people who do not use computers. Why not just have a collaboration between the website where we all get to give points to the papers we like, and a copy shop in our neighbourhood? There’s no need for high-paid scientific or editorial staff to get involved when the copy shop can do the job.

    Everyone else will get their information earlier, online. In the future (presumably around the time that peer review is made obsolete) very few of the copy-shop-requiring folks will still be around. Will it really make sense to publish a scientific journal if your only customers are about to die?

  28. JMB says:

    @Alison

    I have a nephew who worked as a science editor for an industry publication, and he was not highly paid. Angora Rabbit may have an idea what the executive editorial staff is paid, but I doubt that it is much compared to Wall Street or Silicon Valley standards. Publishing companies will continue with print so long as it is profitable. I would not worry about money being wasted in the print publications; the profit motive will take care of that. However, I suspect the better known print publications will continue for some time. There are some publications that receive support from governments, usually with the name of the government in their name. I think most of the well known journals get enough profit from the advertisements. Angora Rabbit is in a better position to comment on that than I.

    I am not advocating doing away with peer review. I am just voicing agreement to some of the suggestions about how it can be improved. “Publish and then criticize” is a radical change, but does not necessarily negate the function of peer review. The real function of peer review is to identify an article as meeting the standards of the publication (whether it is scientifically valid). Acceptance of a scientific work as scientifically valid can be indicated by an organized polling system of people with acceptable scientific credentials.

    Will enough people with solid credentials participate in such a process? If academic promotion and tenure committees consider such work as merit, then many academics would participate without direct reimbursement. Graduate students in biostatistics would probably have enough education to critique statistical methods in such electronically published works of medical science.

  29. JMB says:

    @Alison,
    Among recent graduates, I still see a fraction who are reluctant to use electronic communication; it’s not just a matter of the old dinosaurs passing away.

  30. JMB, in the present, journals have an editorial function and they charge money to access their articles.

    In the system you propose the editorial function would disappear and “journals” would exist only to produce paper copy. You propose a rollover where once a publicly available document accumulates a certain number of points it moves along the assembly line and is dropped passively into a journal which prints it to make it available to non-computer-users.

    Let’s say that 20% of doctors and scientists don’t use computers at all.

    Will the NEJM continue to exist for the sole purpose of printing documents currently already available to everyone, so that they can distribute the printed copy to the 20% who don’t use computers? Or will it have another function? Because if the NEJM has no more value-add than Kinko’s I suspect it will disappear.

    Right now the NEJM

  31. [ahem, sorry about that]

    Right now the NEJM gets money from 100% of the readers of its articles. It selects, reviews and edits articles that it thinks will be edifying for its public, and it publishes and distributes the articles electronically and on paper.

    You are proposing a system where it will get money from only 20% of the readers of its articles and will add no value beyond moving an existing electronic document onto paper.

    This is not the same meaning of “journal.”

  32. JMB says:

    @Alison
    It is not originally my proposal that publishing be done electronically without prior peer review; it has been used in the physics community, as others have commented here. I was just suggesting a mechanism to establish the peer review function after publication. A tested mechanism for open review already exists at Wikipedia. I am not very familiar with how it works in physics or at Wikipedia, so I am not sure whether there is anything unique about my suggestion.

    It is a radical change that would force a new meaning for what a journal is. There are less radical proposals for strengthening the peer review process as it now exists. I doubt that the medical community is ready to accept such radical change. I think Dr Gorski has stated the more mainstream thinking of the medical science community in his post.

    I am not sure how much income from publishing medical journals derives from subscription fees versus advertising. That would vary based on the editorial policy on advertising. The stricter the editorial policy, the more the income would depend on subscription charges (either electronic or print).

  33. JMB”An additional step that might be wise for medicine is having an online, formalized, open peer review process. Articles could be judged by a list of criteria, and comments could be directed towards the specific criteria. Registered reviewers with documented credentials could vote on each criteria. Statisticians could comment on experimental design and statistical methods, clinical scientists could comment on the assumptions about the disease process inherent in the experimental model, etc. When an electronically published article reached a sufficient number of votes, with a sufficient majority (like a random walk reaching a sufficient positive count), then it would be published in a print journal.”

    JMB – from a web perspective, that’s quite a nice concept. Now you just need someone to build/host/advertise it.

  34. I wonder how such a site would be monetized, at least to the point where it is self-supporting?

  35. “When an electronically published article reached a sufficient number of votes, with a sufficient majority (like a random walk reaching a sufficient positive count), then it would be published in a print journal.”

    “It is not my proposal that publishing be done electronically without prior peer review.”

    Ok, then I guess I don’t understand how your proposal works. Is it this?

    1) I decide I want to publish in Nature.
    2) I post my article on the Nature open-source peer-review site.
    3) It sits there until either
    a) it gets enough points that Nature chooses it for its print edition or
    b) I get tired of waiting and move it to the Medical Hypotheses open-source peer review site instead.

    This would make money for Nature and Medical Hypotheses because the journals could charge money to look at the articles in their electronic queues, as well as for the articles they committed to paper.

    Or maybe it’s this?
    1) I decide I want to publish in Nature.
    2) Nature’s editorial staff review my article and either reject it or send it out for peer review in the traditional way.
    3) Once the article has been traditionally peer-reviewed, Nature posts my article on their open-source peer-review site.
    4) It sits there until either
    a) it gets enough points that Nature chooses it for its print edition or
    b) I start kicking myself because I’ve committed to Nature but it looks like I’ll never be officially published.

    This is like the current journal model but with two phases of peer review instead of one.

    Because I thought it was this:

    1) I decide I am ready to publish.
    2) I post my article on the medical open-source peer-review site.
    3) It sits there until it collects enough points that a journal decides to print it on paper – by which point at least 80% of the journal’s potential readers have already had access to it for months.

    In this version the journals can only charge money for the articles they commit to paper, and even these articles are no longer breaking news. If I were Nature I don’t see how this would be a very appealing economic model for me.

  36. Scott says:

    I also like the idea of publishing on-line without peer-review and letting readers comment on the m/s. The physicists have done it this way for years and it seems to work for them.

    Not really the whole story. Anything published online that hasn’t at least been submitted to a peer-reviewed journal is typically given little weight. And if it’s been too long since being published online without actually appearing in such a journal, it gets put into the same category. So the traditional peer review process is still very important.

    That’s been my experience in high-energy, at least – I can’t speak with confidence about other subfields of physics. But the idea some people here seem to have about how the online publishing works is, shall we say, overly idealistic.

  37. art malernee dvm says:

    In medicine, there is evidence that premature release of a scientific article may have resulted in excess deaths of patients.>>>

    What kind of evidence? Wouldn’t a measurement of the risk of death from prematurely released scientific articles depend more on where the article was published and how “sexy” it is? There are plenty of scientific articles published in veterinary journals that conclude you need to worm your dog with an arsenic wormer if it has worms. If those articles were published and then reviewed on the internet, I bet they would result in fewer dogs dying than under the current system, where these arsenic wormer articles are published in our veterinary journals. That’s because people could point out the studies that show the dogs are better off letting the worms die on their own than having someone try to kill them with arsenic. You cannot find that out reading the articles in a journal that has been reviewed and then published.

  38. JMB says:

    @Alison
    I would modify the proposed adoption of the physics paradigm of open electronic publishing with a method that might strengthen the traditional peer review process. An open peer review process following open electronic publication could replace the closed peer review process of the established journals. Once an article had passed the open peer review process, it could be submitted to the more established print journals for editorial review. The editorial review process might still reject the paper, but there would be no need for further peer review, and the editors would have fewer papers to review. An incentive to review can be provided if academic promotion and tenure committees count such peer-review comments as merit toward promotion.

    In regards to financing, it could be financed by selling advertising space on the web pages of the open review process. However, I would think that would be an ideal project for government finance. After all, we have the internet because the government financed the internet originally to facilitate physics research.

    @art
    I was looking for an online reference to the textbook that discussed the history, but then I found out that some of the same authors are still engaged in the argument. Rather than point my finger I would just give an example of the costs of being wrong. The recent USPSTF recommendations for screening mammography are for biennial screening beginning at age 50. In the supporting documents for the most current recommendations the specific results of an example computer model is given.

    http://www.uspreventiveservicestaskforce.org/uspstf09/breastcancer/brcanart.pdf

    From Table 4, annual breast cancer screening from age 40 to 69 is computed to save 8.3 cancer deaths per 1000 women. Also from table 4, biennial screening from age 50 to 69 is computed to save 5.4 cancer deaths per 1000 women. Now multiply 8.3/1000 by the number of women between age 40 and 69 in the USA, and 5.4/1000 by the number of women between 50 and 69. Take the difference in lives saved, and multiply by an estimate of compliance. The number you are looking at is the number of lives potentially saved by the ACS screening strategy versus the USPSTF screening strategy, over an average lifetime. To estimate the number of additional lives saved by the ACS strategy every year, take the current average life expectancy, subtract 40, then divide that number into the figure calculated above. You can pick the compliance rate, the census estimate of current number of women in the USA between age 40, 50, and 69. The USPSTF has provided the estimates of breast cancer deaths averted in their supporting article.
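
    JMB’s recipe can be sketched directly in Python. The 8.3 and 5.4 deaths averted per 1000 women come from Table 4 of the linked USPSTF article; the population counts, compliance rate, and life expectancy below are placeholder assumptions inserted purely for illustration, not census figures:

```python
# Back-of-envelope version of the calculation described above.
# Per-1000 figures are from Table 4 of the USPSTF supporting article;
# every other number is an assumed placeholder, not real census data.

deaths_averted_per_woman_acs = 8.3 / 1000     # annual screening, ages 40-69
deaths_averted_per_woman_uspstf = 5.4 / 1000  # biennial screening, ages 50-69

women_40_69 = 60_000_000  # assumed number of US women aged 40-69
women_50_69 = 40_000_000  # assumed number of US women aged 50-69
compliance = 0.65         # assumed screening compliance rate

# Difference in lifetime deaths averted between the two strategies
lifetime_difference = (deaths_averted_per_woman_acs * women_40_69
                       - deaths_averted_per_woman_uspstf * women_50_69) * compliance

# Spread over the years between age 40 and average life expectancy
life_expectancy = 80      # assumed average life expectancy
lives_per_year = lifetime_difference / (life_expectancy - 40)

print(f"{lifetime_difference:,.0f} additional deaths averted over a lifetime")
print(f"about {lives_per_year:,.0f} additional lives per year")
```

    With these invented inputs the difference is on the order of thousands of lives per year, which is the point of the comment; the real answer depends entirely on the population and compliance figures one plugs in.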

    Someone’s right, someone’s wrong. We are talking about a large number of lives. Of course, it looks like a large number in my perspective, perhaps not from other perspectives.

  39. Werdna says:

    “If those articles were published and then reviewed on the internet, I bet they would result in fewer dogs dying than under the current system, where these arsenic wormer articles are published in our veterinary journals.”

    Yes, yes the internet fixes everything…I’ve heard that before. But it seems to be a big leap of faith. Personally from my limited experience with things like Wikipedia I’d say that having a lot of eyes doesn’t really help when highly specialized knowledge and critical thinking need to be combined.

    Oh, and “Web 2.0” has little to do with this discussion.

  40. pmoran says:

    JMB “Someone’s right, someone’s wrong.”

    Not necessarily. Screening programs are shaped by how much society is prepared to pay per life saved, by the harm done to some by the investigation of false-positive findings, and by the additional costs of that investigation. Worse, there is some evidence of over-diagnosis and overtreatment from breast screening, i.e. some screen-detected cancers may not be life threatening.

    All of these are increased as you lower the age at which screening is started. There is no clearly correct answer, especially if the screening is at taxpayer expense. In private, as opposed to centrally controlled, systems of screening, the effects of all these factors will probably be exaggerated because of less strict quality control in many of the areas involved.

  41. David wrote:

    Another idea I’ve proposed before in debates about peer review is to go for full anonymity. In other words, reviewers are anonymous to the authors of manuscripts, and–here’s the change–the authors are anonymous to the reviewers.

    I recently reviewed a draft for a journal that practices such full anonymity. It was the first time I’d seen that, and in a way I found it refreshing. I liked that I was expected to comment exclusively on the material itself; I could not even be tempted by the ad hominem. Looked at from the author’s point of view, this is clearly preferable to the norm.

    On the other hand, I admit (with next-to-no sheepishness) to having been a little disturbed by not knowing at least some aspects of the authors’ identities. The reason—you’ll have to accept this vague assertion without proof, in order that I maintain confidentiality—is that there was a reasonably high probability that at least one of the authors was a member of a pseudoscientific cult. I have found it particularly galling, in the past few years, to see such individuals occasionally included among many other names in multi-authored articles, inevitably followed shortly thereafter by the trumpeting of such ‘authorship’ as a validation of the cult itself and of everything it stands for.

    Yes, if I’d known that one or more authors were representatives of said cult, I’d have been more inclined to ding that draft from the get-go, and for that I make no apologies.

    I can think of a couple of other examples in which I’d want to know the authors’ identities. There is a well-published author, sometimes cited right here on SBM, who launched his academic career by faking a higher degree. If I were asked to review one of his drafts I’d squash it every time (just as I take his published articles with a grain of salt), and I’d explain why. There are occasional authors, such as Scott Reuben and Marc Hauser, whose scientific misconduct had been suspected by peers well before it was exposed. If I were one of those peers, I’d want to know who the authors were.

    Still, all in all, I reckon if the reviewers are anonymous, so should be the authors.

  42. JMB says:

    @pmoran
    I agree with you that the decision of an individual to undergo screening mammography, or of a healthcare system to financially support screening mammography, is a value judgment, not a simple matter of right or wrong. Individuals should have their own choice based on their own values. Healthcare systems have to decide how to allocate resources.

    What I was referring to was a passage in a textbook about a disagreement between scientists that occurred in 1975, which initially resulted in steps to reduce the radiation exposure from mammograms in the community setting. Scientific opinions reported in the popular press in 1976 resulted in a mammogram radiation scare and a subsequent decrease in the utilization of screening mammography (in spite of the observation that 7% of community facilities were guilty of using excess radiation, not 100%).

    The following URL is the reference for the above paragraph. I am not sure this URL will work, but here it is from google books,

    http://books.google.com/books?id=keTzVe3lkMUC&pg=PA21&lpg=PA21&dq=mammogram+radiation+scare+1975&source=bl&ots=C0TVDVF3NR&sig=s3H9dyJY-OM2nhJvz_ZcSokq3H0&hl=en&ei=qDN3TLjlFoK0lQeys7XrCw&sa=X&oi=book_result&ct=result&resnum=1&ved=0CBIQ6AEwAA#v=onepage&q&f=false

    The referenced textbook is, “Diagnosis of diseases of the breast”
    By Lawrence Wayne Bassett
    from Elsevier Health Services

    http://books.google.com/url?client=ca-print-elsevier_health_sciences_us&format=googleprint&num=0&channel=BTB-ca-print-elsevier_health_sciences_us+BTB-ISBN:0721695639&q=http://www.elsevierhealth.com/title.cfm%3FISBN%3D9780721695631&usg=AFQjCNF6FwaGzqNnU6ragUdfsFJEtlzCQA&source=gbs_buy_s&cad=0

    Here is a reference to an analysis of yearly trends in breast cancer mortality in the US,

    http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1381049/pdf/amjph00504-0065.pdf

    This article discusses both the trends and possible contributing factors to the trends.
    “Recent trends in breast cancer mortality among white and black US women.”
    F Chevarley and E White

    This article notes that changes in breast cancer mortality following changes in the utilization of screening mammography will typically be delayed by 5 to 10 years. The article does not address changes in mammogram utilization rates between 1970, 1976, and 1987, though it does note that mammogram usage before 1984 was rare.

    So if the textbook is correct that there was a significant drop in the utilization of mammography after popular-press headlines claimed that mammography radiation causes more breast cancer than it helps cure, then the next question is whether the observed increase in breast cancer mortality between 1980 and 1988 was due to the mammogram radiation scare. Many of the responses of radiologists and oncologists that emphasized the observed decline of breast cancer mortality rates in arguing against the USPSTF’s recommendations in 2009 were, IMHO, an oblique reference to what happened in the past. So you can characterize the response as overdramatic, or as “oh no, here we go again.”

    There is a yes or no answer to the question of whether the dramatic drop in mammogram utilization rates between 1977 and 1984 was responsible for the 1% annual increase in the breast cancer mortality rate between 1980 and 1988 (an increase of thousands of breast cancer deaths over the decade). However, we may not have a scientifically valid answer yet, because the influences on population data are diverse. There is some unresolved argument about whether that was the case. The principal scientists are still around, some have made press appearances, and many are in a position to have influenced the USPSTF. If the Stanford model of breast cancer mortality rates and screening utilization is accurate in its predictions about changes in breast cancer mortality rates, and those screening recommendations are adhered to, we may know who was right and who was wrong in 5 to 10 years.

    I will be a boor and grind my axe again, but I think it is wrong for the US healthcare system to waste money on woo that won’t save a single life, yet be unwilling to spend the money to save 1 in 1904 women aged 40 to 50. The woman can decide whether the small chance of benefit is worth the risk. Now if the US healthcare system decides that money won’t be spent on woo, but that saving the life of 1 in 1904 is too expensive, then I lose my argument (but not my vote). Of course, women are guaranteed access to screening mammography between age 40 and 50 by an amendment to the original bill, but will they have access to yearly exams, or will the biennial limitation be sneaked in to save a lot of money?

    My apologies for hijacking the thread, once again.

Comments are closed.