One of the most important aspects of science is the publication of scientific results in peer-reviewed journals. This publication serves several purposes, the most important of which is to communicate experimental results to other scientists, allowing them to replicate, build on, and in many cases find errors in the results. In the ideal situation, this communication results in the steady progress of science, as dubious results are discovered and sound results replicated and built upon. Of course, scientists being human and all, the actual process is far messier than that. In fact, it’s incredibly messy. Contrary to popular misconceptions about science, it doesn’t progress steadily and inevitably. Rather, it progresses in fits and starts, and most new scientific discoveries go through a period of uncertainty of varying length, with competing labs reporting conflicting results. Achieving consensus about a new theory can take anywhere from relatively little time (for example, the less than a decade it took for Marshall and Warren’s hypothesis that peptic ulcer disease is largely caused by H. pylori to be accepted, or the relatively rapid acceptance of Einstein’s Theory of Relativity) to much longer.
One of the pillars of science has traditionally been the peer review system. In this system, scientists submit their results to journals for publication in the form of manuscripts. Editors send these manuscripts out to other scientists, who review them and decide whether the science is sound, the methods are appropriate, and the conclusions are justified by the data presented. This step of the process is very important, because if editors don’t choose reviewers with the appropriate expertise, then serious errors in review may occur. Also, if editors choose reviewers with biases so strong that they can’t be fair, then science that challenges those reviewers’ biases may never see print in their journals. The same thing can occur with grant applications. At the NIH, for instance, the scientists running study sections must be even more careful in choosing scientists to serve on their study sections and review grant applications, not to mention picking which scientists review which grants. Biases in reviewing papers are one thing; biases in reviewing grant applications can result in the denial of funding to worthy projects in favor of less worthy ones that happen to correspond to the biases of the reviewers.
I’ve discussed peer review from time to time, although perhaps not as often as I should. My view tends to be that, to paraphrase Winston Churchill’s invocation of a famous quote about democracy, peer review is the worst way to weed out bad science and promote good science, except for all the others that have been tried. One thing’s for sure: if there’s a sine qua non of an anti-science crank, it’s that he will attack peer review relentlessly, as HIV/AIDS denialist Dean Esmay did. Indeed, in the case of Medical Hypotheses, the lack of peer review let the cranks run free to the point where even Elsevier couldn’t ignore it any more. Peer review may have a lot of defects and blind spots, but the lack of peer review is even worse. It’s no wonder that cranks of all stripes loved Medical Hypotheses.