In the question-and-answer session after my talk at the Lorne Trottier Public Science Symposium a week ago, a man named Leon Maliniak monopolized the first part of an already too-brief Q&A session by expounding on the supposed genius of Royal Rife. In response, I suggested that I would soon be doing a post about the Rife Machine. And so I probably will; such a post is long overdue at this blog, and I’m surprised that no one has done one in nearly three years. However, when I arrived back home in the Detroit area Tuesday evening, I was greeted by an article that, I believe, requires a timely response. (No, it wasn’t this article, although responding to it might be amusing, even though it’s a rant against me based on a post that is two and a half years old.) Rather, this time around the article is in the most recent issue of The Atlantic, and on the surface it appears to be yet another indictment of science-based medicine, this time in the form of a hagiography of Greek researcher John Ioannidis. The article, trumpeted by Tara Parker-Pope, comes under the heading of “Brave Thinkers” and is entitled Lies, Damned Lies, and Medical Science. It is being promoted in news stories like this one, spun as indicating that medical science is so flawed that even the cell-phone cancer data can’t be trusted:
Let me mention two things before I delve into the meat of the article. First, these days I’m not nearly as enamored of The Atlantic as I used to be. I was a long-time subscriber (at least 20 years) until last fall, when The Atlantic published an article on the H1N1 vaccine so egregiously bad that our very own Mark Crislip decided to annotate it in his own inimitable fashion. That article was so awful that I decided not to renew my subscription; it is to my shame that I didn’t find the time to write a letter to The Atlantic explaining why. Fortunately, this article isn’t as bad (it’s a mixed bag, actually, making some good points and then undermining some of them by overreaching), although it does lay on the praise for Ioannidis, and the attacks on SBM, a bit thick. Be that as it may, The Atlantic has clearly developed a penchant for “brave maverick doctors” and for using them to cast doubt on science-based medicine. Second, I actually happen to love John Ioannidis’ work, so much so that I’ve written about it at least twice over the last three years, including The life cycle of translational research and Does popularity lead to unreliability in scientific research?, where I introduced the topic using Ioannidis’ work. Indeed, I find nothing at all threatening to me as an advocate of science-based medicine in Ioannidis’ two most famous papers, Contradicted and Initially Stronger Effects in Highly Cited Clinical Research and Why Most Published Research Findings Are False. To me, the conclusions of these papers are akin to concluding that water is wet and that everybody dies. It is, however, quite good that Ioannidis is there to spell out these difficulties with SBM, because he tries to keep us honest.
A new study published in PLOS Biology looks at the potential magnitude and effect of publication bias in animal trials. Essentially, the authors conclude that there is a significant file-drawer effect (the failure to publish negative studies) in animal research, and that this effect impacts the translation of animal research into human clinical trials.
SBM is greatly concerned with the methodology of medical science. On one level, the methods of individual studies need to be closely analyzed for rigor and bias. But we also go to great pains to dispel the myth that individual studies can tell us much about the practice of medicine.
Reliable conclusions come from interpreting the literature as a whole, not just individual studies. Further, the whole of the literature is greater than the sum of its individual studies: there are patterns and effects in the literature itself that need to be considered.
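To see why the file-drawer effect is one of those literature-level patterns, consider a minimal simulation (a hypothetical sketch of my own, not taken from the PLOS Biology study; the effect size, sample sizes, and publication rates are made-up illustrative numbers). We simulate many small two-group studies of the same modest true effect, then "publish" all the statistically significant ones but only a fraction of the negative ones, and compare the published literature to the full set of studies:

```python
import random
import statistics

random.seed(42)

TRUE_EFFECT = 0.3   # hypothetical true treatment effect, in SD units (illustrative)
N_PER_GROUP = 20    # subjects per group in each simulated study (illustrative)
N_STUDIES = 5000

def run_study():
    """Simulate one two-group study; return the estimated effect and
    whether it reached nominal significance (|z| > 1.96, i.e. p < 0.05)."""
    control = [random.gauss(0.0, 1.0) for _ in range(N_PER_GROUP)]
    treated = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(N_PER_GROUP)]
    diff = statistics.mean(treated) - statistics.mean(control)
    se = (2.0 / N_PER_GROUP) ** 0.5  # standard error of the difference (SD = 1)
    return diff, abs(diff / se) > 1.96

all_effects = []
published = []
for _ in range(N_STUDIES):
    diff, significant = run_study()
    all_effects.append(diff)
    # The file-drawer effect: "positive" (significant) studies are always
    # published; negative studies make it out of the drawer only 20% of the time.
    if significant or random.random() < 0.2:
        published.append(diff)

print(f"true effect:                 {TRUE_EFFECT:+.3f}")
print(f"mean effect, all studies:    {statistics.mean(all_effects):+.3f}")
print(f"mean effect, published only: {statistics.mean(published):+.3f}")
```

Under these made-up assumptions, the full set of studies averages out near the true effect, while the published subset overestimates it, which is exactly why meta-analyses that see only the published literature can be systematically misled.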
Note: The reason I am posting today rather than in my usual Monday slot is that the article I discuss here was embargoed until last night. Consequently, I asked Harriet if she would trade days with me this week, and she was kind enough to do so.
One thing that science relies on almost absolutely is transparency. Because one of the most important aspects of science is the testing of new results by other investigators to see if they hold up, the diligent recording of scientific results is critical, and even more important is the publication of those results. Indeed, the most important peer review is not the peer review that occurs before publication. After all, that peer review usually consists of an editor and anywhere from one to four peer reviewers; most articles that I have published were reviewed by two or three. No, the most important peer review is what occurs after a scientist’s results are published, when all interested scientists in the field who read the article can look for weaknesses in methodology, data analysis, or interpretation. They can also attempt to replicate the work, usually as a prelude to trying to build on it.
Arguably, nowhere is this transparency quite as critical as in the world of clinical trials. The reason is that it is on the basis of these trials that medications are approved, treatments are chosen, and drugs become accepted as the standard of care; physicians and regulatory bodies alike rely on them. Moreover, there is also the issue of publication bias: “positive” trials, trials in which the study medication or treatment is found to be either efficacious compared to a placebo or more efficacious than the older drug or treatment it is meant to replace, are more likely to be published than negative trials.

That is why, more and more, steps are being taken to ensure that all clinical trial results are made publicly available. For example, federal law requires that all federally-funded clinical trials be registered at ClinicalTrials.gov at their inception, and peer-reviewed journals will not publish the results of a clinical trial if it hasn’t been registered there. Also, beginning September 27, 2008, the US Food and Drug Administration Amendments Act of 2007 (FDAAA) requires that clinical trial results be made publicly available on the Internet through an expanded “registry and results data bank,” described thusly: under FDAAA, enrollment and outcomes data from trials of drugs, biologics, and devices (excluding phase I trials) must appear in an open repository associated with the trial’s registration, generally within a year of the trial’s completion, whether or not these results have been published. There are, however, some practical issues with this law, for example determining how much information can be disseminated this way without constituting prior publication, which is normally grounds for disqualifying a manuscript from publication.