NB: This is a partial posting; I was up all night ‘on-call’ and too tired to continue. I’ll post the rest of the essay later…
This is the fourth and final part of a series-within-a-series* inspired by statistician Steve Simon. Professor Simon had challenged the view, held by several bloggers here at SBM, that Evidence-Based Medicine (EBM) has been mostly inadequate to the task of reaching definitive conclusions about highly implausible medical claims. In Part I, I reiterated a fundamental problem with EBM, reflected in its Levels of Evidence scheme: although it correctly recognizes basic science and other pre-clinical evidence as insufficient bases for introducing novel treatments into practice, it fails to acknowledge that they are nevertheless necessary ones. I explained the difference between “plausibility” and “knowing the mechanism.”
I showed, with several examples, that in the EBM lexicon the word “evidence” refers almost exclusively to the results of clinical trials: thus, when faced with equivocal clinical trials of some highly implausible claim—or with no trials at all—EBM practitioners typically declare that there is “not enough evidence” either to accept or to reject the claim, and call for more trials, although in many cases there is abundant evidence other than clinical trials that conclusively refutes the claim. I rejected Prof. Simon’s assertion that we at SBM want to “give (EBM) a new label,” making the point that we only want it to live up to its current label by considering all the evidence. I doubted Prof. Simon’s contention that “people within EBM (are) working both formally and informally to replace the rigid hierarchy with something that places each research study in context.”
In Part II I responded to the widely held assertion, shared by Prof. Simon, that there is “societal value in testing (highly implausible) therapies that are in wide use.” I made it clear that I don’t oppose simple tests of basic claims, such as the Emily Rosa experiment, but I noted that EBM reviewers, including those employed by the Cochrane Collaboration, typically ignore such tests. What I oppose are large efficacy trials of such claims and the public funding of those trials. I argued that the popularity gambit has resulted in human subjects being exposed to dangerous and unethical trials, and I quoted language from ethics treatises specifically contradicting the assertion that popularity justifies such trials. Finally, I showed that the alleged popularity of most “CAM” methods—as irrelevant as it may be to the question of human studies ethics—has been greatly exaggerated.