Background: the distinction between EBM and SBM
An important theme on the Science-Based Medicine blog, and the very reason for its name, has been its emphasis on examining all the evidence—not merely the results of clinical trials—for various claims, particularly for those that are implausible. We’ve discussed the distinction between Science-Based Medicine (SBM) and the more limited Evidence-Based Medicine (EBM) several times, for example here (I began my own discussion here and added a bit of formality here, here, and here). Let me summarize by quoting John Ioannidis:
…the probability that a research finding is indeed true depends on the prior probability of it being true (before doing the study), the statistical power of the study, and the level of statistical significance.
EBM, in a nutshell, ignores prior probability† (unless there is no other available evidence) and falls for the “p-value fallacy”; SBM does not. Please don’t bicker about this if you haven’t read the links above and some of their own references, particularly the EBM Levels of Evidence scheme and two articles by Steven Goodman (here and here). Also, note that it is not necessary to agree with Ioannidis that “most published research findings are false” to agree with his assertion, quoted above, about what determines the probability that a research finding is true.
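Ioannidis’s point can be made concrete with a back-of-the-envelope calculation. His 2005 essay gives the post-study probability that a “statistically significant” finding is true as PPV = (1 − β)R / (R − βR + α), where R is the pre-study odds of the hypothesis and β is the type II error rate. The sketch below (in Python) applies that formula under conventional assumptions of 80% power and α = 0.05, with no bias or multiple testing; the two prior probabilities are illustrative numbers, not figures from the essay.

```python
def ppv(prior, power=0.8, alpha=0.05):
    """Post-study probability that a 'significant' finding is true,
    per Ioannidis (2005): PPV = (1 - beta)*R / (R - beta*R + alpha),
    where R = prior/(1 - prior) is the pre-study odds and beta = 1 - power.
    Assumes no bias and a single, well-powered test."""
    R = prior / (1 - prior)
    beta = 1 - power
    return (1 - beta) * R / (R - beta * R + alpha)

# Illustrative priors: a reasonably plausible hypothesis vs. a
# vanishingly implausible one (e.g., a claim that violates basic physics).
print(round(ppv(0.5), 3))    # plausible claim: ≈ 0.941
print(round(ppv(0.001), 3))  # implausible claim: ≈ 0.016
```

The same p < 0.05 that makes a plausible claim very probably true leaves a vanishingly implausible one almost certainly false, which is precisely why ignoring prior probability, as EBM’s evidence hierarchy does, invites the “p-value fallacy.”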
The distinction between SBM and EBM has important implications for medical practice ethics, research ethics, human subject protections, allocation of scarce resources, epistemology in health care, public perceptions of medical knowledge and of the health professions, and more. EBM, as practiced in the 20 years of its formal existence, is poorly equipped to evaluate implausible claims because it fails to acknowledge that even if scientific plausibility is not sufficient to establish the validity of a new treatment, it is necessary for doing so.
Thus, in their recent foray into applying the tools of EBM to implausible health claims, government and academic investigators have made at least two serious mistakes: first, they have subjected unwary subjects to dangerous but unnecessary trials in a quest for “evidence,” failing to realize that definitive evidence already exists; second, they have been largely incapable of pronouncing ineffective methods ineffective. At best, even after conducting predictably disconfirming trials of vanishingly unlikely claims, they have declared such methods merely “unproven,” almost always urging “further research.” That may be the proper EBM response, but it is a far cry from reality. As I opined a couple of years ago, the founders of the EBM movement apparently “never saw ‘CAM’ coming.”