A point I make over and over again when talking about new or alternative therapies that are not supported by good clinical trial evidence is that lower-level evidence, such as theoretical justifications, anecdotes, and pre-clinical research like in vitro studies and animal model testing, can only be suggestive, never reliable proof of safety or efficacy. Evaluating a new therapy that does not yet have clinical evidence behind it begins with showing a plausible theory for why it might work, and then demonstrating that it actually could work through pre-clinical research, including biochemistry, cell culture, and animal models. This sort of supporting pre-clinical evidence is what we mean when we talk about the “prior plausibility” of a clinical study. But this kind of evidence alone is not sufficient to justify using the therapy in real patients except under experimental conditions, or when the urgency to intervene is great enough to outweigh the significant uncertainty about the effects of the intervention.
In support of this conclusion, we can consider the inherent unreliability of individual human judgment and the many ways in which inadequately controlled research can mislead us. We can also reflect on how promising results in early trials often melt away when larger, more rigorous studies are done that better control for bias (the so-called Decline Effect). And it is not at all difficult to compile a long list of examples of the harm that inadequately studied medical interventions can cause.
But what I’d like to do here is focus on a particularly good specific example of why thorough clinical trial evaluation of promising ideas is not just a nice extra to confirm what we already believe to be true; it is the only way to genuinely know whether our treatments do more good than harm.