During the most recent kerfuffle about whether or not Evidence-Based Medicine can legitimately claim to be science-based medicine, it became clear to me that a whole new round of discussion and documentation is necessary. This is frustrating because I’ve already done it several times, most recently less than a year ago. Moreover, I’ve provided a table of links to the whole series at the bottom of each post*…Never mind, here goes, and I hope this will be the last time it is necessary, because I’ll try to make this the “go to” series of posts for any future such confusions.
The points made in this series, most of which link to posts in which I originally made them, are in response to arguments from statistician Steve Simon, whose essay, Is there something better than Evidence Based Medicine out there?, was the topic of Dr. Gorski’s rebuttal on Monday of this week, and also from several of the comments following that rebuttal. Mr. Simon has since revised his original essay to an extent, which I’ll take into account. I’ll frame this as a series of assertions by those who doubt that EBM is deficient in the ways that we at SBM have argued, followed in each case by my response.
First, a disclaimer: I don’t mean to gang up on Mr. Simon personally; others hold opinions similar to his, but his essay just happens to be a convenient starting point for this discussion. FWIW, prior to this week I perused a bit of his blog, after having read one of his comments here, and found it to be well written and informative.
What’s in a Name?
One of Mr. Simon’s objections, in his revision, is this:
What is SBM? Here’s a definition found on the opening entry in the SBM blog:
“the use of the best scientific evidence available, in the light of our cumulative scientific knowledge from all relevant disciplines, in evaluating health claims, practices, and products.” https://www.sciencebasedmedicine.org/?p=1
But how does this differ from David Sackett’s definition of EBM?
“the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients.” http://www.bmj.com/content/312/7023/71.full
The only substantial difference I see is the adjective “scientific” that appears twice in the definition of SBM. The claim on the SBM blog is that EBM ignores scientific plausibility. Actually, ignores is too strong a word.
“EBM ‘levels of evidence’ hierarchy renders each entry sufficient to trump those below it. Thus a ‘positive’ clinical trial is given more weight than ‘physiology, bench research or “first principles”,’ even when the latter definitively refute the claim.” https://www.sciencebasedmedicine.org/?p=42
(I agree that “ignore” is too strong a word, but I didn’t actually write it that way, as Dr. Gorski pointed out and as I think Mr. Simon was acknowledging above.)
A difference between Sackett’s definition and ours is that by “current best evidence” Sackett means the results of RCTs. I realize that this assertion requires documentation, which will come below. A related issue is the definition of “science.” In common use the word has at least three distinct meanings: 1. The scientific pursuit, including the collective institutions and individuals who “do” science; 2. The scientific method; 3. The body of knowledge that has emerged from that pursuit and method (I’ve called this “established knowledge”; Dr. Gorski has called it “settled science”).
I will argue that when EBM practitioners use the word “science,” they are overwhelmingly referring to a small subset of the second definition: RCTs conceived and interpreted by frequentist statistics. We at SBM use “science” to mean both definitions 2 and 3, as the phrase “cumulative scientific knowledge from all relevant disciplines” should make clear. That is the important distinction between SBM and EBM. “Settled science” refutes many highly implausible medical claims—that’s why they can be judged highly implausible. EBM, as we’ve shown and will show again here, mostly fails to acknowledge this fact.
Finally, Mr. Simon has misinterpreted our goal at SBM:
But if someone wants to point out that EBM needs work, I’m fine with that. I dislike that they think that EBM needs to be replaced with something better.
You see EBM as being wrong often enough that you see value in creating a new label, SBM. I see SBM as being that portion of EBM that is being done thoughtfully and carefully, and don’t see the need for a new label.
I generally bristle when people want to create a new and improved version of EBM and then give it a new label.
I am as harshly critical of the hierarchy of evidence as anyone. I see this as something that will self-correct over time, and I see people within EBM working both formally and informally to replace the rigid hierarchy with something that places each research study in context. I’m staying with EBM because I believe that people who practice EBM thoughtfully do consider mechanisms carefully. That includes the Cochrane Collaboration.
Mr. Simon, we agree! Yes, we are pointing out that EBM needs work. Yes, SBM is that (tiny) portion of EBM that is being done thoughtfully and carefully, and if it were mainly done that way there would be no need to call attention to the point. Our goal is not to change the name of EBM (“give it a new label”). Our goal is to convince EBM to live up to its current name. Yes, it may self-correct over time, but we are trying to shorten that time. Bad things have unnecessarily happened, in part due to EBM’s scientific blind spot: as currently practiced, it doesn’t rationally consider all the evidence. We don’t see much evidence that people at the highest levels of EBM, e.g., Sackett’s Center for EBM or Cochrane, are “working both formally and informally to replace the rigid hierarchy with something that places each research study in context.”
We chose to call our blog “science-based medicine” only because the term “evidence-based medicine” had already been taken, and we needed to distinguish ourselves from the inaccurate use of the word “evidence” in “EBM.” I’ve written about this before, and have made the point utterly clear:
These are the reasons that we call our blog “Science-Based Medicine.” It is not that we are opposed to EBM, nor is it that we believe EBM and SBM to be mutually exclusive. On the contrary: EBM is currently a subset of SBM, because EBM by itself is incomplete. We eagerly await the time that EBM considers all the evidence and will have finally earned its name. When that happens, the two terms will be interchangeable.
Mr. Simon’s interpretation of our view of plausibility, like that of many others, is wrong:
I would argue further that it is a form of methodolatry to insist on a plausible scientific mechanism as a pre-requisite for ANY research for a medical intervention. It should be a strong consideration, but we need to remember that many medical discoveries preceded the identification of a plausible scientific mechanism.
I think, from his revision, that Mr. Simon understood Dr. Gorski’s explanation of why this was wrong, but I’m not certain. The misrepresentation of scientific plausibility is an issue that I’ve faced for years, as explained previously here:
Plausibility ≠ Knowing the Mechanism
Let’s first dispense with a simple misunderstanding: We, by which I mean We Supreme Arbiters of Plausibility (We SAPs) here at SBM, do not require knowing the mechanism of some putative effect in order to deem it plausible. This seems so obvious that it ought not be necessary to repeat it over and over again, and yet the topic can’t be broached without some nebbishy South Park do-gooder chanting a litany of “just because you don’t know how it works doesn’t mean it can’t work,” as if that were a compelling or even relevant rebuttal. Let’s get this straight once and for all: IT ISN’T.
Steve Novella explained why at the Yale conference and again here. We talked about it at TAM7 last summer. For a particularly annoying example, read the three paragraphs beginning with “Mr. Gagnier’s understanding of biological plausibility” here.
OK, I’ll admit that I’m beginning to learn something from such frustration. Perhaps we’ve not been so good at explaining what we mean by plausibility. The point is not that we don’t know a particular mechanism for homeopathy, for example; the point is that any proposed mechanism would necessarily violate scientific principles that rest on far more solid ground than any number of equivocal, bias-and-error-prone clinical trials could hope to overturn. The same is true for “energy medicine” and for claims based on non-existent anatomical structures (iridology, reflexology, auricular acupuncture, meridians, chiropractic “subluxations”), non-existent physiologic functions (“craniosacral rhythms“), or non-existent anatomic-physiologic relations (“neurocranial restructuring,” “detoxification” with coffee enemas, dissolving tumors with orally administered pancreatic enzymes). The spectrum of implausible health claims euphemistically dubbed “CAM” is full of such nonsense.
Reader daedalus2u proposed a useful way to clarify the point:
I think the idea of prior plausibility should actually be reframed into one of a lack of prior implausibility. It isn’t that one should have reasons to positively think that something is plausible before testing it, but rather that one should not be able to come up with reasons (actually data) why it is fatally implausible.
Some of what We deem implausible will not be fatally so, of course. Implausibility can be based not only on established physical and biological knowledge, but also on studies, as is the case for sticking needles into people, injecting them with chelating agents, or claiming that autism is caused by childhood immunizations.
EBM, Basic Science, and RCTs
Steve Simon wrote, “I have not seen any serious evidence of EBM relying exclusively on RCTs. That’s certainly not what David Sackett was proposing in the 1996 BMJ editorial…” And: “No thoughtful practitioner of EBM, to my knowledge, has suggested that EBM ignore scientific mechanisms.”
Want serious evidence? Consider these quotations from Cochrane reviews, originally posted here:
In view of the absence of evidence it is not possible to comment on the use of homeopathy in treating dementia.
There is not enough evidence to reliably assess the possible role of homeopathy in asthma. As well as randomised trials, there is a need for observational data to document the different methods of homeopathic prescribing and how patients respond.
There is currently little evidence for the efficacy of homeopathy for the treatment of ADHD. Development of optimal treatment protocols is recommended prior to further randomised controlled trials being undertaken.
Though promising, the data were not strong enough to make a general recommendation to use Oscillococcinum for first-line treatment of influenza and influenza-like syndromes. Further research is warranted but the required sample sizes are large.
Yes, EBM undervalues basic science and overvalues RCTs when the former is sufficient to reject a claim. EBM also undervalues experimental evidence other than RCTs when such evidence is sufficient to reject a claim, as will be discussed. Here is how a truly evidence-based review might conclude a discussion of homeopathy for dementia:
The probability that homeopathy is specifically therapeutic for dementia is, for all practical purposes, zero.
The following is from my first post on the topic, in which I reviewed the overwhelming evidence—from basic science and pre-clinical research—that homeopathic ‘remedies’ have no specific therapeutic actions, and wondered why the most esteemed exponents of EBM have written that such treatments are “promising” and that “further randomized trials are needed.” I included the Center for Evidence-based Medicine’s formal “Levels of Evidence” scheme (not copied here), the pertinent quotation from Sackett’s 1996 editorial, my opinion that this failure of EBM was initially unintended, how Sackett et al eventually did address “CAM,” and the Cochrane abstracts quoted above:
It wasn’t meant to be like this. When I first discussed with my fellow bloggers the curious absence of established knowledge in the EBM “levels of evidence” hierarchy, at least one insisted that this could not be true, and in a sense he was correct. David Sackett and other innovators of EBM do include basic science in their discussions, but they recommend invoking it only when there are no clinical trials to consider:
Evidence based medicine is not restricted to randomised trials and meta-analyses. It involves tracking down the best external evidence with which to answer our clinical questions…And sometimes the evidence we need will come from the basic sciences such as genetics or immunology. It is when asking questions about therapy that we should try to avoid the non-experimental approaches, since these routinely lead to false positive conclusions about efficacy. Because the randomised trial, and especially the systematic review of several randomised trials, is so much more likely to inform us and so much less likely to mislead us, it has become the “gold standard” for judging whether a treatment does more good than harm.
That statement is consistent with EBM’s formal relegation of established knowledge to “level 5,” as seen in the Figure. I am not a historian of EBM and don’t care to be, but I suspect that the explanation for this choice is that “they never saw ‘CAM’ coming.” In other words, it probably didn’t occur to Sackett and other EBM pioneers that anyone would consider performing clinical trials of methods that couldn’t pass muster with scientific plausibility. Their primary concern was to emphasize the insufficiency of basic science evidence in determining the safety and effectiveness of new treatments. In that they were quite correct, but trials of “CAM” have since reminded us that although established knowledge may be an insufficient basis for accepting a treatment claim, it is still a necessary one.
Take note: Sackett wrote, “we should try to avoid the non-experimental approaches, since these routinely lead to false positive conclusions about efficacy.” My point is that pre-RCT evidence does not routinely (if ever) lead to false negative conclusions. In that passage, moreover, Sackett seems to suggest that the only alternative to a “non-experimental approach” is an RCT; yet there are often other types of experiments that can definitively refute treatment claims, as will be discussed. Eventually Sackett et al did catch wind of “CAM,” but they got it exactly wrong:
Lacking that perspective, Sackett’s Center for Evidence-Based Medicine promulgates an “Introduction to evidence-based complementary medicine” by “CAM” researcher Andrew Vickers. There is not a mention of established knowledge in it, although there are references to several claims, including homeopathy, that are refuted by things that we already know. Vickers is also on the advisory board of the Cochrane CAM Field, along with Wayne Jonas and several other “CAM” enthusiasts.
In another post I cited the 2006 Cochrane Review of Laetrile:
A 2006 Cochrane Review of Laetrile for cancer would, if its recommendations were realized, stand the rationale for RCTs on its head:
The most informative way to understand whether Laetrile is of any use in the treatment of cancer, is to review clinical trials and scientific publications. Unfortunately no studies were found that met the inclusion criteria for this review.
The claim that Laetrile has beneficial effects for cancer patients is not supported by data from controlled clinical trials. This systematic review has clearly identified the need for randomised or controlled clinical trials assessing the effectiveness of Laetrile or amygdalin for cancer treatment.
Why does this stand the rationale for RCTs on its head? A definitive case series led by the Mayo Clinic in the early 1980s had overwhelmingly demonstrated, to the satisfaction of all reasonable physicians and biomedical scientists, that not only were the therapeutic claims for Laetrile baseless, but that the substance is dangerous. The subjects did so poorly that there would have been no room for a meaningful advantage in outcome with active treatment compared to placebo or standard treatment… The Mayo case series “closed the book on Laetrile,” the most expensive health fraud in American history at the time, only to have it reopened more than 20 years later by well-meaning Cochrane reviewers who seemed oblivious of the point of an RCT.
Is that review not serious evidence that the Cochrane Collaboration overvalues RCTs? In this case, moreover, it wasn’t only basic science that Cochrane ignored, but a definitive piece of clinical research that was not an RCT. Sure, I know that Cochrane is not the only pinnacle of EBM, but it’s one of them.
In both that post and another, I called attention to a statement that Edzard Ernst, the most prolific EBM-style “CAM” researcher of the past 20 years, had made in 2003:
A couple of years ago I was surprised to find that one of the authors of [the Cochrane Laetrile] review was Edzard Ernst, a high-powered academic who over the years has undergone a welcomed transition from cautious supporter to vocal critic of much “CAM” research and many “CAM” methods. He is now a valuable member of our new organization, the Institute for Science in Medicine, and we are very happy to have him. I believe that his belated conversion to healthy skepticism was due, in large part, to his allegiance to the formal tenets of EBM. I recommend a short debate published in 2003 in Dr. Ernst’s Focus on Alternative and Complementary Therapies (FACT), pitting Jacqueline’s countryman Cees Renckens against Dr. Ernst himself. Dr. Ernst responded to Dr. Renckens’s plea to apply science to “CAM” claims with this statement:
In the context of EBM, a priori plausibility has become less and less important. The aim of EBM is to establish whether a treatment works, not how it works or how plausible it is that it may work. The main tool for finding out is the RCT. It is obvious that the principles of EBM and those of a priori plausibility can, at times, clash, and they often clash spectacularly in the realm of CAM.
I’ve discussed that debate before on SBM, and I consider it exemplary of what is wrong with how EBM weighs the import of prior probability. Dr. Ernst, if you are reading this, I’d be interested to know whether your views have changed. I hope that you no longer believe that human subjects ought to be submitted to a randomized, controlled trial of Laetrile!
Uh, talk about “suggesting that EBM ignore scientific mechanisms”! When the principles of EBM and those of a priori plausibility clash spectacularly in the realm of CAM, it is a priori plausibility that should take precedence—not merely because the latter renders RCTs unnecessary, but because for such questions RCTs tend to confuse rather than clarify, as will be discussed further in the next part of this series.
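The statistical reasoning behind this point can be made concrete with a toy Bayes’ theorem calculation. The numbers below are illustrative assumptions on my part, not figures from the post: a conventional significance threshold (α = 0.05), typical statistical power (0.8), and two hypothetical prior probabilities—one for a plausible treatment, one for a claim that contradicts settled science:

```python
# Sketch: why a "positive" RCT of a highly implausible treatment is
# almost certainly a false positive. All numbers here are assumptions
# chosen for illustration, not data from any actual trial.

def posterior_given_positive_trial(prior, alpha=0.05, power=0.8):
    """P(treatment works | trial is 'positive'), by Bayes' theorem.

    prior: probability the treatment works, before the trial
    alpha: false-positive rate of the trial (significance threshold)
    power: probability the trial detects a real effect
    """
    true_positive = power * prior           # real effect, and the trial finds it
    false_positive = alpha * (1 - prior)    # no effect, but p < alpha by chance
    return true_positive / (true_positive + false_positive)

# A plausible new drug (hypothetical 50% prior):
print(posterior_given_positive_trial(prior=0.5))    # ≈ 0.94

# A claim refuted by settled science (hypothetical one-in-a-million prior):
print(posterior_given_positive_trial(prior=1e-6))   # ≈ 0.000016
```

Under these assumed numbers, a “positive” trial of the implausible treatment still leaves its probability of working at roughly 0.0016%—the “positive” result is noise, which is exactly the sense in which such RCTs confuse rather than clarify.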
I am happy to report that Dr. Ernst wrote me privately about that passage, with the answer that I’d mostly hoped for:
Have I changed my mind? I am not as sure as the sceptics seem to be that I ever was a supporter of CAM and I am still a bit sceptic about the sceptics [which perhaps makes me the “ueber-sceptic”]. Would I argue for more Laetrile studies? NO.
Even more to the point, perhaps, is a recent editorial by Dr. Ernst in which he calls homeopathy “absurd” and compares it to other obvious absurdities, which I doubt he’d have done only a few years ago:
Should we keep an open mind about astrology, perpetual motion, alchemy, alien abduction, and sightings of Elvis Presley? No, and we are happy to confess that our minds have closed down on homeopathy in the same way.
This kind of clear thinking, as easy as it ought to be for intelligent people, seems oddly difficult for those steeped in EBM. I’ll offer another example in part 2, as part of my response to Mr. Simon’s assertion that “There is some societal value in testing therapies that are in wide use, even though there is no scientifically valid reason to believe that those therapies work.”
*The Prior Probability, Bayesian vs. Frequentist Inference, and EBM Series:
16. What is Science?