
On Friday, you might have noticed that Mark Crislip hinted at a blog post to come. This is that blog post. He knew it was coming because, when I saw the article that inspired it, I sent an e-mail to my fellow bloggers marking out my territory like a dog peeing on every tree, or protecting my newfound topic like a mother bear protecting her cubs. In other words, I was telling them all to back off. This article is mine.

Mine! Mine! Mine! I tell you!

My extreme territorial tendencies on this issue (even towards my friends and colleagues) aside, if you read Mark’s post (and if you didn’t, go back and read it now; seriously, go now), you might also remember that he was discussing a “reality bias” in science-based medicine (SBM), a bias that we like to call prior plausibility. In brief, positive randomized clinical trials (RCTs) testing highly implausible treatments are far more likely to be false positives than RCTs testing more plausible treatments. That is the lesson that John Ioannidis has taught us and that I’ve written about multiple times before, as have other SBM bloggers, most prominently Kimball Atwood, although nearly all of us have chimed in at one time or another on this issue.

Apparently a homeopath disagrees and expressed his disagreement in an article published last week online in Medicine, Health Care, and Philosophy entitled Plausibility and evidence: the case of homeopathy. You’ll get an idea of what it is that affected us at SBM like the proverbial matador waving his cape in front of a bull by reading this brief passage from the abstract:

Prior disbelief in homeopathy is rooted in the perceived implausibility of any conceivable mechanism of action. Using the ‘crossword analogy’, we demonstrate that plausibility bias impedes assessment of the clinical evidence. Sweeping statements about the scientific impossibility of homeopathy are themselves unscientific: scientific statements must be precise and testable.

Scientific. You keep using that word. I do not think it means what you think it means. Of course, his being a homeopath is about as close to a guarantee as I can think of that a person doesn’t have the first clue what is and is not scientific. If he did, he wouldn’t be a homeopath. Still, this particular line of attack is often effective, whether wielded by a homeopath or another CAM apologist. After all, why not test these therapies in human beings and see if they work? What’s wrong with that? Isn’t it “closed-minded” to claim that scientific considerations of prior plausibility consign homeopathy to the eternal dustbin of pseudoscience?

Not at all. There’s a difference between being open-minded and being so “open-minded” that your brains threaten to fall out. Guess which category homeopaths like Rutten fall into. But to hear them tell it, homeopathy is rejected because we scientists have a “negative plausibility bias” towards it. At least, that’s what Rutten and some other homeopaths have been trying to convince us of. This article seems to be an attempt to put some meat on the bones of their initial trial balloon of this argument, published last summer, which Steve Novella duly deconstructed.

Before I dig in, however, I think it’s necessary for me to “confess” my bias and why I think it should be your bias too.

In which I confess my bias

Regular readers might have noticed that we write about homeopathy a lot on this blog. You might wonder why. Indeed, sometimes I myself wonder why. After all, if you were to come up with a list of the top three most ridiculous alternative medicine modalities with a large following, homeopathy would almost always be on it, along with energy healing modalities (such as reiki) and a third nutty modality to be named later, whose identity I’ll leave to the reader, given that there is likely to be some disagreement about it.

In any case, among highly implausible alternative medicine “healing systems,” homeopathy is at or near the top of the heap, reigning supreme. Consider its twin pillars of “like cures like” and the law of infinitesimals: the former says that to relieve a set of symptoms you choose a remedy that causes those symptoms in healthy people, and the latter says that those “like” remedies get stronger when they are serially diluted, but only if they are vigorously shaken, or “succussed,” between each step. The first principle has no basis in physiology, pharmacology, biochemistry, or medicine (the claims of homeopaths to co-opt a real phenomenon known as hormesis notwithstanding), while the second principle so thoroughly violates the laws of chemistry and physics that, for it to be true, huge swaths of these disciplines that have been well established through hundreds of years of experimentation and observation would have to be not just wrong, but spectacularly wrong. One must concede that it’s possible that this latter principle might be true, but the odds that it is are about as infinitesimal as the amount of starting remedy in a 30C homeopathic remedy. (That’s a 1 in 10^60 chance, for those not familiar with homeopathy.) For all practical intents and purposes, the chance that homeopathy can work is zero. It is just water with its believer’s magical intent imagined into it.

So ridiculous is homeopathy that I sometimes feel that I and my fellow supporters of SBM are firing Howitzers at an ant when we take so much time and effort to explain why homeopathy is nonsense. On the other hand, it is homeopathy’s monumental lack of scientific plausibility that makes it a perfect teaching tool for explaining the difference between science-based medicine (SBM) and evidence-based medicine (EBM). Specifically, clinical trials have unavoidable shortcomings and biases, and even a threshold of p=0.05, commonly (mis)interpreted as meaning that there is only about a 5% chance that a given trial’s apparently positive result is due to random chance alone, provides far less assurance than that. As John Ioannidis has taught us, in clinical trials as practiced in the real world, the chance that any given positive trial is a false positive is much higher. As explained in so much detail by Kimball Atwood, that also means that the lower the prior plausibility of a remedy working, the higher the chance of false positive trials. This is exactly what we see in homeopathy, hence the panoply of homeopathy trials showing “positive” results in which the treatment group is barely different from the control and/or the results barely reach statistical significance. With something like homeopathy, which violates the laws of so many sciences, it is relatively easy to make the case that it takes a lot more than a few equivocal clinical trials to show that so much well-established science is wrong. Apparently positive clinical trials of homeopathy are measuring, in essence, the noise inherent in doing clinical trials.
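To put rough numbers on that point, here is a minimal sketch, entirely my own illustration rather than anything from Rutten et al or from Ioannidis’s papers, of how prior plausibility determines the chance that a “positive” trial reflects a real effect. The 80% power figure and the listed priors are assumptions chosen purely for the example.

# Sketch: how prior plausibility affects the chance that a "positive"
# trial (p < 0.05) reflects a real effect. Illustrative numbers only.

def prob_true_positive(prior, power=0.80, alpha=0.05):
    """P(effect is real | trial is 'positive'), by Bayes' theorem."""
    true_pos = prior * power          # real effect, and the trial detects it
    false_pos = (1 - prior) * alpha   # no effect, but the trial is "positive" by chance
    return true_pos / (true_pos + false_pos)

for prior in (0.5, 0.1, 0.01, 1e-6):
    print(f"prior plausibility {prior:>8}: "
          f"P(real | positive trial) = {prob_true_positive(prior):.6f}")

With a reasonably plausible treatment (prior around 0.5), a positive trial probably reflects something real; with a prior anywhere near as low as basic science implies for homeopathy, essentially every “positive” trial is a false positive.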

Although most physicians and clinical investigators don’t think about it consciously, they tend to have a bias in evidence-based medicine for plausible hypotheses and treatments and against implausible hypotheses. This bias is certainly not inherent in EBM, as we have described many times before. EBM, after all, relegates basic science considerations to the very bottom rung of its ladder of evidence, below even expert opinion. Clinical trial evidence and epidemiology are all, and, although EBM aficionados deny it, EBM as it is actually practiced does appear to worship the randomized clinical trial (RCT) above all else. In fact, the “plausibility bias” that most physicians have often manifests itself as difficulty believing that there is a problem with EBM, that EBM can go so far off the rails when it comes to CAM because it really has no mechanism to take plausibility into account. Indeed, it’s been speculated right here on this very blog that the reason prior plausibility is not built into EBM, so to speak, is that the founders of EBM suffered from plausibility bias themselves. They assumed that treatments would not reach the stage of large RCTs if they had not first proven themselves plausible through preclinical evidence in laboratory studies, animal experiments, and studies of pathology and lab tests. Under this view, it simply never occurred to the gods of EBM that something as ridiculous as homeopathy could reach the stage of RCTs, because they suffered from a plausibility bias that blinded them to the very possibility of that happening!

Whether that’s true or not, I don’t know, but it would explain a lot. Either way, as we have pointed out, SBM tries to restore to EBM what it is missing: a consideration of prior plausibility based on scientific considerations. In practice, this is more useful for eliminating incredibly implausible treatments, such as homeopathy and reiki, than it is for putting hard numbers on prior plausibility, because it is not usually necessary to estimate a precise pre-trial probability of success, except when that probability is so low that it would take an incredible amount of evidence to overturn existing knowledge, as it would for homeopathy or reiki. Here’s my plausibility bias: for something like homeopathy or reiki, either of which would require the rewriting of huge swaths of science to become plausible, I consider it reasonable to require supporting evidence of at least the same order of magnitude in quantity and quality as the evidence showing that homeopathy or reiki cannot work before starting to think that either could work. Or, to put it much more simply, extraordinary claims require extraordinary evidence.
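One way to see what “extraordinary evidence” means quantitatively: in Bayesian terms, posterior odds equal prior odds times the likelihood ratio supplied by the new evidence, so the strength of evidence required scales inversely with the prior. The sketch below is my own illustration with arbitrary toy priors, not a calculation from the post or from the paper.

# Sketch: how strong the evidence (likelihood ratio, or Bayes factor) must be
# to move a hypothesis from a given prior probability to a 50% posterior.
# The prior values are arbitrary toy numbers chosen for illustration.

def required_bayes_factor(prior, target_posterior=0.5):
    """Likelihood ratio needed so that prior odds * LR reaches the target posterior odds."""
    prior_odds = prior / (1 - prior)
    target_odds = target_posterior / (1 - target_posterior)
    return target_odds / prior_odds

for prior in (0.5, 0.05, 1e-6, 1e-20):
    print(f"prior {prior:>6g}: Bayes factor needed ~ {required_bayes_factor(prior):.3g}")

An equivocal RCT with p just under 0.05 typically corresponds to a likelihood ratio somewhere in the single digits, nowhere near the factors of a million, let alone 10^20, that a vanishingly small prior would demand.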

That’s my plausibility bias. I’m biased in favor of science and reason and against magical thinking like homeopathy and reiki. You should be biased too.

The homeopaths attack

After I had stopped laughing in response to seeing homeopaths lecture scientists on what is and is not scientific, I delved into the paper. Rutten et al try (and fail; after all, they are homeopaths) to establish their scientific bona fides right in the second paragraph:

The authors of the present paper are doctors and scientists with an interest in homeopathy, committed to the scientific method in researching and practising it. We are qualified in medicine and science and started practising these in conventional contexts, gradually becoming convinced that homeopathy is an effective option, supplementary to rather than conflicting with conventional medicine. We concur with Hansen and Kappel that the disagreement concerning the interpretation of reviews of randomised controlled trials (RCTs) is rooted in prior beliefs and their influence on the perception of evidence. We do not concur, however, with their assumption that the homeopathy community’s positive view of the evidence is due to a rejection of the naturalistic scientific outlook. We ourselves, for example, do not reject any part of the naturalistic outlook.

My first temptation was to point out that the very fact that they are homeopaths means that they are either deluding themselves or lying when they claim that they do not reject any part of the naturalistic outlook. Homeopathy, after all, is rooted in the principles of sympathetic magic, not science. For instance, homeopathy’s law of similars (“like cures like”) is uncannily similar to Sir James George Frazer’s Law of Similarity, described in The Golden Bough (1922) as one of the implicit principles of magic. In addition, the concept that water can somehow retain the imprint of substances with which it’s been in contact, which really underlies the belief among homeopaths that remedies diluted to nonexistence (basically anything diluted beyond around 12C, or 14C to 15C to be safe) can have biological effects, is very much like the Law of Contagion. Read the following passage from The Golden Bough and tell me that it doesn’t sound almost exactly like homeopathy:

If we analyse the principles of thought on which magic is based, they will probably be found to resolve themselves into two: first, that like produces like, or that an effect resembles its cause; and, second, that things which have once been in contact with each other continue to act on each other at a distance after the physical contact has been severed. The former principle may be called the Law of Similarity, the latter the Law of Contact or Contagion. From the first of these principles, namely the Law of Similarity, the magician infers that he can produce any effect he desires merely by imitating it: from the second he infers that whatever he does to a material object will affect equally the person with whom the object was once in contact, whether it formed part of his body or not. Charms based on the Law of Similarity may be called Homoeopathic or Imitative Magic. Charms based on the Law of Contact or Contagion may be called Contagious Magic.

A later passage by Sir Frazer is an excellent criticism of the two pillars of homeopathy:

Homoeopathic magic is founded on the association of ideas by similarity: contagious magic is founded on the association of ideas by contiguity. Homoeopathic magic commits the mistake of assuming that things which resemble each other are the same: contagious magic commits the mistake of assuming that things which have once been in contact with each other are always in contact. But in practice the two branches are often combined; or, to be more exact, while homoeopathic or imitative magic may be practised by itself, contagious magic will generally be found to involve an application of the homoeopathic or imitative principle.

See what I mean when I say that the ideas behind homeopathy resemble sympathetic magic far more than they resemble science? From my perspective, all homeopaths, and I do mean all homeopaths, hold views that reject science, no matter how much they fool themselves into thinking that they are scientific and buy into the naturalistic world view. I could go on to demonstrate how much of homeopathy is rooted in prescientific vitalism, using Samuel Hahnemann’s own words, but I think you get the idea. Homeopathy is magic water, made “magic” using thought processes akin to those used by voodoo practitioners when they make voodoo dolls.

It is also rather interesting how Rutten et al are so willing to accept science when it comes to RCT evidence but reject the much larger and far more robust body of science that underlies the pre-trial assessment of prior probability that says that homeopathy can’t work. They willfully reject the concept that extraordinary claims require extraordinary evidence, and homeopathy is nothing if not a highly extraordinary set of claims. Instead, Rutten et al make an analogy to crossword puzzles. This analogy is actually rather apt, but not in the way our unhappy homeopaths think it is. Basically, here is the analogy as described by Rutten et al:

Sometimes new evidence overturns theory, but sometimes not; the context is crucial. This has been expressed in terms of a crossword analogy (Haack 1998): the correctness of an entry in a crossword depends upon how well it is supported by the clue, whether it fits with intersecting entries, how reasonable those other entries are, and how much of the crossword has been completed. In this analogy, for homeopathy, the primary entry is: “Does it work (other than by placebo effects)?” The secondary intersecting entries are concerned with “How does, or could, it work?”

Although Rutten et al will never admit it, this analogy is an excellent one for why the occasional “positive” clinical trial of homeopathy does not overthrow the existing scientific paradigm that concludes that homeopathy can’t work, that it is nothing but water, and that any apparently positive effects seen are due either to placebo, random chance, or bias and/or shortcomings in the RCTs. Such trials do not fit with “multiple intersecting entries” in physics, chemistry, and biology that are all consistent with the impossibility of homeopathy; i.e., they do not fit into the crossword puzzle. The only way they could be made to fit into the crossword puzzle would be if homeopathy were shown in a reproducible fashion to cure incurable diseases, such as metastatic pancreatic cancer, in which case homeopathy might go into the crossword puzzle and force the puzzle solver to start rethinking other answers to fit with homeopathy.

In other words, clinical evidence could make us question the rest of the “crossword puzzle,” but only if it’s clinical evidence so extraordinary in result, quality, and quantity that it starts to rival the existing evidence from the multiple disciplines that do not support homeopathy. No such evidence exists for homeopathy, and, in fact, the overall weight of the clinical evidence is consistent with homeopathy working no better than placebo. Indeed, Rutten et al wrongly relegate the question of how homeopathy could work to a secondary question, and here’s why: when, for a therapy to work, the very laws of physics would have to be, as I say so often, not just wrong but spectacularly wrong, the question of how it could work is not secondary. This is in marked contrast to drugs (which inevitably work by binding to a biological molecule or otherwise reacting with something), in which case not knowing the exact mechanism is not as concerning. Even a case like the discovery that H. pylori causes duodenal ulcers is not a refutation of this principle with respect to homeopathy. After all, as implausible as the hypothesis that a particular bacterial species was responsible for many peptic ulcers might have seemed, it did not require violating the laws of physics to imagine that a bacterial infection could somehow cause ulcers.

Rutten et al spend considerable verbiage listing the usual suspects for homeopathy, including old meta-analyses, various clinical trials, and, of course, the infamous basophil degranulation experiments by Jacques Benveniste. These have been fodder many times before on this blog, so I don’t really want to dwell on them, other than to note that Rutten et al reserve most of their vitriol for a meta-analysis and systematic review of the literature by Shang et al, published several years ago in The Lancet, which found that homeopathy effects are placebo effects. Basically, Rutten et al rehash Rutten’s criticisms of Shang’s analysis. These are criticisms I dealt with in detail at the time, and four years of aging haven’t made them any better. In fact, the apologia based on “clinical evidence” is nothing that we haven’t heard before and nothing worth rehashing here (other than a link to my previous deconstruction), because the point of Rutten et al is to attack what they call “plausibility bias.” All the trotting out of clinical evidence that allegedly supports homeopathy is in reality a massively flawed lead-in, a thin mint wafer to cleanse the palate, so to speak, before the main argument, which is that Shang’s meta-analysis and other clinical trials allegedly support homeopathy but are often cited as evidence against it.

First, Rutten et al distinguish between homeopathic dilutions in which there might still be some of the original remedy left (generally less than about 12C) and “ultramolecular” dilutions in which none remains. In reality, though, any homeopathic dilution higher than about 7C (a dilution of 10^-14) is already in the femtomolar range or lower, and there aren’t very many substances that have significant biological effects at such a low concentration. None of this stops Rutten et al from proclaiming:

There are obvious sources of pre-trial belief. These include well documented paradoxical low-dilution effects. The basic idea of homeopathy is the exploitation of the paradoxical secondary effects of low doses of drugs. Secondly, reverse or paradoxical effects of drugs and toxins in living organisms as a function of dose or time are very widely observed in pharmacology and toxicology. They are variously referred to as hormesis (the stimulatory or beneficial effects of small doses of toxins) hormligosis, Arndt-Schulz effects, rebound effects, dose-dependent reverse effects and paradoxical pharmacology (Calabrese and Blain 2005; Calabrese et al. 2006; Bond 2001; Teixeira 2007, 2011).

Repeat after me: Hormesis does not justify homeopathy. It’s an analogy that homeopaths love because hormesis is a hypothesis stating that some substances that are toxic at high doses might be benign or even beneficial at lower doses. (For an explanation, look back at the fun I had with Ann Coulter’s invocation of hormesis to try to convince you that radiation from the Fukushima nuclear reactor is actually good for you.) This is, of course, wishful thinking on the part of homeopaths, representing extreme over-extrapolation. Hormesis might apply to low doses, but much of homeopathy involves no dose at all; i.e., dilution far, far beyond the point at which it becomes highly unlikely that even a single molecule of the original substance remains. Rutten et al try to dodge this question by claiming that most homeopathic remedies are not “ultramolecular dilutions” (i.e., dilutions far beyond Avogadro’s number that leave nothing behind). Even if that’s true, many homeopathic dilutions are “ultramolecular,” and homeopathy does postulate that dilution and succussion increase the potency of homeopathic remedies. Have Rutten et al forgotten the Law of Infinitesimals?
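For concreteness, here is a minimal back-of-the-envelope sketch, my own illustration rather than anything from the paper, of what serial centesimal dilutions actually do, assuming purely for the sake of the example a 1 molar starting solution and a 1 liter final volume.

# Sketch: concentration and expected molecules left after n centesimal (C)
# dilutions, assuming (for illustration only) a 1 molar starting solution
# and a 1-liter final volume. Each C step is a 1:100 dilution.

AVOGADRO = 6.022e23  # molecules per mole

def after_c_dilutions(n, start_molar=1.0, volume_liters=1.0):
    molar = start_molar * 100.0 ** -n              # concentration in mol/L
    molecules = molar * volume_liters * AVOGADRO   # expected molecules remaining
    return molar, molecules

for n in (7, 12, 30):
    molar, molecules = after_c_dilutions(n)
    print(f"{n:>2}C: {molar:.0e} M, about {molecules:.1e} molecules expected per liter")

At 7C you are already in the femtomolar range mentioned above; somewhere around 12C the expected number of remaining molecules drops below one; by 30C it is effectively zero, which is why the 30C “remedy” is just water.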

They haven’t, though. After trying to argue that most homeopathic remedies are not “ultramolecular,” Rutten et al then cite a bunch of dubious in vitro studies claiming that ultramolecular dilutions can have biological effects. I’ve looked at many such studies (for instance, this study of homeopathic remedies on human breast cancer cell lines), and quite often what you find is shoddy methodology, effects of solvents and contaminants, and other potential explanations for the observed results that do not involve having to throw out huge swaths of physics and chemistry. Amusingly, Rutten et al even admit that such results have a serious problem:

A more recent meta-analysis evaluated 67 in vitro biological experiments in 75 research publications and found high-potency effects were reported in nearly 75 % of all replicated studies; however, no positive result was stable enough to be reproduced by all investigators (Witt et al. 2007).

Can you say “publication bias”? Sure, I knew you could.

Can you also say: Anecdotal evidence? Sure, I knew you could:

The other major source of our prior beliefs is practice experience. This may be regarded the lowest level of evidence, but it is under-rated by many (Vandenbroucke 2001). After adding homeopathy to conventional treatment, many unsuccessful cases improved (Marian et al. 2008). The repetitive character of such experiences gradually updated our belief, consistent with Bayesian theory (Rutten 2008).

In other words, Rutten et al are admitting that the source of their “positive plausibility bias” towards homeopathy is anecdotes. That is, after all, what “practice experience” is: anecdotes, confirmation bias, and the like. It’s the same reason that Dr. Jay Gordon, for instance, believes that vaccines cause autism when the evidence from large epidemiological studies does not support that belief. He sees what he thinks are cases of “vaccine injury” manifesting as autism and, because he believes that vaccines cause autism, attributes his patients’ autism to vaccines. Rutten et al also cite non-blinded, non-randomized “real world” (pragmatic) trials as contributing to their pre-test plausibility bias towards homeopathy.

Pre-trial belief: Science versus anecdote

We have argued that EBM has a shortcoming, and that shortcoming is that EBM does not adequately consider prior probability in assessing evidence. In EBM, clinical evidence is all, and evidence from RCTs (or, even better, meta-analyses or systematic reviews of RCTs) sits at the top of the heap. This is not unreasonable when RCTs are only performed for hypotheses that have been developed through a scientific process that takes preclinical observations and builds upon them, such that existing evidence deems them reasonably plausible. CAM in general and homeopathy in particular are not such a case. RCTs of homeopathy in essence measure noise, and mostly the positive noise at that. Some studies will appear to be positive, and publication bias will make sure that studies in which patients receiving homeopathy do worse are unlikely to be published, so that what we see in the literature are null studies and studies that appear positive due to random chance, either alone or combined with poor study design and/or bias. We and others have proposed taking prior probability into consideration, both for deciding which hypotheses to test in clinical trials and for interpreting the results of existing clinical trials.
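To illustrate the “measuring noise” point, here is a minimal simulation sketch, my own toy example with arbitrarily chosen trial counts and sample sizes, of what a pile of RCTs of a completely inert remedy looks like before publication bias even gets to work on it.

# Sketch: simulate many RCTs of an inert "remedy" (both arms drawn from the
# same distribution, so any apparent effect is pure noise). The number of
# trials and the sample size per arm are arbitrary illustrative choices.
import random
import statistics

random.seed(0)

def fake_trial(n_per_arm=50):
    """Return an approximate z-score comparing two identical placebo arms."""
    treatment = [random.gauss(0, 1) for _ in range(n_per_arm)]
    control = [random.gauss(0, 1) for _ in range(n_per_arm)]
    diff = statistics.mean(treatment) - statistics.mean(control)
    se = (statistics.variance(treatment) / n_per_arm
          + statistics.variance(control) / n_per_arm) ** 0.5
    return diff / se

z_scores = [fake_trial() for _ in range(1000)]
apparent_wins = sum(1 for z in z_scores if z > 1.96)     # "significant" in favor of the remedy
apparent_losses = sum(1 for z in z_scores if z < -1.96)  # remedy "significantly" worse

print(f"Trials where the inert remedy looks significantly better: {apparent_wins} / 1000")
print(f"Trials where it looks significantly worse: {apparent_losses} / 1000")

Even with no effect at all, a few percent of trials will look “significantly” positive; filter out the trials where the remedy looks worse, as publication bias tends to do, and the published record skews further toward apparent positives.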

The fact is that we have always taken plausibility into account in deciding which clinical trials to perform. We have to, because we don’t have unlimited resources, human subjects, or researchers to test in an RCT every hypothesis that comes along. We just don’t. In fact, our resources are currently more constrained than they have been in at least 20 years, with NIH pay lines hovering around the 7th percentile in some institutes. Moreover, the very foundations of medical ethics as laid down in the Declaration of Helsinki require that human subjects experimentation rest on a strong background of basic science. The question is: how do we want to prioritize which trials get done? On what do we base our estimates of prior plausibility, which color our decisions regarding which clinical trials to carry out and how to interpret data from existing clinical trials? Homeopaths like Rutten and colleagues would propose that we base our estimate of prior plausibility on anecdote, magical thinking, and dubious in vitro and clinical trial evidence, ignoring the massive, well-established prior implausibility of homeopathy that a rational scientific assessment arrives at. Scientists base their assessment of prior plausibility on as objective an interpretation of existing scientific data as possible.

I know which one I would choose.

I also have a message for Rutten and his merry band of homeopaths. You accuse us of “plausibility bias” as though that were a bad thing. It’s not. As Mark Crislip pointed out, what plausibility bias should really be called is reality bias. We are biased towards reality. Homeopaths are biased towards what they think is reality but is in actuality magical thinking.

Again, I know which one I choose.

Finally, we don’t have unlimited resources to test every hypothesis that anyone can think up. There isn’t the money. There aren’t enough scientists. Even leaving aside the serious ethical problems that come with testing highly improbable remedies on human subjects, there aren’t enough human subjects to test the promising drugs that have a reasonable probability of working (i.e., of being efficacious and safe) based on preclinical testing. Resource constraints have always existed, and scientists have never just tested whatever the heck they felt like testing. Plausibility has always been a major part of deciding which experiments to do, which promising compounds to take to clinical trials, which treatments to try. Think of it this way: We could estimate plausibility as carefully as we can based on scientific testing, evidence published in the existing scientific literature, and data from small pilot clinical trials. Or, taking the approach of Rutten et al, we can estimate plausibility from anecdotal experience, questionable experiments and clinical trials, and considerations that completely ignore the laws of physics and chemistry.

Again, I know which method I choose.



Posted by David Gorski
