Articles

Of SBM and EBM Redux. Part III: Parapsychology is the Role Model for “CAM” Research

This is the third post in this series*; please see Part II for a review. Part II offered several arguments against the assertion that it is a good idea to perform efficacy trials of medical claims that have been refuted by basic science or by other, pre-trial evidence. This post will add to those arguments, continuing to identify the inadequacies of the tools of Evidence-Based Medicine (EBM) as applied to such claims.

Prof. Simon Replies

Prior to the posting of Part II, statistician Steve Simon, whose views had been the impetus for this series, posted another article on his blog, responding to Part I of this series. He agreed with some of what both Dr. Gorski and I had written:

The blog post by Dr. Atwood points out a critical distinction between “biologically implausible” and “no known mechanism of action” and I must concede this point. There are certain therapies in CAM that take the claim of biological plausibility to an extreme. It’s not as if those therapies are just implausible. It is that those therapies must posit a mechanism that “would necessarily violate scientific principles that rest on far more solid ground than any number of equivocal, bias-and-error-prone clinical trials could hope to overturn.” Examples of such therapies are homeopathy, energy medicine, chiropractic subluxations, craniosacral rhythms, and coffee enemas.

The Science Based Medicine site would argue that randomized trials for these therapies are never justified. And it bothers Dr. Atwood when a systematic review from the Cochrane Collaboration states that no conclusions can be drawn about homeopathy as a treatment for asthma because of a lack of evidence from well conducted clinical trials. There’s plenty of evidence from basic physics and chemistry that can allow you to draw strong conclusions about whether homeopathy is an effective treatment for asthma. So the Cochrane Collaboration is ignoring this evidence, and worse still, is implicitly (and sometimes explicitly) calling for more research in this area.

On the other hand:

There are a host of issues worth discussing here, but let me limit myself for now to one very basic issue. Is any research justified for a therapy like homeopathy when basic physics and chemistry will provide more than enough evidence by itself to suggest that such research is futile(?) Worse still, the randomized trial is subject to numerous biases that can lead to erroneous conclusions.

I disagree for a variety of reasons.

Prof. Simon offered five reasons, quoted here in part:

  • It’s good for business. I don’t want to sound shallow, but there’s money to be made by statisticians when research is done, and if I make a few bucks and in the process help to make the research more rigorous, that’s a win-win situation. I’ve not done much work with CAM, but I have helped on several projects at Cleveland Chiropractic College…
  • Everyone deserves their day in court. I believe that if someone is sincere in testing whether a therapy is effective or not, then they deserve my help…
  • CAM therapies represent an enormous expenditure of limited health care dollars, and if research can help limit the fraction of CAM expenditures that are inappropriate then that represents a good use of scarce research dollars…
  • We have to trust that the system can work. Randomized trials are indeed subject to many biases, and it is worth noting them. But are the biases so serious that they will lead to incorrect conclusions about CAM? [etc.]
  • Scientific testing is the norm for other claims that lack scientific plausibility. I am a regular (reader) of Skeptical Inquirer and Skeptic Magazine, and when someone makes a claim about ghosts, telekinesis, or reincarnation, they’ll point out all the existing knowledge that makes such claims unbelievable. But then they’ll still go to the haunted house or set up a spoon bending experiment or reinterview people who remember past lives. These claims have even less credibility than much of CAM research, but they are still being tested. So why not test CAM the same way?

I take the “I don’t want to sound shallow” remark at face value, although I’d remind Prof. Simon that without the ‘subluxation,’ whose fatal implausibility he appears to have conceded, chiropractic is left with very little. Reasons 1, 2, and 4 amount to the same assertions: that efficacy trials (“testing whether a therapy is effective or not”) of futile methods have something worthwhile to add to the already compelling evidence against those methods; that they will be performed safely and ethically; that they will dependably show that the methods are ineffective beyond ‘placebo effects’ (we’ve already agreed on this, no?); and that EBM referees, such as those at Cochrane, will subsequently judge such methods futile.

I’ve previously offered several counterexamples to those assertions, including the Cochrane homeopathy reviews quoted in Part I, and the Cochrane “Touch Therapies” review linked from Part II. I’ve also offered examples of methods that are not quite as implausible but are dangerous and have been sufficiently refuted by other means, including biology and even clinical tests, but that EBM experts have deemed worthy of further testing: Laetrile (discussed in Part I), Na2EDTA “chelation therapy” for atherosclerotic cardiovascular disease, and the “Gonzalez Regimen” for cancer of the pancreas (each discussed in Part II).

Can “Research Help Limit CAM Expenditures”?

I’ll discuss such issues more below, including a response to Prof. Simon’s point 5, but first let me briefly address his point 3. The assertion that there is societal value to studying implausible methods has been the usual justification for such trials, as I discussed at some length in Part II. It began as an untested presumption in itself, and it does not excuse endangering experimental subjects or siphoning scarce public funds away from promising research. Moreover it would be, at best, redundant: if other facts are sufficient to refute a claim, there is no point in subjecting the claim to a trial. If people don’t understand that point, then it is the job of experts to explain it to them, not to devalue science by granting every preposterous notion a “day in court” that it has already had, or by issuing preposterous opinions such as “it is not possible to comment on the use of homeopathy in treating dementia.”

Regarding the presumption that “research can help limit the fraction of CAM expenditures that are inappropriate,” the evidence, such as it is, suggests otherwise. In the 1980s, Petr Skrabanek could accurately report that “numerous controlled trials have shown that acupuncture is nothing more than a placebo.” Yet even as additional, abundant, increasingly rigorous trials have relentlessly shown the same thing, acupuncture has steadily increased in popularity. The same is true for homeopathy.

Referring to slightly more plausible methods, Josephine Briggs, the Director of the NCCAM, reported that sales of echinacea, glucosamine-chondroitin sulfate, and ginkgo biloba had declined after disconfirming trials funded by her Center, but according to Steve Novella the decline was only temporary for echinacea. Perhaps some industrious reader can find data for the other two preparations—I don’t feel like shelling out $200+.

Such Trials Don’t Work

The final reason that efficacy trials of highly implausible claims are a bad idea is that they don’t work very well: they tend to yield, in the aggregate, equivocal rather than merely disconfirming results. Yes, the biases are so serious that they have led to incorrect conclusions about CAM, at least for a substantial period. This is something that most physicians and even many statisticians seem unaware of, although it was utterly predictable. I’ve discussed this at length, beginning here:

EBM and “CAM”

To many in this era of EBM it seems self-evident that all unproven methods, including homeopathy, should be subjected to such scrutiny. After all, the anecdotal impressions that are typically the bases for such claims are laden with the very biases that blinded RCTs were devised to overcome. This opinion, however, is naive. Some claims are so implausible that clinical trials tend to confuse, rather than clarify the issue. Human trials are messy. It is impossible to make them rigorous in ways that are comparable to laboratory experiments. Compared to laboratory investigations, clinical trials are necessarily less powered and more prone to numerous other sources of error: biases, whether conscious or not, causing or resulting from non-comparable experimental and control groups, cuing of subjects, post-hoc analyses, multiple testing artifacts, unrecognized confounding of data due to subjects’ own motivations, non-publication of results, inappropriate statistical analyses, conclusions that don’t follow from the data, inappropriate pooling of non-significant data from several, small studies to produce an aggregate that appears statistically significant, fraud, and more.

Most of those problems are not apparent in primary reports. Several have already been discussed or referenced elsewhere on this site: here, here, here and here, for example. Academics active in the EBM movement are aware of most of them and want to correct them—as a quick scan of the contents of almost any major medical journal will reveal.

It is clear that such biases are more likely to skew the results of studies that are funded or performed by advocates. This has been found in studies of trials funded by drug companies, for example, as referenced here. In the case of “CAM,” the charge is supported by the preponderance of favorable reports in advocacy journals (here, here, and here) and by examples of overwhelmingly favorable reports emanating from regions with strong political motivations.

For those reasons we can predict that RCTs of ineffective claims championed by impassioned advocates will demonstrate several characteristics. Small studies, those performed by advocates or reported in advocacy journals, and those judged to be of poor quality will tend to be “positive.” The larger the study and the better the design, the more likely it is to be “negative.” Over time, early “positive” trials and reviews will give way to negative ones, at least among those judged to be of high quality and reported in reputable journals. In the aggregate, trials of ineffective claims championed by impassioned advocates will appear to yield equivocal rather than merely “negative” outcomes. The inevitable, continual citations of dubious reports will lead some to judge that the aggregate data are “weakly positive” or that the treatment is “better than placebo.” An example is the claim that stimulation of the “pericardium 6” acupuncture point is effective in the prevention and treatment of post-operative nausea and vomiting—a purportedly proven “CAM” method.
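The dynamic predicted above—small, bias-prone trials of a null treatment, selectively published, aggregating to a “weakly positive” result—can be sketched in a toy simulation. The trial sizes, significance threshold, and publication rates below are illustrative assumptions, not estimates from any real “CAM” literature:

```python
import random
import statistics

random.seed(1)

def trial(n):
    """Simulate one two-arm trial of a treatment with zero true effect.

    Returns the estimated effect and a crude z-statistic (outcomes are
    modeled as unit-variance Gaussian noise, so the standard error of
    the difference in means is sqrt(2/n))."""
    treat = [random.gauss(0.0, 1.0) for _ in range(n)]
    ctrl = [random.gauss(0.0, 1.0) for _ in range(n)]
    est = statistics.mean(treat) - statistics.mean(ctrl)
    return est, est / (2.0 / n) ** 0.5

# 200 small trials (n = 20 per arm) of a useless treatment
results = [trial(20) for _ in range(200)]

# Publication bias: every "positive" trial (one-sided z > 1.64) is
# published, but only 1 in 10 unimpressive results make it into print.
published = [est for est, z in results if z > 1.64 or random.random() < 0.10]

pooled_all = statistics.mean(est for est, _ in results)
pooled_pub = statistics.mean(published)
print(f"all {len(results)} trials, pooled effect: {pooled_all:+.3f}")
print(f"{len(published)} published trials, pooled effect: {pooled_pub:+.3f}")
```

Nothing here is a real trial; the point is only that selecting on statistical significance manufactures a positive pooled estimate out of pure noise, which is then cited as the treatment being “better than placebo.”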

Homeopathic “Remedies” are Placebos

After 200 years and numerous studies, including many randomized, controlled trials (RCTs) and several meta-analyses and systematic reviews, homeopathy has performed exactly as described above. The best that proponents can offer is equivocal evidence of a weak effect compared to placebo. That is exactly what is expected if homeopathy is placebo.

Nevertheless, EBM advocates on the whole don’t see it that way. Those who want to see homeopathy vindicated, such as homeopath Wayne Jonas, the former director of the NIH Office of Alternative Medicine, point to the weakly positive evidence. Others, even those who find homeopathy implausible, are so convinced that EBM can answer the question (“Either homeopathy works or controlled trials don’t!”) that they call for more trials, with no end in sight. Such judgments expose a major weakness in EBM that is not apparent when the exercise is applied to plausible claims.

“CAM” Research and Parapsychology

That passage was a prelude to introducing the EBM “Levels of Evidence” scheme and the Cochrane abstracts later discussed in Part I of this series. It applies equally well to acupuncture, “energy medicine,” and other highly implausible claims that have been subjected to efficacy trials.

Here we come to Prof. Simon’s point 5, that “scientific testing is the norm for other claims that lack scientific plausibility,” such as ghosts and telekinesis. It is true that “psychic detectives” such as Randi and Joe Nickell and Ray Hyman and Richard Wiseman have tested such claims and continue to do so, but Prof. Simon ought to understand the differences between such tests and what’s at issue here. The former tend to be of the sort that I favored in Part II: simple (bias-resistant), inexpensive, performed by skeptics, with the onus of proof placed on the claimants. Such testing, moreover, is fun, which I believe is the main reason that Randi and others are drawn to it.

Typical “CAM” efficacy trials are altogether different, as I began to explain in Part II: they are expensive, messy, and bias-prone, and those who perform them are often enthusiasts or otherwise credulous. Such trials are akin to tests of telekinesis performed not by Randi or Wiseman, but by hopeful or true-believing parapsychologists—and that is exactly what is reflected in EBM-style reviews of their outcomes. Ironically, some “CAM” efficacy trials really are tests of telekinesis performed by true-believing parapsychologists, and much of “CAM” is nothing more than recycled psi claims now pitched to a naïve audience, as discussed here under “The Psi Myth.”

Thus homeopath David Reilly was correct, as I wrote here, when he asserted that “either homeopathy works or controlled trials don’t”:

…but not in the way that he supposed. If there is anything that the history of parapsychology can teach the biomedical world, it is the point just made: that human RCTs, as good as they are at minimizing bias or chance deviations from population parameters, cannot ever be expected to provide, by themselves, objective measures of truth. There is still ample room for erroneous conclusions. Without using broader knowledge (science) to guide our thinking, we will plunge headlong into a thicket of errors—exactly as happened in parapsychology for decades and is now being repeated by its offspring, “CAM” research.

Yes, “CAM” owes much to parapsychology, none of it good. Prof. Simon, a statistician, might consider that parapsychology has flirted with the barely positive side of the null effect for decades. Its apparent successes, modest and irreproducible though they’ve been, have rendered it an immortal field of fruitless inquiry: a pathological science.

John Ioannidis has this to say about such a field:

History of science teaches us that scientific endeavor has often in the past wasted effort in fields with absolutely no yield of true scientific information, at least based on our current understanding. In such a “null field,” one would ideally expect all observed effect sizes to vary by chance around the null in the absence of bias. The extent that observed findings deviate from what is expected by chance alone would be simply a pure measure of the prevailing bias.
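Ioannidis’s point can be illustrated with a toy simulation of a “null field”: give every trial of a zero-effect treatment the same systematic bias, and the aggregate faithfully recovers the bias rather than any treatment effect. The bias magnitude and trial sizes below are arbitrary assumptions:

```python
import random
import statistics

random.seed(2)

BIAS = 0.15  # arbitrary: the field's prevailing bias, in effect-size units

def biased_trial(n):
    """One trial of a zero-effect treatment, with a systematic bias
    (unblinding, selective outcome reporting, ...) added to the
    treatment arm."""
    treat = [random.gauss(BIAS, 1.0) for _ in range(n)]
    ctrl = [random.gauss(0.0, 1.0) for _ in range(n)]
    return statistics.mean(treat) - statistics.mean(ctrl)

effects = [biased_trial(50) for _ in range(500)]
mean_effect = statistics.mean(effects)
print(f"mean observed effect: {mean_effect:+.3f} (true effect 0.0, bias {BIAS})")
```

The “effect” the field converges on is exactly its prevailing bias—which is Ioannidis’s point: in a null field, the aggregate deviation from zero measures bias, not the treatment.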

EBM, Eventually, Sort of Works

For fairness’ sake, let me mention that two veteran “CAM” researchers, Edzard Ernst and R. Barker Bausell (a statistician), eventually decided that most of what they had studied was bogus, and they seem to have arrived at this realization after examining the results of efficacy trials. Thus it is true that EBM, as it is currently practiced, can lead some researchers to rational conclusions.

The problem is far from solved, however: in the cases of the two researchers just mentioned, it took years for them to arrive at a truth that was always staring them in the face (as I discussed in Part I and elsewhere), during which time the waste, the unethical treatment of human subjects, and the false promise that is “CAM” research marched on. Because of the continued, inevitable, equivocal results of such research, moreover, the views of Ernst and Bausell are not shared by other “CAM” research enthusiasts, and probably won’t be anytime soon.

EBM Ignores External Evidence, but not Entirely: a Prelude to Part IV

To reiterate, the major problem with EBM, as it has been applied to implausible medical claims, is that it fails to give adequate weight to evidence from sources other than RCTs. Yet RCTs involving numerous experimental variables and outcomes, as have been typical for “CAM” efficacy trials, are prone to numerous errors and biases, whereas other sources of evidence can be definitive—as has been the case for homeopathy, Laetrile, Therapeutic Touch, chelation for atherosclerosis, and Craniosacral Therapy, for example.

For the first time in several years, motivated by this series, I’ve looked at a few complete Cochrane “CAM” Reviews. In the final posting I’ll discuss more external evidence that is missing from those reviews, but I’ll also report a couple of pleasant surprises. It turns out that Steve Simon was not entirely wrong when he asserted that “people within EBM (are) working both formally and informally to replace the rigid hierarchy with something that places each research study in context.” They have a long way to go, but there is at least a suggestion of change in that direction.

I’ll also address, I hope briefly, this statement by Prof. Simon:

Also how can we invoke scientific plausibility in a world where intelligent people differ strongly on what is plausible and what is not? Finally, is there a legitimate Bayesian way to incorporate information about scientific plausibility into a Cochrane Collaboration systematic overview(?)

Laydah.

*The Prior Probability, Bayesian vs. Frequentist Inference, and EBM Series:

1. Homeopathy and Evidence-Based Medicine: Back to the Future Part V

2. Prior Probability: The Dirty Little Secret of “Evidence-Based Alternative Medicine”

3. Prior Probability: the Dirty Little Secret of “Evidence-Based Alternative Medicine”—Continued

4. Prior Probability: the Dirty Little Secret of “Evidence-Based Alternative Medicine”—Continued Again

5. Yes, Jacqueline: EBM ought to be Synonymous with SBM

6. The 2nd Yale Research Symposium on Complementary and Integrative Medicine. Part II

7. H. Pylori, Plausibility, and Greek Tragedy: the Quirky Case of Dr. John Lykoudis

8. Evidence-Based Medicine, Human Studies Ethics, and the ‘Gonzalez Regimen’: a Disappointing Editorial in the Journal of Clinical Oncology Part 1

9. Evidence-Based Medicine, Human Studies Ethics, and the ‘Gonzalez Regimen’: a Disappointing Editorial in the Journal of Clinical Oncology Part 2

10. Of SBM and EBM Redux. Part I: Does EBM Undervalue Basic Science and Overvalue RCTs?

11. Of SBM and EBM Redux. Part II: Is it a Good Idea to test Highly Implausible Health Claims?

12. Of SBM and EBM Redux. Part III: Parapsychology is the Role Model for “CAM” Research

13. Of SBM and EBM Redux. Part IV: More Cochrane and a little Bayes

14. Of SBM and EBM Redux. Part IV, Continued: More Cochrane and a little Bayes

15. Cochrane is Starting to ‘Get’ SBM!

16. What is Science? 

 

Posted in: Acupuncture, Clinical Trials, Energy Medicine, Faith Healing & Spirituality, Herbs & Supplements, Homeopathy, Medical Academia, Medical Ethics, Science and Medicine


30 thoughts on “Of SBM and EBM Redux. Part III: Parapsychology is the Role Model for “CAM” Research”

  1. daijiyobu says:

    “Intelligent people differ strongly on what is plausible and what is not” strikes me as a form of epistemic relativism.

    But do ‘informed people’ aka experts differ strongly aka broadly?

    If that statement were altered to “expert opinion differs strongly / broadly on what is plausible and what is not”, I think it bears out that attempted across-the-board relativism.

    “Strongly” seems a little too much for an area of expertise.

    -r.c.

  2. Jan Willem Nienhuys says:

    Gorski refers to Skrabanek. The book Follies, and Fallacies In Medicine can be downloaded here. The reference is on page 120, but refers to the article ‘Acupuncture; past, present, and future’. In: Stalker and Glymour (ed.) Examining holistic medicine, Prometheus Books, New York, 1985/1989. It also has been reprinted in Skrabanek’s False Premises, False Promises (2000, Tarragon Press), available as download from the same site, look for page 49-68.

    As regards homeopathy, there is ample opportunity to do simple ‘skeptical’ tests. Although the truly science-defying part of homeopathy is the high dilution stuff, the root of homeopathy still is the similia principle. The homeopathic remedies are administered on the basis of similarity between the so-called drug picture and the subjective complaints of the patients. But in many instances these drug pictures have been obtained with highly diluted samples (quite sensible when testing mercury chloride and arsenic trioxide).

    Even table salt (C30) is in homeopathy a powerful drug. So if homeopaths want to produce scientific proof of their claims, they should first do properly randomized and blinded drug tests. Such tests have been done. One of the first RCTs ever is the famous Nuremberg trial of C30 table salt. In the period 1936-1939 the German health officials forced the homeopaths to perform such trials (as the preliminary phase to testing whether homeopathy could actually cure sick people). These trials ended in a disaster, but the results were more or less suppressed. A ten-foot stack of documentation of these experiments vanished without a trace, somewhere in the ’60s.

    So there is something to say for testing homeopathy, namely testing their drug pictures in a scientific way, starting with classical substances like Natrum Muriaticum, Sulphur, Silicea or ‘North Pole Magnetism’. It is even not completely silly that a small amount of government money is spent on it, for example a fee for statisticians and public notaries to oversee the blinding and randomisation procedures.

    Meanwhile journal editors should refuse reports on RCTs for homeopathy, unless the drug pictures of the remedies used have been tested scientifically. I predict that the homeopaths will refuse to test their drug pictures.

  3. Jan Willem Nienhuys says:

    Oops! I wrote ‘Gorski’, should be Atwood.

  4. daedalus2u says:

    Simon ignores the most important consideration, the ethics of doing human clinical trials. You can’t do a human trial unless the human subjects are treated ethically. That is can’t as in full stop.

    From the Declaration of Helsinki

    http://www.wma.net/en/30publications/10policies/b3/index.html

    “6. In medical research involving human subjects, the well-being of the individual research subject must take precedence over all other interests.”

    “7. The primary purpose of medical research involving human subjects is to understand the causes, development and effects of diseases and improve preventive, diagnostic and therapeutic interventions (methods, procedures and treatments). Even the best current interventions must be evaluated continually through research for their safety, effectiveness, efficiency, accessibility and quality.”

    “12. Medical research involving human subjects must conform to generally accepted scientific principles, be based on a thorough knowledge of the scientific literature, other relevant sources of information, and adequate laboratory and, as appropriate, animal experimentation. The welfare of animals used for research must be respected.”

    “16. Medical research involving human subjects must be conducted only by individuals with the appropriate scientific training and qualifications. Research on patients or healthy volunteers requires the supervision of a competent and appropriately qualified physician or other health care professional. The responsibility for the protection of research subjects must always rest with the physician or other health care professional and never the research subjects, even though they have given consent.”

    “21. Medical research involving human subjects may only be conducted if the importance of the objective outweighs the inherent risks and burdens to the research subjects.”

    “24. In medical research involving competent human subjects, each potential subject must be adequately informed of the aims, methods, sources of funding, any possible conflicts of interest, institutional affiliations of the researcher, the anticipated benefits and potential risks of the study and the discomfort it may entail, and any other relevant aspects of the study. The potential subject must be informed of the right to refuse to participate in the study or to withdraw consent to participate at any time without reprisal. Special attention should be given to the specific information needs of individual potential subjects as well as to the methods used to deliver the information. After ensuring that the potential subject has understood the information, the physician or another appropriately qualified individual must then seek the potential subject’s freely-given informed consent, preferably in writing. If the consent cannot be expressed in writing, the non-written consent must be formally documented and witnessed.”

    “27. For a potential research subject who is incompetent, the physician must seek informed consent from the legally authorized representative. These individuals must not be included in a research study that has no likelihood of benefit for them unless it is intended to promote the health of the population represented by the potential subject, the research cannot instead be performed with competent persons, and the research entails only minimal risk and minimal burden.”

    “30. Authors, editors and publishers all have ethical obligations with regard to the publication of the results of research. Authors have a duty to make publicly available the results of their research on human subjects and are accountable for the completeness and accuracy of their reports. They should adhere to accepted guidelines for ethical reporting. Negative and inconclusive as well as positive results should be published or otherwise made publicly available. Sources of funding, institutional affiliations and conflicts of interest should be declared in the publication. Reports of research not in accordance with the principles of this Declaration should not be accepted for publication.”

    “31. The physician may combine medical research with medical care only to the extent that the research is justified by its potential preventive, diagnostic or therapeutic value and if the physician has good reason to believe that participation in the research study will not adversely affect the health of the patients who serve as research subjects.”

    I would argue that these principles of the Declaration of Helsinki completely preclude clinical trials using homeopathy, chiropractic, energy healing, acupuncture, blood letting, cupping, coffee enemas, and a whole lot more CAM without sufficient animal and or basic research that shows reasonable scientific plausibility based on chemistry, biology, physiology and other scientific principles. Scientific plausibility is about data and science, not about “feeling it will work”.

    What constitutes “informed consent” is another issue. In the descriptions of the Gonzalez trial, it seems pretty clear that some of the patients were lied to about the plausibility of the Gonzalez leg of the trial: they were told that there was clinical equipoise between the two legs. “Informed consent” does not mean “trick the patient into signing a piece of paper by lying to them”. You have to inform the patient and educate the patient until the patient understands the scientific rationale behind the trial and what the trial is trying to measure and accomplish. If there isn’t a scientific rationale behind the trial, that is going to be an impossible burden. It should be an impossible burden. Trials without a scientific rationale should not be done.

    Based on #30, I would further argue that the Cochrane Collaboration should not consider published research that did not follow the guidelines of the Declaration of Helsinki. That means there should be no Cochrane publications on homeopathy, chiropractic, energy healing, acupuncture because without a scientific basis, ethical trials on those treatment modalities can’t be done. They should explicitly state that until there is a scientific basis for prior plausibility that justifies doing clinical trials, that the Cochrane Collaboration will not consider clinical trials on implausible treatments such as homeopathy, acupuncture, and so on.

    There isn’t enough data for the Cochrane Collaboration to evaluate the “therapeutic effectiveness” of something like homeopathy because there are not enough trials with a large enough n which show a therapeutic effect. There is enough data for the Cochrane Collaboration to weigh in on the adherence to the Declaration of Helsinki of any given trial. They should have a category where trials are rejected because they don’t meet minimal ethical standards. Rather than say “we need more trials”, they should say “there needs to be more science demonstrating plausibility before ethical trials to investigate this treatment modality can be done”.

  5. S.C. former shruggie says:

    As a biology undergrad, I have to say this ongoing debate over EBM versus SBM is an education unto itself. I wish this had been part of my otherwise good classes. Thank you, Dr. Atwood.

    I also love the discussions of John Ioannidis’s work. Truly, this has helped my understanding of statistics (and its possible situational weaknesses, and human cognitive bias, and experimental design) immensely.

    Thank you also to Dr. Simon, without whom there could be no long, detailed, protracted argument to learn from.

  6. windriven says:

    Simon Says: “Everyone deserves their day in court. ”

    Would Dr. Simon wish to participate in an RCT on treating acute appendicitis with lime gummy bears? It is my conjecture that the unique blend of citric acid, guar gum and various sugars in lime gummy bears will cure appendicitis without all that messy surgical stuff. What? NO? I don’t deserve my day in court? Why? Because the idea is absurd? Says who?

    So EBM will happily discount the windriven theory of appendicitis therapy on the basis that it is dumber than a bag of sand but will argue that homeopathy deserves further research?

    Hmmm. Case dismissed.

  7. wlondon says:

    Thanks Dr. Atwood for continuing to build the case for science-based medicine and for showing where many evidence-based medicine promoters go astray!

    Not everyone deserves a day in every court. When you lose a case, you may appeal, but the appeals court may rightly decide not to hear your case. Preposterous methods such as homeopathy have already had far too many days in the “court” of scientific investigation; they deserve no further “hearings.” It is simply unethical to conduct clinical trials on homeopathic treatments.

    I have one quibble. You wrote: “Compared to laboratory investigations, clinical trials are necessarily less powered…” Less powered means less likely to reject the null hypothesis. That’s a reason that a clinical trial might NOT produce evidence that a treatment has at least modest benefits. Lack of power is not a reason to expect a clinical trial to make an ineffective treatment look good.

  8. Kim,

    Excellent article. By coincidence, I also wrote today (before seeing your post) about the intersection of psi research and the EBM vs SBM debate: http://theness.com/neurologicablog/?p=2701

    Critics of psi research within psychology have completely nailed the problems with EBM (even if they did not know it).

  9. @Jan Willem Nienhuys,

    Interesting stuff about the Nazis and homeopathy. Let me recommend this old post: Naturopathy and Liberal Politics: Strange Bedfellows

    Various simple tests of homeopathy have been done: they are disconfirming. Similia similibus curantur has been definitively disproved by a number of criteria, including that it was never demonstrated by Hahnemann in the first place and that the subsequent history of medicine has repeatedly refuted it. For a discussion of these issues, look here:

    Homeopathy and Evidence-Based Medicine: Back to the Future Part IV

    @deadalus:

    Right on.

    @wlondon:

    You are, of course, correct about “powered.” I should have written “smaller,” which is what I was thinking, the point being that chance deviations from whatever the truth may be will seem exaggerated without an opportunity for ‘regression to the mean’.

  10. @Steve N:

    Thanks. I guess great minds think alike. ;-)

    I’ll read your post ASAP.

  11. rork says:

    On whether research can help limit CAM expenditures, I was not completely convinced by Atwood’s anecdotes, since it’s time-series data without controls. That is, without that research, the situation could be even worse. The Ginkgo example might be one I’d point to – at least I hope the effect of the negative study was large.

    At the end is the teaser about a “legitimate Bayesian way” that I’ve been waiting on forever. Have these SBM people published anything on that yet? I very much liked the article, but feel compelled to worry about what devils are in the details, and we never really get there. Group decision theory was not well developed during my formative years, but that was long ago. The only solutions I can see involve selecting experts, and that will require inquisitors to select them. I hope for something more smashing.

  12. S.C. former shruggie says:

    @Daedalus2u

    Exactly.

    How do you get informed consent for treatments that often claim to cure literally everything and have no side effects of any kind ever? You can’t.

    How do magic panaceas pass the IRB? Do they soft-sell the efficacy claims or the side effects? Common claims that (insert magic) cures all and is risk-free ought to look suspicious.

  13. rork says:

    My snarky second paragraph should have admitted that we could instead try for a goal of just mapping priors to posteriors using data, and let reader supply the prior. In easy problems that can be done rather formally.

  14. JMB says:

    I should probably wait until part IV for this, but a thought came to mind after reading the article.

    Prior probabilities for an experiment do not have to rely on either previous conduct of the experiment, or on expert opinion (whether qualified by testing or credentials), but also on experimental evidence on individual steps in the chain of events that comprise the conditional probability. Therefore, in the case of homeopathy, the probability that any preparation administered to a subject will have an active ingredient can be used as an objectively obtained prior probability.

    One of the advantages of the Bayesian viewpoint is that it is easy to conceptualize the chain of events leading to an observation, and view each step in the chain as resulting in a posterior probability that becomes the prior probability for the next step in the chain. The breakdown of a clinical observation into a chain of events is how we connect basic science to clinical science. While the actual chain of events is enormous, all that we have to do is identify the weak link. If the weak link has a conditional probability of 1 in one trillion, and the maximum possible conditional probability approaches 1 (by definition), then an objective calculation of the probability of the weakest link gives us the upper bound of the a priori probability.
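    The “weakest link” point above can be checked with a toy calculation. This is a minimal sketch; the individual link probabilities below are invented purely for illustration:

```python
import math

# The probability of an entire chain of events is the product of its
# conditional probabilities, so it can never exceed the probability of
# its least likely step. Link values here are made up for illustration.
links = [0.9, 0.99, 1e-12, 0.8]   # one step is overwhelmingly improbable

joint = math.prod(links)           # probability of the whole chain

assert joint <= min(links)         # the weak link is an upper bound
print(f"{joint:.2e}")              # dominated by the 1e-12 weak link
```

    However long the chain, identifying one step with a vanishingly small conditional probability is enough to bound the prior from above.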

  15. JMB says:

    Sorry about sloppy editing.
    Change,
    “Prior probabilities for an experiment do not have to rely on either previous conduct of the experiment, or on expert opinion (whether qualified by testing or credentials), but also on experimental evidence on individual steps in the chain of events that comprise the conditional probability.”

    to,
    “Prior probabilities for an experiment may rely on either previous conduct of the experiment, or on expert opinion (whether qualified by testing or credentials), but also on experimental evidence on individual steps in the chain of events that comprise the conditional probability.”

    I guess my right and left brain don’t cooperate well.

  16. @rork:

    Here are some old posts discussing Bayesian inference:

    1. Prior Probability: The Dirty Little Secret of “Evidence-Based Alternative Medicine”
    http://www.sciencebasedmedicine.org/?p=48

    2. Prior Probability: the Dirty Little Secret of “Evidence-Based Alternative Medicine”—Continued
    http://www.sciencebasedmedicine.org/?p=49

    3. Prior Probability: the Dirty Little Secret of “Evidence-Based Alternative Medicine”—Continued Again
    http://www.sciencebasedmedicine.org/?p=55

    There are several ways to estimate priors, including as a broad range, as an arbitrary value, or by not estimating them at all. Your suggestion to let the reader provide the prior is one such way, and is a very good one. Consider Goodman’s argument for the Bayes Factor as an objective measure of evidence, which can then be used to calculate what the prior must be in order to arrive at a posterior of, say, 0.95 ;-) This exposes how P-values tend to overstate posteriors, as it were, and also forces readers to confront their own prior estimates. There ain’t no hiding behind the false objectivity of frequentist statistics, as currently practiced in EBM. Experts, schmexperts: We need to get everyone’s priors to come out of the closet.
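    A quick numeric sketch of the Goodman-style calculation just described (the helper function is my own illustration; the Bayes factor of roughly 7 for a p-value of 0.05 is the usual back-of-the-envelope bound):

```python
def prior_needed(bayes_factor, target_posterior=0.95):
    """Prior probability required to reach the target posterior,
    given: posterior odds = prior odds * Bayes factor."""
    posterior_odds = target_posterior / (1 - target_posterior)
    prior_odds = posterior_odds / bayes_factor
    return prior_odds / (1 + prior_odds)

# A p-value of 0.05 corresponds, at best, to a Bayes factor of roughly 7
# in favor of the alternative. To end up 95% sure the treatment works,
# a reader must have started out already believing:
print(round(prior_needed(7.0), 2))  # ~0.73 -- far from a skeptical prior
```

    Working backwards from the desired posterior in this way forces the reader’s prior out of the closet, which is exactly the point.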

    The ‘problem’ of assigning a prior is a straw man that is the eternal excuse for not reckoning with the fatal flaw of frequentist statistics: the fallacy of the transposed conditional. We need Bayes because it is the solution to the problem of inductive inference (inverse probability), which is how biomedical science proceeds—whether we like it or not.

    There’s more, of course, but I’ll conjure it up for the next post.

  17. daedalus2u says:

    SC, there do happen to be science based treatments that can “cure” just about everything and which don’t have adverse side effects. But that is not what this is about. ;)

    “Informed consent” means understood and agreed to. If someone does not understand something, then they cannot give informed consent. They are “incompetent” as far as the law and the idea of informed consent are concerned. When research involves individuals who are incompetent, the criteria for ethical behavior and non-exploitation become more rigorous.

    To understand something, one must have a basis for that understanding, one must be able to tie that understanding back to facts and logic. If you can’t tie an idea back to facts and logic, then you don’t understand it.

    One needn’t understand every last detail about the treatment and how it impacts physiology, but one can’t base a clinical trial on a hunch or a gut feeling.

    This requirement isn’t about “science”, it is about ethics. It is about treating the human beings that are being subjected to the treatment as participants.

    There is a “standard of care” in the ethics of clinical trials, and the Declaration of Helsinki is what lays it out. If you violate that, you have committed malpractice and should be sanctioned. You should be sanctioned criminally.

  18. Mark P says:

    “Everyone deserves their day in court.”

    There would be no problem if it worked like a court. The loser in a court case pays for their error with a conviction.

    What Prof. Simon is asking is that the state pay for the errant party to defend themselves, regardless of the strength of their case, and then decline to punish them afterwards for being wrong.

    (In a civil case, where the decision is on the balance of probabilities, no lawyer would even start to defend something as hopeless as homeopathy, so I assume the analogy is to criminal proceedings.)

  19. tmac57 says:

    Windriven, I am on my 3rd day of your lime gummy bear treatment for acute appendicitis, and I have to tell you, I don’t feel so good ;(
    Should I try the red ones instead? The clear ones are my favorite though.

  20. Charon says:

    “is there a legitimate Bayesian way to incorporate information about scientific plausibility into a Cochrane Collaboration systematic overview”

    I’m sure there are some good ways, yes. But some cases are really, really easy. Like homeopathy: its prior is basically zero. Oh, not exactly zero, but let’s say p ~< 1e-30. Really. This comes from the entire weight of all experiments in thermodynamics, statistical mechanics, Newtonian dynamics, and quantum mechanics, and from experiments in other branches of physics that are consistent with these.

    Either homeopathy is wrong, or all of physics and chemistry is wrong. And we have hundreds of years and bazillions of experiments saying the odds are against homeopathy.

  21. daedalus2u says:

    The “day in court” is a terrible analogy. You show up in court with a nonsensical case like homeopathy and the lawyer bringing the case gets sanctioned by the court for bringing a frivolous case. The claim that “Obama was born in Kenya and is a deep cover Marxist plant” has a higher likelihood than does homeopathy, acupuncture or energy healing. Obama actually was born, and there actually is a place called Kenya. It doesn’t violate the laws of physics for Obama to have been born in Kenya; it just happens that he wasn’t. Magic water? Magic pins and needles? Magic hand waving?

    No one is trying to stop homeopaths from doing research, they can do stuff in the lab all they want. It is treating human beings like laboratory petri dishes that is so unethical. Get some laboratory data that shows something and then they can talk with the reality based community.

  22. JMB says:

    @rork

    What is subjective about deriving a prior distribution of the hypothesis that homeopathy will have no more effect than placebo? The calculation of the number of patients receiving one molecule of active ingredient is not a subjective process. It is not subjective to state that any response observed in a patient not receiving an active ingredient must be labeled a placebo effect, and the probability of a patient receiving an active ingredient becomes the bounds of prior probability. Isn’t that using the principle of maximum entropy?
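    The objective calculation JMB describes can be sketched concretely. This is a minimal illustration; the starting quantity (one mole of active ingredient) is an assumption chosen for the sketch:

```python
# Probability that a 30C homeopathic dose contains any active ingredient.
# 30C means 30 serial 1:100 dilutions, i.e. a factor of 100**-30 = 1e-60.
AVOGADRO = 6.022e23
starting_molecules = 1.0 * AVOGADRO   # assume one mole of mother tincture
dilution_factor = 100.0 ** -30        # 30C

expected_molecules = starting_molecules * dilution_factor
# For a Poisson count with so tiny an expectation, the probability that
# a dose contains even one molecule is essentially the expectation itself.
print(f"{expected_molecules:.1e}")    # ~6.0e-37: an objective bound on the prior
```

    Nothing subjective enters this arithmetic; it yields an upper bound on the prior probability that any observed response involves the “active” ingredient at all.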

  23. Jan Willem Nienhuys says:

    I don’t think there is a really scientific Bayesian way of factoring in an a priori distribution for ‘the hypothesis that homeopathy is more than a placebo’.

    Bayesian arguments run as follows: there are a priori odds before an experiment. These odds (expressed as a ratio a:b, or even as a fraction when b is not equal to zero) represent a ratio of chances in the frequentist sense.

    Frequentist sense means something like: in the long run you’ll find a certain fraction of events coming out in such and such a way. If a pharmaceutical company over the years finds that about 1 in 10,000 chemicals they invent or isolate can be developed into a useful drug, then the a priori odds that a chemical from their labs will be useful as a drug are 1:9,999.

    Similarly one can speak about the a priori odds that an arbitrarily selected woman of a certain age without a prior breast cancer diagnosis will have breast cancer.

    If we have a test with a certain fraction of known false positives and false negatives, then after the test result is in, we can calculate the a posteriori odds.
    The calculation is simple: multiply the a priori odds by a factor (if the test says yes, the factor is the true positive fraction / false positive fraction; otherwise it is the false negative fraction / true negative fraction).

    This is all standard frequentist probability theory. It shows that when the a priori odds are extremely low, even a large factor (= a tiny number of false positives) won’t be of much use, because almost zero times big may still be almost zero.

    Applied to testing hypotheses, the p-value is roughly the false positive fraction: the probability that the test says ‘yes’ this strongly when you are actually dealing with nonsense.
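    The updating rule just described, applied to the drug-company example, can be sketched numerically (the screening test’s error rates below are invented for illustration):

```python
def posterior_odds(prior_odds, true_pos, false_pos):
    """Posterior odds after a positive test: prior odds times the
    factor (true positive fraction / false positive fraction)."""
    return prior_odds * (true_pos / false_pos)

# A priori odds of 1:9,999 that a new chemical is useful, screened by a
# test with a 90% true-positive rate and a 5% false-positive rate
# (invented numbers). Even a "positive" screen leaves:
odds = posterior_odds(1 / 9999, true_pos=0.90, false_pos=0.05)
prob = odds / (1 + odds)
print(f"{prob:.4f}")   # ~0.0018 -- almost zero times big is still almost zero
```

    With a priori odds this low, even a test with a large likelihood ratio moves the posterior only slightly.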

    So far so good. But the ‘Bayesians’ propose to use some kind of intuitive plausibility rather than a frequentist chance. Then it stops (in my opinion) being probability theory. There is no frequentist interpretation of ‘the probability that homeopathy works’. Should we think of an unknown number of worlds, in some of which homeopathy works and in others not, and a supernatural agency throwing fair dice to select one of these worlds?

    Here is a more difficult example. The whole of physics pointed around 1880 to the possibility that one could measure the speed of the earth with respect to the ether. So Michelson and Morley tried to measure it in 1887 and failed. Now what was the a priori probability for the result ‘earth speed = 0’ then? The fact that the earth had a measurable speed with respect to the light of stars (stellar aberration) had already been discovered in 1725.

    So the Bayesian approach can metaphorically explain why it is useless to test impossible things.

    The position of the homeopaths is roughly that sickness is a kind of spiritual process, a disturbance in the life force, and that the homeopathic manner of preparation (shaking things at every dilution step by hitting the bottle against a book bound in leather) will produce the means to annul this disturbance. Materialistic science is just simply missing the whole point, just like science didn’t know about microbes or genes or DNA or relativity theory or elementary particles 200 years ago. Similarly others think that prayer helps and so on. I don’t believe it, but I think it is impossible to assign any kind of ‘probability’ in the frequentist sense to what I believe or not believe in this respect.

  24. Jann Bellamy says:

    Edzard Ernst, M.D., has a new article out on NCCAM funding of dubious therapies:

    Ernst E, Posadzki P, An independent review of NCCAM-funded studies of chiropractic. Clin Rheumatol. 2011 Jan 5. [Epub ahead of print].

    Abstract
    “To promote an independent and critical evaluation of 11 randomised clinical trials (RCTs) of chiropractic funded by the National Centre for Complementary and Alternative Medicine (NCCAM). Electronic searches were conducted to identify all relevant RCTs. Key data were extracted and the risk of bias of each study was determined. Ten RCTs were included, mostly related to chiropractic spinal manipulation for musculoskeletal problems. Their quality was frequently questionable. Several RCTs failed to report adverse effects and the majority was not described in sufficient detail to allow replication. The criticism repeatedly aimed at NCCAM seems justified, as far as their RCTs of chiropractic is concerned. It seems questionable whether such research is worthwhile.”

  25. S.C. former shruggie says:

    @Daedalus2u

    What I was trying to say is that it can’t be informed consent if you’ve misinformed the patient. When I was a young Philosophy student, I took, on recommendation of friends, some sleep hormone and herbal CAM products for insomnia. Plant products and hormones aren’t a priori incapable of having medical effects. I tried to verify their safety/efficacy by my own online searches in academic journals – this led me to quackademic research touting (what I now believe were false-) positive outcomes, and what I now know were a mix of low impact or fake journals, some associated with pseudo-professional bodies.

    The products did not warn of negative side effects, and oh yes, they had negative side effects! The research either did not speak of possible complications or glibly touted safety in the abstract and discussion sections.

    So if, in conducting CAM research, one misinforms the patient that “natural equals safe” or something to that effect, that is not informed consent. In my own limited experience with CAM, this is a common ethical infringement.

  26. daedalus2u says:

    Before doing a clinical trial, there has to be positive prior plausibility that the treatment will be safe and effective. Framing the problem as there being no analysis that shows the CAM treatment is a placebo is misplaced. There needs to be positive plausibility that the treatment is better than a placebo; meaning that there is positive plausibility that the treatment is safe and positive plausibility that the treatment will be effective.

    In the absence of positive prior plausibility that the treatment will be safe and effective, every clinical trial is unethical.

    This is different from essentially every other type of research. Doing measurements on the speed of light has no ethical implications. Much research in physics is done without the expectation of observing anything other than what is already expected. The proton is thought to be stable; current estimates put its lifetime at greater than ~10^33 years. The charges on the proton and antiproton are thought to be identical but opposite in sign, yet they have only been measured to be equal to one part in 10^8. No one expects to find them to be different, but having an actual measurement is worth having.

  27. Ken Hamer says:

    Purloined from another website:

    http://xkcd.com/808/

  28. JMB says:

    Here is a link to a paper that discusses the problem of prior probabilities. Edwin T. Jaynes argues that estimates of priors can be obtained objectively. The method does not always work, but it does allow the use of information from different scientific disciplines. The paper discusses the theoretical foundation of the principle of maximum entropy, transformation groups, and the precision required of the prior estimate to allow correct decisions after collecting new data. I would need a course in these subjects to be sure I understand it correctly, but to me it is a strategy for translating information from basic science into prior probabilities useful for clinical science (much as statistical mechanics relates quantum physics to chemistry and classical physics). I also like the convergence of probability theory, decision theory, and information theory.

    The bottom line is that we can eliminate all bias by eliminating all prior information. But by eliminating all prior information, we reduce the efficiency of scientific investigation and tend to conclude that we need more studies… rather than reaching a scientifically rational conclusion. This is an optimization problem: reduce bias but retain useful information.

    If you don’t like math, don’t bother with it.

    http://bayes.wustl.edu/etj/articles/prior.pdf

  29. rork says:

    “The ‘problem’ of assigning a prior is a straw man that is the eternal excuse for not reckoning with the fatal flaw of frequentist statistics.”
    Sometimes true, but not when a missionary Bayesian is complaining. When there is more than one person making a decision, there really is a problem agreeing on a prior. One can point to thousands of words and insinuate there was a solution to that problem hidden somewhere in them, but it would be better to repeat the kernel of the argument. Pointing out that some other guy has a problem is not the same as giving a solution. Let the real straw man take a bow.

    Jan Willem Nienhuys: I don’t imagine many worlds – I just act as if I’m uncertain about this world. I know the next coin toss will be either heads or tails, but I don’t know which. Mr. de Finetti never said my prior has to be “correct”, just that if I don’t use one, I will act like a fool (incoherent).

  30. “When there is more than one person making a decision, there really is a problem agreeing about a prior.”

    I suspect that we don’t really disagree, because there is no requirement that everyone agree on a prior. My point is that I want to know what those priors are: I want each person to specify a prior (or a range) and attempt to justify it. Alternatively, I want to see your previous proposal: if authors don’t specify priors, they should inform readers what prior(s) must be specified in order to conclude that a study provides some degree of evidence (none, weak, moderate, strong) for a hypothesis. Ideally I want to see both: authors offering priors with justifications as a standard part of papers; editorialists and readers discussing such priors—agreeing or disagreeing with authors—as a standard part of their responses to papers.

    @Jan Willem Nienhuys:

    I know what you mean, and I would never expect Bayesian inference to convince a true homeopathy believer. I do think, however, that if rational physicians and scientists were asked to assign such a prior, their answers, whatever they may be, would be more enlightening than what we have now. Most would agree with me that the prior is similar to what would reasonably be assigned to a trial of perpetual motion machines (and for largely the same reasons). Others, less versed in the claims of homeopathy (such as the pediatrician whom I quoted in Part II of this series), might venture a prior as high as 0.1, but if so they would find that even that generous number, applied to past homeopathy trials, yields posteriors that are not compatible with specific efficacy for homeopathic preparations.

    Some would probably agree with what I think is your point: that the question itself is not scientific, and is thus pointless to submit to an attempt at scientific investigation—which would be fine with me, because that is the same recommendation that follows from assigning a prior of approximately 0, albeit for different reasons.

    Some advocate-researchers—Jonas, Wallach, etc.—might assign much higher priors, but they would need to justify those with some fast-talking. Consider that in the current, frequentist context, most of them pay lip service to the idea that homeopathy is implausible, but cite ‘objective’ trial results, p-value style, to support their contention that there must be something to it (just as psi aficionados do). Who knows: maybe a few of them would recognize that this has been a mere reflection of the objectivity myth.

Comments are closed.