Evidence-Based Medicine, Human Studies Ethics, and the ‘Gonzalez Regimen’: a Disappointing Editorial in the Journal of Clinical Oncology Part 2

NB: If you haven’t yet read Part 1 of this blog, please do so now; Part 2 will not summarize it.

At the end of Part 1, I wrote:

We do not need formal statistics or a new, randomized trial with a larger sample size to justify dismissing the Gonzalez regimen.

In his editorial for the JCO, Mark Levine made a different argument:

Can it be concluded that [the] study proves that enzyme therapy is markedly inferior? On the basis of the study design, my answer is no. It is not possible to make a silk purse out of a sow’s ear.

That conclusion may be correct in the EBM sense, but it misses the crucial point of why the trial was (ostensibly) done: to determine, once and for all, whether there was anything to the near-miraculous claims that proponents had made for a highly implausible “detoxification” regimen for cancer of the pancreas. Gonzalez himself had admitted at the trial’s inception that nothing short of an outcome matching the hype would do:

DR. GONZALEZ: It’s set up as a survival study. We’re looking at survival.

SPEAKER: Do you have an idea of what you’re looking for?

DR. GONZALEZ: Well, Jeff [Jeffrey White, the director of the Office of Cancer Complementary and Alternative Medicine at the NCI—KA] and I were just talking a couple weeks ago. You know, to get any kind of data that would be beyond criticism is—always be criticism, but at least three times.

You would want in the successful group to be three times — the median to be three times out from the lesser successful groups.

So, for example, if the average survival with chemo, which we suspect will be 5 months, you would want my therapy to be at least — the median survival to be at least 15, 16, 17 months, as it was in the pilot study.

We’re looking for a median survival three times out from the chemo group to be significant.

Recall that the median survival in the Gonzalez arm eventually turned out to be 4.3 months.

It would not surprise me if Dr. Levine, the author of the JCO editorial, were to take issue with what I’ve written so far. It would be remiss of me, for example, to leave readers with the impression that he called for another, larger trial, performed not as a cohort study but as a randomized, controlled trial, in order to prove whether (or not) “enzyme therapy is markedly inferior” to chemotherapy for cancer of the pancreas. He specifically did not make that recommendation; instead he argued, reasonably, “Given the scarcity of resources for cancer research, there are many more important questions to address.” I’ll come back to that statement, because although it hints that Dr. Levine has intuitively recognized the need for invoking prior probability when considering health research policy, he still appears confused by a strict EBM perspective.

Human Studies Ethics: why Science Matters

Dr. Levine may also feel misrepresented by my emphases on the pseudoscientific nature of the Gonzalez regimen and the ethics of the trial. Near the beginning of his editorial, possibly to distance himself from those very issues, he wrote:

The goal of this discussion is not to debate the merits of conventional medicine versus CAM.

Let’s cut to the chase: “the merits of conventional medicine versus CAM,” at least insofar as the phrase refers to public perceptions or individual choices of patients or even physicians, are not the issues here. The overwhelming issues are ethical and scientific, and the two are not distinct: human studies ethics must be informed by science. I’ve explained this in some detail elsewhere in this long series. In summary, the Gonzalez trial repeatedly violated the Helsinki Declaration and other human studies treatises, including those established by the NIH itself and offered by NIH ethicists: it made Gonzalez, who is neither “a scientifically qualified person” nor “a clinically competent medical person,” responsible for human subjects; it did not minimize risks or enhance potential benefits, nor did “the potential benefits to individuals and knowledge gained for society … outweigh the risks” (it was a “trifling hypothesis”); it did not have “respect for enrolled subjects,” as evidenced by the horrible experience of at least one subject whose story was told by his friend, mathematician Susan Gurney;  it did not “conform to generally accepted scientific principles…based on a thorough knowledge of the scientific literature, other relevant sources of information, and on adequate laboratory and, where appropriate, animal experimentation”; instead, it justified itself by citing a flawed, non-consecutive case series offered by Gonzalez himself, which both Dr. Peter Moran and I deconstructed with only a bit of effort (see also here), and by the “popularity” fallacy.

Perhaps most disturbing, even to people who aren’t familiar with the literature of human studies ethics, is that the trial investigators almost certainly failed to provide prospective subjects with the knowledge necessary for informed consent. Such knowledge would have consisted of the cancer biology known at the time, which overwhelmingly predicted that Gonzalez’s regimen would have no beneficial effect on cancer of the pancreas; of an accurate, honest assessment of his case series; and of his history of peddling pseudoscientific claims and providing incompetent care. I’ve written “almost certainly” because I haven’t seen more than a short excerpt, possibly from the trial’s consent form, although I’ve tried to do so (hint: if anyone reading this has a copy, please send it on). I can’t imagine, however, that the consent form included comprehensive information, because if it had, no competent IRB would have approved the study and very few subjects would have enrolled.

The history told by Susan Gurney about her friend, moreover, strongly suggests that he had not been adequately informed:

I told him that I was going to attend the annual conference of the American Society of Clinical Oncology (ASCO) and would report on other options to him. Once at ASCO, I learned quickly and definitively that the Gonzalez protocol was a fraud; no mainstream doctors believed it was anything else and they were surprised that anyone with education would be on it…

By remaining neutral about the Gonzalez regimen, physicians at Columbia Presbyterian who place patients in this trial effectively preclude them from starting other options, because of the demands it places on patients and their families. If physicians believe they are truly being neutral by not fully explaining the Gonzalez protocol’s nature to cancer patients, it is they who are in denial.

Michael Specter, who wrote about Gonzalez for the New Yorker in 2001, reported similar faux neutrality from Karen Antman, then the chief of Columbia’s division of medical oncology and a past president of the American Society of Clinical Oncology, who would be a co-author of the eventual JCO report. First, she showed him the excerpt that I mentioned above, which Mr. Specter characterized as “instructions that Columbia gives patients interested in the Gonzalez study”:

Many Americans who develop advanced cancer for which standard treatments have little to offer, turn to alternative or complementary therapies….There is no current conventional medical support for the theories and assumptions underlying the use of Nutritional Therapy. The Columbia College of Physicians and Surgeons does not support its use except as part of a properly conducted clinical trial.

If that bland statement was all prospective subjects were told about “conventional medical support” for the Gonzalez regimen, it hardly constituted a responsible explanation. Mr. Specter himself, who is not a physician or scientist, sensed that as well:

I asked if it would be right to infer that she thought the trial wouldn’t work. She shook her head. I asked if she had an idea why it might work. She said no. Did she have any opinions at all about the potential of nutritional therapy or the Gonzalez regime? “I have lots of opinions,” she told me, “but none of them matter.”

Except that such opinions do matter, as any human study investigator is expected to know. Susan Gurney explained why they would have mattered to her friend:

He was an artist—a painter and a sculptor—and he had little scientific knowledge. When Dr. Chabot was neutral about the Gonzalez protocol, and when Dr. Antman said nothing adverse about it, my friend assumed that they must genuinely believe that the treatment could work.

The trial’s Principal Investigator, John Chabot, was also reluctant to offer subjects accurate information about the Gonzalez regimen, as evidenced by a 1999 “Dear Prospective Patient” letter. These findings—that oncologists in general thought that “the Gonzalez protocol was a fraud,” but that the Columbia investigators failed to disclose that fact, and the reasons for it, to subjects—expose another ethical violation that I’ve previously discussed: the trial lacked clinical equipoise.

Finally, it is clear, as discussed here and elsewhere in this series, that the underlying impetus for the trial was political, not scientific. Shouldn’t that have also been included in the consent form?

EBM Gets it Wrong

Keeping all that in mind, let’s revisit the quotation from Dr. Levine explaining the goal of his editorial, this time putting it into the context of the entire paragraph:

I am fortunate to have spent my entire academic career at McMaster University (Hamilton, Ontario, Canada), the birthplace of evidence-based medicine, and to have had the privilege of learning from colleagues such as David Sackett, MD, and Gord Guyatt, MD. The goal of this discussion is not to debate the merits of conventional medicine versus CAM. Rather, the objectives are to consider whether these two paths of medicine should be held to the same standards of evidence in terms of clinical and policy decision making and to determine whether a model that historically has been based on weak evidence can be reconciled with the new paradigm of the requirement for high-quality evidence.

Yes! A thousand times yes! They should be held to the same standards of evidence. But that means that they should be required to meet the same preliminary standards before being accepted for high-quality human trials, and those standards are about scientific evidence: evidence from basic science, evidence from the biology of the disease in question, evidence from animal studies, and evidence from whatever preliminary human case reports or uncontrolled trials there may be. The Gonzalez regimen failed all of those requirements, and the trial was thus scientifically unjustified and unethical from the start.

Dr. Levine, however, is misled by the limited EBM understanding of “evidence,” as is suggested by the last line of the paragraph. Later, he leaves no doubt:

This is one of the most challenging editorials I have had to write. Wearing the hat of a clinical epidemiologist, it is difficult not to find fundamental flaws in both of these trials. Not too long ago, I would have dismissed them out of hand. However, as a clinician, I recognize that many of my patients seek CAM therapies, and many are using them and afraid to tell me. I am troubled by the lack of evidence for many of these therapies and the costs that patients incur in using them. In the past, there was a reluctance to subject CAM therapies to the same standards of evaluation as those for conventional therapies. Should CAM therapies undergo the same type of rigorous evaluation to which conventional therapies are subjected? Absolutely! When authors submit clinical trials to JCO, they are instructed to follow specific guidelines. These guidelines should be the same whether the intervention is chemotherapy, radiation, an herbal remedy, or a psychosocial intervention.

Yes, there are specific guidelines for publishing reports in the JCO, and in the case of the Gonzalez report, they were violated. I wonder why Dr. Levine didn’t mention CONSORT? Clearly, the “type of rigorous evaluation” to which he is referring is a clinical trial. Otherwise, there would be little trouble subjecting scientifically incoherent methods (not ‘CAM therapies,’ which begs the question) to the same standards of evaluation as those for scientifically coherent methods (not ‘conventional,’ which has irrelevant social connotations). The beginning of such an evaluation is to judge how plausible the method might be, and the way to do that is by considering scientific evidence. This, Dr. Levine, is evidence.

Similarly, there was not merely a “lack of evidence” for the Gonzalez regimen; there was abundant evidence against it. As suggested above, Dr. Levine must understand this at some level: although he argued, incorrectly and irrelevantly, that the trial had failed to prove that “enzyme therapy is markedly inferior” to gemcitabine, he nevertheless opined that “Given the scarcity of resources for cancer research, there are many more important questions to address.” How could he assign even a qualitative measure of “importance” without considering prior probability?
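The arithmetic behind that rhetorical question is worth making concrete. Here is a minimal sketch of my own (a hypothetical illustration, not anything from Dr. Levine’s editorial or the JCO report): by Bayes’ theorem, the lower the prior probability that a treatment works, the more likely it is that a “positive” trial at conventional statistical thresholds is a false positive. The 5% false-positive rate and 80% power are assumed, conventional values.

```python
def posterior_true_effect(prior, alpha=0.05, power=0.80):
    """P(effect is real | trial is positive), by Bayes' theorem.

    prior: prior probability that the treatment works
    alpha: false-positive rate of the trial (conventional 0.05)
    power: probability the trial detects a real effect (conventional 0.80)
    """
    true_positive = prior * power          # real effect, trial positive
    false_positive = (1 - prior) * alpha   # no effect, trial positive anyway
    return true_positive / (true_positive + false_positive)

# Compare a plausible conventional drug (prior ~0.5) with a regimen
# whose proposed mechanism contradicts known biology (prior ~0.001):
for prior in (0.5, 0.1, 0.01, 0.001):
    print(f"prior {prior:>6}: posterior {posterior_true_effect(prior):.3f}")
```

Under these assumptions, a positive trial of a 50/50 proposition is persuasive (posterior above 0.9), whereas a positive trial of a one-in-a-thousand proposition would still most likely be a false positive. That is why “importance,” and trial interpretation generally, cannot be assessed without prior probability.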

Yet again he retreats to the safe haven of EBM, failing to recognize the ethical mischief wrought by ignoring science:

Chabot et al should be congratulated on their persistence and determination to compare pancreatic enzymes versus [sic] chemotherapy.

No! A thousand times no! Compare Dr. Levine’s statement with my own:

The authors of the report should be scrutinized by the OHRP and the NIH. They should be banned from being investigators in any NIH-sponsored trial for some finite period, and their involvement in the Gonzalez trial should become a noticeable smear on their reputations.

Here is Dr. Levine’s final paragraph. Rather than comment further, I’ve provided pertinent hyperlinks:

Recently, there has been a shift from single CAM modalities for cancer management to a more comprehensive approach called integrative oncology, which is “an evolving, evidence-based specialty that uses CAM therapies in concert with biomedical cancer treatments to enhance its efficacy, improve symptom control, alleviate patient distress and reduce suffering.” I am encouraged that the leaders of the Society of Integrative Oncology (Dundas, Ontario, Canada) have developed evidentiary levels to gauge the strength of evidence for CAM therapies. It is not surprising that these levels are based on the foundational principles of evidence-based medicine established by Sackett et al. I look forward to future well-designed clinical trials that provide high-quality evidence on how CAM therapies can improve the quality of life of our patients.

The “Gonzalez Regimen” Series:

1. The Ethics of “CAM” Trials: Gonzo (Part I)

2. The Ethics of “CAM” Trials: Gonzo (Part II)

3. The Ethics of “CAM” Trials: Gonzo (Part III)

4. The Ethics of “CAM” Trials: Gonzo (Part IV)

5. The Ethics of “CAM” Trials: Gonzo (Part V)

6. The Ethics of “CAM” Trials: Gonzo (Part VI)

7. The “Gonzalez Trial” for Pancreatic Cancer: Outcome Revealed

8. “Gonzalez Regimen” for Cancer of the Pancreas: Even Worse than We Thought (Part I: Results)

9. “Gonzalez Regimen” for Cancer of the Pancreas: Even Worse than We Thought (Part II: Loose Ends)

10. Evidence-Based Medicine, Human Studies Ethics, and the ‘Gonzalez Regimen’: a Disappointing Editorial in the Journal of Clinical Oncology Part 1

11. Evidence-Based Medicine, Human Studies Ethics, and the ‘Gonzalez Regimen’: a Disappointing Editorial in the Journal of Clinical Oncology Part 2

The Prior Probability, Bayesian vs. Frequentist Inference, and EBM Series:

1. Homeopathy and Evidence-Based Medicine: Back to the Future Part V

2. Prior Probability: The Dirty Little Secret of “Evidence-Based Alternative Medicine”

3. Prior Probability: the Dirty Little Secret of “Evidence-Based Alternative Medicine”—Continued

4. Prior Probability: the Dirty Little Secret of “Evidence-Based Alternative Medicine”—Continued Again

5. Yes, Jacqueline: EBM ought to be Synonymous with SBM

6. The 2nd Yale Research Symposium on Complementary and Integrative Medicine. Part II

7. H. Pylori, Plausibility, and Greek Tragedy: the Quirky Case of Dr. John Lykoudis

8. Evidence-Based Medicine, Human Studies Ethics, and the ‘Gonzalez Regimen’: a Disappointing Editorial in the Journal of Clinical Oncology Part 1

9. Evidence-Based Medicine, Human Studies Ethics, and the ‘Gonzalez Regimen’: a Disappointing Editorial in the Journal of Clinical Oncology Part 2

10. Of SBM and EBM Redux. Part I: Does EBM Undervalue Basic Science and Overvalue RCTs?

11. Of SBM and EBM Redux. Part II: Is it a Good Idea to test Highly Implausible Health Claims?

12. Of SBM and EBM Redux. Part III: Parapsychology is the Role Model for “CAM” Research

13. Of SBM and EBM Redux. Part IV: More Cochrane and a little Bayes

14. Of SBM and EBM Redux. Part IV, Continued: More Cochrane and a little Bayes

15. Cochrane is Starting to ‘Get’ SBM!

16. What is Science? 

Posted in: Cancer, Clinical Trials, Health Fraud, Medical Academia, Medical Ethics, Politics and Regulation, Science and Medicine


30 thoughts on “Evidence-Based Medicine, Human Studies Ethics, and the ‘Gonzalez Regimen’: a Disappointing Editorial in the Journal of Clinical Oncology Part 2”

  1. daijiyobu says:

    Dr. A., speaking of less than scientifically-based medicine / CAM & oncology, and since you are a contributor to Naturowatch etc., I thought you might get a kick out of this [perhaps after you've pulled your jaw up off the floor, that is].

    I was browsing the web page of the ND schools’ consortia when I came across AANMC’s interview of ND Rubin, who is “board-certified in naturopathic oncology [...] the founding president of the Oncology Association of Naturopathic Physicians”, and who had this to say specifically about science and naturopathy:

    “[question by] AANMC: what is the biggest challenge in your work? [...his answer:] one of the greatest challenges we face is the widespread public belief in the scientific method [...] we’re too reliant on the scientific method, and it stands in our way of forging ahead.”

    And we’re told “we have established the modern standard for our field. It creates legitimacy, validation and safety.”



  2. cervantes says:

    Apart from the ethical issues regarding trial participants — which could be addressed in principle, I think, even if it wasn’t done here — this largely comes down to an issue of resource allocation. People do use non-scientific therapies, they do pay for them, they do get (probably false) hope from them, they may forgo science-based treatments in favor of them. If we believed that rigorous trials that showed non-efficacy could put a stop to this, or at least reduce it, there is a case to be made for doing them, even at the expense of funding for trials of more promising methods and publication space in journals.

    This seems to be the case the editorial is trying to make. However, we see that advocates of treatments that do not have scientific support are not deterred by such evidence as can plausibly be gathered. There is always some reason why it is inadequate and their imaginations are more powerful. So it does seem a waste of resources to test implausible treatments. However, it is ultimately an empirical question whether useless treatments can in fact be discouraged by evidence, and should not be approached ideologically.

  3. @cervantes,

    Peter Moran (website linked above), who also has a learned interest in this topic, has argued that there is likely to be a large group of “fence-sitters” among possible patients of practitioners such as Gonzalez, and that it is they who will likely benefit from such studies, even if true believers such as Gonzalez and Oderb reject disconfirming findings. This is, of course, entirely plausible. Certainly you are right that such a possibility is an empirical question.

    A justification based solely on predicted societal benefit but not on science can only go so far, however: it ends at the point at which consent forms are dishonest, clinical equipoise is violated, and individual research subjects are put at unnecessary risk. Human studies treatises are unanimous in the opinion that risks to individual subjects trump such societal considerations, even if the latter are legitimate. The end doesn’t justify the means. Look here (under “The Fallacy of Popularity”) and here for quotations and more discussion. Peruse the comments below those posts for more.

  4. JMB says:

    Another great article. Thank you.

    SBM should campaign that any study funded by NCCAM should include the following in the informed consent:

    “This study is not being funded by the traditional divisions of the NIH using a scientific basis for selection for funding. It is being funded by NCCAM, an agency created in the NIH by political deals, to relax the scientific criteria for funding. If you agree to participate in this trial, you are now relying on political promises for the ethics of the trial, and the success of the trial.”

    SBM is more ethical than EBM, because it attempts to maximize the use of information that is available from scientific observations. The approach of EBM is like that of an ostrich, burying its head in the sand of clinical trials, ignoring the blue sky of basic science above. It is true that sand is more solid than blue sky. However, the weight of the sand the ostrich buries its head under is less than the weight of the blue sky.

    Integrative medicine is a detour from the most direct path to the future of medicine. The future of medicine is for clinical decisions to become more science based, utilizing the increase in information that is becoming available to factor into the decision.

  5. nybgrus says:


    I would also add that in cases where the ethics are NOT violated, such negative CAM studies could and SHOULD be used to refuse dissemination of such treatment modalities. I.e. – insurance shouldn’t cover it and reputable hospitals and any place with intellectual honesty shouldn’t “integrate” it into their practice. Of course, we don’t live in the land of unicorns, pixie dust, and intellectual honesty so the reality is severely mitigated in regard to such negative studies.


  6. Ken Hamer says:

    Is there, within the halls of academia, a place where you can learn how to become “dumber?” A place to learn how to be stupid?

    So many of these “studies” seem so preposterously stupid that even I, with essentially no medical training whatsoever (first aid courses excepted), can clearly see that the proponent is an outright fraud. It is beyond credible belief that any of the academics noted above could *not* see this as an outright and foolish scam.

    I get the feeling I could get approval for a study to determine if a Leprechaun doing an Irish Jig on someone’s forehead is more effective than the same Leprechaun doing a Scottish Jig as a treatment for short fingers. Seems like so long as I filled out the forms correctly I’d get the go-ahead.

    I just cannot fathom what process these supposedly educated people go through to justify proceeding with these potentially dangerous, thoroughly wasteful and profoundly stupid premises.

    Methinks the science based medicine community needs its own Christopher Hitchens, to publicly call morons “morons.”

  7. superdave says:

    What a sad and unfortunate story.

  8. daedalus2u says:

    You have to get the money out of it. If expenses related to CAM could not be charged to insurance, or to overhead, or written off, the practices would disappear overnight.

  9. trrll says:

    Clearly there is value in testing these “alternative” therapies, if only because the results may help to dissuade desperate patients from wasting money on nonsensical therapies that won’t help and might make them even worse.

    But that does not offer a justification for misleading subjects, even by omission. So a potential participant in such a trial would have to be told, in plain language, “There is no evidence from animal studies to support this treatment. There is no known biological mechanisms by which this treatment could benefit you. The developer of this treatment claims that it is therapeutic for your condition, but is able to offer only anecdotal testimonials, not scientific evidence, to support his claims. The consensus among almost all oncologists is that this kind of treatment will not help you, and could possibly harm you or make your condition worse.”

  10. trrll,

    You can’t assume that the results will dissuade anyone. If I am a desperate cancer patient and my oncologist tells me all of these things about the Gonzalez regimen and I go see Gonzalez anyway because I believe that my oncologist is in the grip of Big Pharma and is not telling me the whole truth, then adding “and by the way we tested it and it doesn’t work” is unlikely to change my mind.

    Testing improbable and risky interventions — even with informed consent — is only justifiable if there is reason to believe that the results will be useful.

    Do we have reason to believe that negative outcomes of trials of improbable and risky interventions affects anyone’s decision-making process? Is there evidence for or against?

  11. trrll says:

    And you can’t assume that the results won’t dissuade anybody, unless you can provide evidence demonstrating that. Not dissuading *anybody* is a pretty extreme claim to make, so you would have to provide a pretty high standard of evidence to justify such a claim.

    In the absence of such evidence, the “prior probability” certainly favors the view that providing higher quality information will affect people’s decision-making to some extent.

  12. trrll,

    I completely agree with not making assumptions either way, but when discussing irrational decision-making I really wouldn’t know where to start with “prior probability.” It would have to be pretty high to justify a trial of something as risky as the Gonzalez regimen.

    That’s why I was musing about the presence or absence of evidence to this effect. I really don’t know what the answer is, but I know it’s the subject of research under headings like “cognitive deficit model of irrational decision-making.” Under the cognitive deficit model, people who make irrational decisions simply lack information. Provide the information and all is well. The thing is, that works well as long as information is the limiting factor (which it very often is).

    What I suspect is that information is not the limiting factor when a compassionate oncologist tells a desperate patient the following, but the desperate patient seeks out the intervention anyway: “There is no evidence from animal studies to support this intervention. There are no known mechanisms by which this intervention could benefit you. The provider of this intervention claims that it is therapeutic for your condition but is able to offer only testimonials and no evidence to support the claim. The consensus among oncologists is that this intervention cannot help you and is likely to harm you.”

    What the oncologist is saying here is pretty clear. The oncologist appears to know what they are talking about. We’re talking specifically about a desperate patient who hears this explanation and ignores it. I don’t feel comfortable assuming that someone who ignores such clear input is operating on a pure information deficit. Maybe they are. I don’t know. But I wouldn’t ask a potential research subject to undergo a harmful intervention that couldn’t help them unless I was very sure that their sacrifice to prove that the intervention was harmful would result in saving other people.* And to be that sure, I would need evidence.

    That’s why I’m asking: is anyone familiar with the research on irrational medical decision-making and information deficits?

    * Which is moot, because if an intervention cannot benefit the research subject then the research is unethical even if other people could benefit. (Do I have that right?)

  13. vicki says:


    Yes, cutting off the NHS/insurance/etc. money would help, but it wouldn’t solve the problem. I suspect that’s especially true in the U.S., where someone can think “I can pay $25 copay and waste two hours in my doctor’s waiting room, and maybe get a prescription that might not be covered, or I can just go go buy $nostrum.” Because she’s spending money on either.

    Beyond that, if people have disposable income, they might figure that a noncovered pill or other treatment is a reasonable way to spend some of it, instead of on a trip to the movies or even a vacation somewhere.

    I’ve basically spent my bonus from work on a series of sessions with a personal trainer: she’s giving me exercises specific to what I want to achieve, which includes improving my balance and healing my knees, and providing valuable feedback and encouragement. That’s not covered by insurance. I think it’s a good use of my money.

    I also know that where I’m spending that money on something that has a demonstrable physical modality—do this exercise regularly and it will strengthen my hamstrings, do that one and my balance improves—someone else may think it equally reasonable to spend theirs on something woo-ish.

    It’s a lot easier to argue “don’t spend the money on acupuncture, you need to buy gasoline so you can get to work” than “you need another pair of dressy shoes.” Sometimes the woo-sters are competing for the mortgage and grocery money, but sometimes it’s coming out of the “frivolous money” part of the budget.

  14. trrll says:

    Allison, before assuming that anybody who would consider such a treatment is totally irrational, and therefore unlikely to be influenced by actual evidence, you need to put yourself in the mindset of a patient (or the patient’s spouse or parent) facing an incurable, rapidly fatal disease. The reasoning is as follows:

    “OK, there is no convincing evidence that this treatment works, but so what? There’s no evidence that it doesn’t. The oncologist doesn’t think that it will, but he’s never tested it, he’s just guessing. And he has zilch to offer me in its place. Yes, it would be better to have animal studies, but it’s not as if I have the time to wait for them. And as for the lack of mechanism, there are many, many effective drugs that were in use for decades, if not centuries, before their mechanisms of action were understood. The guy claims to have had some successes; that’s not strong evidence, but it’s something. And it’s not as if I’ve got a whole lot to lose if it doesn’t work.”

    That’s not an irrational way to think. Which means that a real study showing that the treatment is indeed worse than useless would very likely influence some people’s decisions.

    Of course, whether you could actually recruit subjects for such a study if you were honest with them about the fact that you don’t expect it to work is another question. But I suspect that major impediment would not be the treatment, but rather the possibility of ending up in a placebo group.

  15. pmoran says:

    My obsevrations over some decades suggest that the negative studies for individual CAM cancer methods does have some effect upon their popularity, but without influencing overall CAM usage much. Other methods will simply be substituted. It is true that none are ever completley abandoned, although some may be nearly so for a period e.g. laetrile, shark cartilage.

    Still, do we always put the interests of science first, or sometimes that of the sick person before us? These perspectives are only partially confluent, as clearly shown by David Gorski’s recent post on clinical equipoise with a new drug.

    Good medical science is error-adverse, inclined to favour the negative until solidly proved otherwise.

    Our patients want medical practice to be more tolerant of doubt wherever there is even only the potential for significant gain, little to lose, and a treatment method showing promise.

    What should we do about LESS promising methods – the “alternatives” and CAM?

    It would be a bit extreme and foolish to entertain such questions as “whose equipoise (or ethical concerns, or perceptions re risk)? — the medical scientist’s, the influential alternative practitioner’s, or the patient’s?”.

    Nevertheless the recent upsurge of “alternatives” after a century of truly extraordinary medical progress suggests that we do have to keep on earning public trust. It is for this reason that we may need to occasionally show that current medical knowledge and scientific norms do have the predictive power that we claim for them in relation to innumerable dubious treatment methods. It may not be enough to merely be right in our own minds.

    I can’t prove that this is a worthwhile investment of resources, either, although I suspect that even the contributors to this list feel reassured whenever studies (such as the Gonzalez one) confirm prior expectations.

  16. pmoran says:

    Apologies for poor grammar in the above. Hope the point comes through. (Got to be on the first tee in 30 minutes.)

  17. pmoran says:

    “Good science is error adverse”.

    That should be “error-averse”, of course.

  18. JMB says:

    It is for this reason that we may need to occasionally show that current medical knowledge and scientific norms do have the predictive power that we claim for them in relation to innumerable dubious treatment methods. It may not be enough to merely be right in our own minds.

    Isn’t it odd that SBM will have to prove itself by repeated experimentation, when the argument in EBM against the reliability of basic science information is only a mental exercise of qualitative review of past history? I could take the same observations in a mental exercise and state that past medical breakthroughs (such as a bacterial agent in most gastric and duodenal ulcers) did not change basic science (it was already known that bacteria could survive harsh conditions, and caused ulcers), but indicated that our translation of basic science into clinical science was incorrect. Therefore, is it better to ignore basic science information, or to improve our methods for translating basic science discoveries into a priori probabilities for clinical research?

  19. pmoran says:

    JMB: “I could take the same observations in a mental exercise and state that past medical breakthroughs (such as a bacterial agent in most gastric and duodenal ulcers) did not change basic science (it was already known that bacteria could survive harsh conditions, and caused ulcers), but indicated that our translation of basic science into clinical science was incorrect.”

    Not sure about that example, JMB. These are not your usual simple infective ulcers, for eliminating gastric acid could heal them whether H. pylori was treated or not.

    I am also not sure how the connection between an organism and ulcers could have been securely established prior to the advent of fibreoptic endoscopy a mere few decades ago. That for the first time allowed accurate diagnosis of ulcers and gastritis, as well as permitting routine biopsy of the upper GIT.

    Eventually, I hope, we will not need to be referring to two kinds of supposedly evidence-based medicine, surely to the confusion of many not intimately involved in medicine.

    Some EBM enthusiasts merely need to be more aware of the limitations of the clinical trial process (not the logic) when it comes to some kinds of questions. As it happens, the more studies you perform, and the higher the quality you strive for, the more EBM seems to home in on the answer that prior plausibility (i.e. “other evidence”) would predict.

  20. JMB says:


    Perhaps I should have referenced some previous SBM posts to make my argument more grounded.

    It appears to me that EBM tends to relegate basic science to a level of unreliable evidence based on past discoveries that caused paradigm shifts in the strategy of treatment of disease.

    From Dr. Novella’s article, “Plausibility in Science-Based Medicine” (I tend to view plausibility as an a priori probability some threshold above zero, and implausibility as a priori probability << .001, approaching zero),

    Essentially any claim that is the functional equivalent to saying “it’s magic” and would, by necessity, require the rewriting not only of our medical texts, but physics, chemistry, and biology, can reasonably be considered, not just unknown, but implausible.

    Dr. Katz and others would like us to believe that this category does not exist, based upon the premise that we do not yet understand enough science to make such judgments. They often invoke vague references to quantum mechanics or the counter-intuitive nature of subatomic physics or cosmology to make their point. But this is an anti-intellectual and unscientific approach – it denies existing knowledge.

    And this from Dr. Atwood’s article, “The 2nd Yale Research Symposium on Complementary and Integrative Medicine. Part II”:

    He listed a few innovations in the recent history of medicine that were “heresy” when first proposed, suggesting that if plausibility had ruled the day they would never have emerged:

    1. H. pylori shown to be the cause of peptic ulcers;

    So I was referring to the arguments used by EBM to demote the importance of basic science because of the limitations of current scientific knowledge, the challenge to our notions of cause and effect from quantum physics and relativity, and the possibility that breakthroughs in treatment might have been ignored because of implausibility. That argument is what I was calling a mental exercise. It is an unproved hypothesis.

    It is possible to counter that argument with a mental exercise, the components of which have already been outlined in many articles on this site. As you were suggesting, SBM may be expected to counter the unproved hypothesis of EBM by experimental proof. That would require some years for experimental validation.

    I added my own opinion, that it is possible to analyze the process by which scientific relationships observed in the basic sciences can be translated into the clinical sciences. The same analysis can be performed in reverse, in which we relate clinical observations to basic science.

    The history of some errors in the process of relating basic science to clinical science does not disprove that we can use plausibility (or a priori probability above a threshold such as 0.001) to assess whether an experiment should be performed. We need to know the number of successes versus the number of failures before a valid argument exists for demoting basic science as unreliable.
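    The threshold argument above can be made concrete with a toy Bayesian calculation. This is an editorial illustration, not from the comment: the 80% power and 0.05 false-positive rate are conventional assumed trial parameters, and the priors are arbitrary examples.

```python
# Toy illustration: why a near-zero prior probability matters even after a
# "positive" clinical trial.  Assumes a trial with 80% power (probability of
# a positive result if the treatment works) and alpha = 0.05 (probability of
# a false positive if it doesn't).

def posterior_given_positive(prior, power=0.80, alpha=0.05):
    """Bayes' rule: P(treatment works | one positive trial)."""
    true_pos = power * prior
    false_pos = alpha * (1.0 - prior)
    return true_pos / (true_pos + false_pos)

for prior in (0.5, 0.1, 0.001):
    post = posterior_given_positive(prior)
    print(f"prior {prior:>6}: posterior after one positive trial = {post:.4f}")

# With a plausible treatment (prior 0.5) a positive trial is strong evidence;
# with an implausible one (prior 0.001) the posterior stays under 2%, i.e.
# a single positive trial is still most likely a false positive.
```

Under these assumed numbers, the same positive trial result moves a 0.5 prior to about 0.94 but moves a 0.001 prior only to about 0.016, which is the quantitative core of the plausibility argument.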

    EBM uses anecdotal evidence to argue that the basic sciences are so unreliable that implausible treatments are worthy of experimentation. Seems like a catch-22 scenario to me.

  21. pmoran says:

    JMB, if I am interpreting you correctly, my comment would be that I don’t see the EBM/SBM divide as being of much significance at all to the advancement of mainstream medical science.

    It is a sideshow, an artefact entirely generated by CAM, which, with its spectacular collection of equally unvalidated theories permanently sustained by placebo influences and other illusions, is itself a cautionary example of what happens whenever medical science does relax its normally cautious and painstaking approach to medical truth.

    As I have said before, good science SHOULD err on the side of caution. Draw the line in the sand somewhere else and we might very rarely speed up the emergence of a truly useful treatment by a few years, but at the cost of far more numerous instances where resources are wasted and patients are harmed from methods that don’t do what they are supposed to. The illusions I refer to are powerful, and even of value to some, but someone, somewhere has to have a solid grip on what is going on.

    That said, what individual patients may choose to do at their own expense and risk and even with our (quite knowing) support is another story, and there is also a case for provisional use of promising treatments on preliminary evidence if the risk/benefit profile is sufficiently compelling and the outlook is otherwise grim. CAM does not qualify for whole-hearted endorsement. It presents its methods as fully fledged treatments, despite a level of evidence that would not class them as experimental by normal standards.

    My sympathy for the occasional investigation of such methods is restricted to important claims such as cancer treatment. And the objective is to inform the public; there is no expectation that scientific knowledge will be advanced.

    The hypothesis being tested is the one the patient is being cornered into asking, by their pathology, by their relatives, and by the Internet: “do the scientists really know what they are talking about when they are dubious about such claims?”

  22. JMB says:

    We do have different points of view. I tend to focus more on how EBM is different from SBM. I would agree that EBM and SBM should eventually converge on experimentally valid science. However, I do think that SBM will more rapidly approach the asymptote of truth than EBM.

  23. trrll says:

    There are clearly levels of a priori improbability. The Gonzalez regimen is improbable, but if it were found to work, it would not require us to change our understanding of many other aspects of science. After all, it involves a variety of compounds and biologicals, any one of which might exert a therapeutic effect by perfectly conventional pharmacological principles. One might suppose that the prior probability would be similar to the likelihood that a random plant extract would have a therapeutic effect–and therapeutic compounds have indeed been identified by screening randomly selected plant extracts.

    At the other extreme is homeopathy, which if it were shown to work would require us to radically alter our understanding, not merely of pharmacology, but also of physics, because for something to act at extraordinarily low concentration (or for water structures to persist for longer than milliseconds), a very high energy interaction is required, and there is no known molecular interaction that could yield such a high energy interaction–so either our knowledge of molecular interactions or our knowledge of thermodynamics and chemical kinetics would have to be in error, not by a little bit, but by many, many orders of magnitude. And since these thermodynamic principles underlie our understanding of all sorts of other things in biology, we would have to find new explanations for those things, too. Basically, a huge amount of biology and physics would collapse like a house of cards if homeopathy worked. The likelihood that all of this other knowledge could be grossly incorrect is so small that we can reasonably assign the prior probability of homeopathy a magnitude very close to zero–a huge amount of extraordinarily convincing evidence would have to be acquired, and every possible alternate explanation conclusively eliminated, for homeopathy to be even remotely plausible.
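    The dilution side of this argument is simple arithmetic. As an editorial sketch (the 30C potency is my assumed example, not one named in the comment; Avogadro’s number is standard):

```python
# A "30C" homeopathic preparation is thirty successive 1:100 dilutions, i.e.
# a total dilution factor of 10**60.  Even starting from a full mole of
# active ingredient, the expected number of molecules left in the dose is:

AVOGADRO = 6.022e23          # molecules per mole
DILUTION_30C = 10.0 ** 60    # thirty successive 1:100 dilutions

expected_molecules = AVOGADRO / DILUTION_30C
print(f"Expected molecules remaining in a 30C dose: {expected_molecules:.1e}")
# Roughly 6e-37 molecules, i.e. effectively zero: any claimed effect would
# have to come from something other than the original substance.
```

That factor of ~10^36 between one mole and one molecule is why high-potency homeopathy cannot work by ordinary pharmacology: there is nothing of the starting material left to act.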

    Somewhere in the middle, we have biological effects of low-intensity electromagnetic energy–power lines, cell phones, etc. Here, the problem is that the amount of energy in an EM photon is not sufficient to make any kind of long-lasting chemical change in the body. So it’s pretty unlikely, but we would not necessarily have to revise our understanding of quantum theory if EM were shown convincingly to have biological effects, because all that is really necessary for biological effects to occur is that the body would have to sense the EM and trigger some persistent change, using not the energy of the photon, but rather stored biochemical energy, of which there is plenty. Now, this would probably require the existence of some kind of extremely sensitive sensory system, involving a huge degree of amplification. There’s no particular reason why such a thing would evolve, and it would involve novel biological mechanisms, but it’s not as crazy as homeopathy.
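    The photon-energy point can be checked on the back of an envelope. An editorial sketch using standard physical constants; the 60 Hz and 1 GHz frequencies are my assumed examples for power lines and cell phones:

```python
# Compare the energy of a single low-frequency EM photon (E = h*f) with the
# thermal energy scale kT at body temperature (~310 K).

H = 6.626e-34      # Planck constant, J*s
K_B = 1.381e-23    # Boltzmann constant, J/K

def photon_energy(freq_hz):
    """Energy of one photon at the given frequency, in joules."""
    return H * freq_hz

thermal = K_B * 310.0   # kT at body temperature, roughly 4.3e-21 J
for label, freq in [("power line (60 Hz)", 60.0), ("cell phone (1 GHz)", 1e9)]:
    e = photon_energy(freq)
    print(f"{label}: {e:.2e} J per photon, {e / thermal:.1e} of kT")

# Both photon energies fall orders of magnitude below ordinary thermal noise,
# so a single photon cannot drive a lasting chemical change; any real effect
# would need the kind of amplifying sensory mechanism described above.
```

This is why the EM case sits in the middle of the plausibility range: the numbers rule out direct chemical action but not, in principle, a hypothetical amplified sensing mechanism.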

  24. pmoran says:

    As you say, trrll. The truly puzzling thing is why so many people, including scientists, don’t seem to “get” the extreme implausibility of homeopathy.

    Is that a general susceptibility to weirdness? Are they similarly open-minded about ghosts, fairies, alien abductions, bizarre conspiracies?

  25. trrll says:

    Actually, I think that appreciating the virtual impossibility of homeopathy requires a fairly deep understanding of molecular interactions. Even people with some scientific training may not appreciate just how closely stability, energy of molecular interaction, and pharmacological potency are interlinked.

  26. “Even people with some scientific training may not appreciate just how closely stability, energy of molecular interaction, and pharmacological potency are interlinked.”

    Thanks trrll. I also wanted to point out that up until a couple of years ago, I would have been surprised if someone told me that homeopathy was scientifically implausible. I don’t have enough science to suspect that one way or another.

    To the scientific layman, lots of real science sounds very unlikely. The other day I was listening to an explanation of the relationship between space and time in Einstein’s theory. That seemed pretty unlikely to me. :)

    I also don’t think the general populace pays much attention to the dilutions. Some of the “homeopathic” remedies also are not diluted to the extreme; they contain active ingredients (Zicam and Arnica gels), so it’s confusing if you’re not paying that much attention. And a lot of folks aren’t paying much attention because they have other things on their mind.

    Sometimes people on this blog are prone to forgetting that ferreting out woo is not the center of everyone’s life.

    pmoran – Regarding ghosts, fairies and weird conspiracies… I think you’re talking about different groups of people here engaged in different cognitive actions. I’m not sure it’s helpful to just lump everything together.

    Part of the issue is that knowledge has become very specialized. I wonder if any of the medical science people who believe that homeopathy is equivalent to fairies would be surprised when they found out that a basic belief that they had about car maintenance or driving was a complete fable.

  27. Agreed that reasonable individuals can be misinformed or not be paying that much attention. Absolutely. I am misinformed about so many things it’s not even funny, but if I knew what impossible things I believed I wouldn’t believe them, would I?

    If we use micheleinmichigan’s example, let’s say that the clerk at Canadian Tire told me that if I poured a half-cup of sugar into my gas tank I would get double the mileage, and that the reason this wasn’t better known was that Big Oil doesn’t want us getting better mileage.

    Fair enough. Sugar and oil are both hydrogen and carbon with a little oxygen thrown in. We’ve all heard the stories about cars running on french fry grease. It’s plausible. So I triumphantly tell my garage mechanic what my new plan is.

    My garage mechanic emits a shriek and blanches, then takes a couple of deep breaths and explains why this is a Very Bad Idea. Why it can’t work. The harm it can do. Etc.

    I might conclude that my mechanic seems to know what she’s talking about and wait a little on the sugar in the gas tank treatment.

    Alternatively, I might take her explanation as proof that she is hopelessly in the grip of Big Oil and never darken her door again.

    If my thinking takes the “alternative” route, will I really backtrack if my mechanic adds, “… and studies prove that sugar in the gas tank hurts and doesn’t help”? Or are my mental processes such that I would simply conclude that the studies must have been biased and funded by Big Oil?

    I don’t know the answer to that question. Maybe study results are powerful enough to overcome conspiracy thinking. Maybe they aren’t.

  28. RE my moderated comment above:

    My concern is that if we don’t have evidence that study results are convincing to conspiracy theorists, then is it ethical to subject people to harmful studies just for the sake of the ability to say “we did a study”?

    If we know that study results are convincing to conspiracy theorists, then there could be an ethical case made… except that I think the Helsinki document says no, because the experimental subject will be harmed and will not benefit.

  29. daedalus2u says:

    Alison, no, it is not ethical.

  30. … Rrr. I keep leaving out bits of my thought process, such as it is.

    Case 1: My mechanic explains why putting sugar in the gas tank is a bad idea, based on what she knows about gas tanks, sugar and cars. She is able to answer my questions and demonstrate to my satisfaction that she knows what she’s talking about. I give up my sugar in the gas tank plan. No specific “study” is necessary.

    Case 2: Neither my mechanic nor anyone else is able to convince me of the unwisdom of my sugar-in-the-gas-tank plan because anyone who doubts the plan is a priori either lying or a gullible victim. A study won’t help.

    In case 1, no study is required; in case 2, it doesn’t matter anyway.

    Note that the situation we’re talking about involves both:
    — a highly improbable intervention which carries a high risk of harm;
    — a fully-informed individual who refuses to be talked out of the intervention.

    We aren’t just talking about someone becoming interested in something they saw on the internet. We’re talking about the specific situation where their oncologist has done their darndest to talk them out of it and gotten nowhere. Most people would let their oncologist talk them out of it. They don’t need a study. We’re discussing whether it’s reasonable to assume that “a study” presented by the oncologist would be convincing when nothing else has been. I don’t think we can make that assumption (I think we would need evidence before proceeding). (Actually what I really think is that it’s moot because it would be unethical anyway. But just for the sake of argument.)
