Articles

Cochrane is Starting to ‘Get’ SBM!

This essay is the latest in the series indexed at the bottom.* It follows several (nos. 10-14) that responded to a critique by statistician Stephen Simon, who had taken issue with our asserting an important distinction between Science-Based Medicine (SBM) and Evidence-Based Medicine (EBM). (Dr. Gorski also posted a response to Dr. Simon’s critique). A quick-if-incomplete Review can be found here.

One of Dr. Simon’s points was this:

I am as harshly critical of the hierarchy of evidence as anyone. I see this as something that will self-correct over time, and I see people within EBM working both formally and informally to replace the rigid hierarchy with something that places each research study in context. I’m staying with EBM because I believe that people who practice EBM thoughtfully do consider mechanisms carefully. That includes the Cochrane Collaboration.

To which I responded:

We don’t see much evidence that people at the highest levels of EBM, eg, Sackett’s Center for EBM or Cochrane, are “working both formally and informally to replace the rigid hierarchy with something that places each research study in context.”

Hallafrickin’loo-ya

Well, perhaps I shouldn’t have been so quick to quip—or perhaps that was exactly what the doctor ordered, as will become clear—because on March 5th, nearly four months after writing those words, I received this email from Karianne Hammerstrøm, the Trials Search Coordinator and Managing Editor for The Campbell Collaboration, which lists Cochrane as one of its partners and which, together with the Norwegian Knowledge Centre for the Health Services, is a source of systematic reviews:

I just wanted to let you know that I have been playing around with the same thoughts as you express in the EBM/SBM Redux series; having come across related problems in other reviews and finding the laetrile review by chance – as well as following the SBM blog (strangely enough I corresponded with Dr. Ernst concerning laetrile the day before you posted your correspondence with him – he must be getting tired of these e-mails!). For this reason a colleague and I wrote a letter to Cochrane, a letter which they have, to my surprise, accepted as an editorial and which will be published in mid March, I believe (The SBM blog is duly credited). Just wanted to let you know, and also that the response from Cochrane has been overwhelmingly positive.

Thanks for a very interesting, entertaining and educating blog!

Well, with no small sense of self-satisfaction I thanked her and forwarded her email to the other authors here, who got a kick out of it, but then I kinda forgot about it until trusty SBM commenter Peter Moran posted a link to the promised editorial. Lo and behold, woodja look at the very first sentence! Its citation is the post in which I’d dismissed Dr. Simon’s assertion about Cochrane placing research studies in context, and in which I reported my correspondence with Dr. Ernst regarding the Cochrane Laetrile review. But yes, I may have been a bit too facile in my dismissal of Prof. Simon’s contention, because it’s clear from the editorial and its ‘feedback’ that others, even among Cochrane reviewers themselves, have been similarly bothered. The problem, elsewhere dubbed EBM’s ‘scientific blind spot,’ nevertheless remains the rule rather than the exception. Two of the three feedback letters that are available as of this writing, moreover, don’t fully grasp the point.

Those of you who’ve been following this series know that I’ve already mentioned an exception to the EBM scientific blind spot at Cochrane, regarding its Laetrile review. It’s found not in the review itself, but in the form of Feedback from another Cochrane reviewer, who made arguments similar to my own. Today I’ll discuss another exception, the best that I’ve found so far, and for the second time today I’ll tip my hat to Scandinavians.

Intercessory Prayer

A 2009 revision of “Intercessory prayer for the alleviation of ill health” begins as follows:

This revised version of the review has been prepared in response to feedback and to reflect new methods in the conduct and presentation of Cochrane reviews.

There are interesting changes in this revision, some of them having to do with what we’ve been talking about. Let’s go right to the punch line. The first sentence is old hat; the second is nearly revolutionary for EBM:

Authors’ conclusions

These findings are equivocal and, although some of the results of individual studies suggest a positive effect of intercessory prayer, the majority do not and the evidence does not support a recommendation either in favour or against the use of intercessory prayer. We are not convinced that further trials of this intervention should be undertaken and would prefer to see any resources available for such a trial used to investigate other questions in health care.

Wow! Previous iterations of this review, spanning about a decade, had made the customary call for “further study.” What changed? Don’t get too excited, even though I’ve been goading you: what the authors left unstated were their reasons, intuitive though they might have been, for not being “convinced that further trials should be undertaken.” That, I suppose, would’ve been just too dicey.

Before discussing what actually changed, let me explain a couple of key features of this review. First, the authors take pains to acknowledge but, er, distance themselves from religious implications:

How the intervention might work

The mechanism(s) by which prayer might work is unknown and hypotheses about this will depend to a large extent on religious beliefs. This review seeks to answer the question of effect not mechanism and it does not seek to answer the question of whether any effects of prayer confirm or refute the existence of God…

…the results of this review will be of interest to those who are involved with the ‘debate about God’ – both religious believers and atheists – but these results cannot directly stand as ‘proof’ or ‘disproof’ of the existence of God… We do not, therefore, seek to pose or answer any questions about the existence of God with this review.

They observe that there are “several challenges” in performing such trials, naming “contamination” (people outside of the study likely to be praying for the same patients) and “blinding” (when the putative agent of the effect is an omniscient being). However, assert the authors,

…these are theological questions, and this review proceeds on scientific principles in that it is a widely held belief that intercessory prayer is beneficial for those who are unwell because God directs the outcome of those for whom prayers are offered differently from those for whom it is not. As noted above, we are not seeking to assess whether God is or is not the agent of action for prayer but, by using the same study designs used to test other interventions in healthcare we will assess the effects of the intervention. For this reason we also exclude from consideration such theological considerations as the injunction “Do not put the Lord your God to the test” (Deuteronomy 6:16) or questions as to whether God generally veils his presence from observation: in the words of the philosopher GF Hegel, “God does not offer himself for observation” (Hegel 2008).

Given their determination to measure “effect not mechanism” and to exclude theological considerations, it seems paradoxical that the authors chose to exclude “distant healing” (DH) studies that “may have included an element of prayer but did not specifically involve personal, focused, committed and organised intercessory prayer on behalf of another alone.” Thus they excluded one of the most famous, purportedly ‘positive’ studies in the field, which had recruited 40 “Healers” with

…an average of 17 years of experience and [who] had previously treated an average of 106 patients at a distance. Practitioners included healers from Christian, Jewish, Buddhist, Native American, and shamanic traditions as well as graduates of secular schools of bioenergetic and meditative healing.

Those ‘healers’ were told “to ‘direct an intention for health and well-being’ to the subject.” Thus, even though there was a religious theme to the choice of ‘healers,’ the imagined ‘mechanism’ of healing was decidedly psychokinetic—it was linear rather than angular, or non-stop rather than 1-stop, if you catch my drift. This was in keeping with the interests of the most important co-author, the late Elisabeth Targ, previously mentioned here.

That’s why the Cochrane authors excluded it and similar DH studies, but c’mon: an influential group of ‘CAM’ enthusiasts, including Targ, Larry Dossey, Victor Sierpina (Distinguished Teaching Professor at the University of Texas Medical School), Mehmet Oz (heh), Marilyn Schlitz (a former member of the NCCAM advisory council), naturopath Leanna Standish (also a former member of the NCCAM advisory council and the Director of Research at the Bastyr University AIDS Research Center), Andrew Weil, Kenneth Pelletier, James Gordon (Chairman of the White House Commission on Complementary and Alternative Medicine Policy), Jeanne Achterberg (who, together with Dossey and Gordon, chaired the “Mind-Body” panel of the 1992 “Workshop on Alternative Medicine,” whose report has debased medicine and medical research for nearly two decades), and many more fairly gush over the potential of ‘nonlocal healing.’ There’s a lotta research money wasted there, so it’s too bad that Cochrane hasn’t offered the same conclusion about the non-stop version of DH that it now has about the layover kind.

I also wonder whether the reviewers would have included Targ’s study had that particular exclusion not applied, because Targ was later revealed to have rigged her study to yield “positive” results. She did this after the fact but before publication, by “data dredging.” I’ve come to expect Cochrane reviewers to remain blissfully ignorant of such departures from polite methodology. Consider their ingenuous response to the Olszewer paper in the chelation review. This “intercessory prayer” (IP) review, though, contains examples that don’t require the reviewers to venture beyond the papers themselves. The review characterizes the most famous early ‘positive’ study, Byrd 1988, as double-blinded. That, presumably, follows from this statement in Byrd’s Methods section:

Patients were randomly assigned (using a computer-generated list) either to receive or not to receive intercessory prayer. The patients, the staff and doctors in the [coronary care] unit, and I remained “blinded”, throughout the study. As a precaution against biasing the study, the patients were not contacted again.

Well, OK, but consider this statement in the very next paragraph (emphasis added):

The patients’ first name, diagnosis, and general condition, along with pertinent updates in their condition, were given to the intercessors.

It seems that someone with access to that coronary care unit (CCU) musta not been blinded, and could easily have revealed subject allocation to the subjects themselves and to others. Just sayin’.

The review is ambivalent about the Byrd Score, a composite “severity” score that Byrd devised ostensibly to deal with the problem of multiple outcomes. Here are the results of those outcomes:

[Image: Byrd 1988, Table 2]

Hmmm. The difference that jumps out at you is the incidence of congestive heart failure (CHF). All other differences reported to have achieved statistical significance—use of diuretics, intubation/ventilation, pneumonia, antibiotics, and even cardiopulmonary arrest—likely followed from CHF or from a common antecedent. Since such key outcomes as mortality and duration of CCU and hospital stay were no different between the two groups (surprising given the poor prognosis of CHF, especially 23 years ago), it seems reasonable to discount the CHF difference as either spurious or, as the Cochrane authors correctly acknowledged, due to chance in the context of multiple outcomes.
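
To put a rough number on that “multiple outcomes” worry, here is a minimal simulation sketch in Python. The group size, event rate, and two dozen outcomes are illustrative assumptions of mine, not figures taken from Byrd’s paper; the point is only that when two identical groups are compared across many outcomes at p < 0.05, at least one “significant” difference turns up most of the time.

    # Minimal sketch: chance of at least one "significant" difference when two
    # *identical* groups are compared across many outcomes at p < 0.05.
    # Group size, event rate, and number of outcomes are illustrative, not Byrd's.
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(0)
    n_per_group, n_outcomes, n_sims, alpha, p_event = 200, 24, 5000, 0.05, 0.10

    hits = 0
    for _ in range(n_sims):
        a = rng.binomial(n_per_group, p_event, n_outcomes)  # events, "prayer" group
        b = rng.binomial(n_per_group, p_event, n_outcomes)  # events, control group
        p1, p2 = a / n_per_group, b / n_per_group
        pooled = (a + b) / (2 * n_per_group)
        se = np.sqrt(pooled * (1 - pooled) * 2 / n_per_group)
        z = (p1 - p2) / se                                  # two-proportion z statistic
        pvals = 2 * norm.sf(np.abs(z))                      # two-sided p-values
        hits += np.any(pvals < alpha)

    print(f"P(at least one 'significant' outcome by chance) ≈ {hits / n_sims:.2f}")
    # Roughly in line with 1 - 0.95**24 ≈ 0.71. Real clinical outcomes are
    # correlated, which shrinks this somewhat, but a lone CHF difference is
    # still weak evidence on its own.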

Not acknowledged by the reviewers were other curious findings in Byrd’s table: if 14 subjects in the control group suffered cardiopulmonary arrest—which involves a blood pressure of approximately zero—how could only 7 subjects in that group have experienced systolic blood pressures below 90? How could only 3 subjects in the IP group have suffered cardiopulmonary arrest—the final common pathway of dying, other than for the special category of ‘brain death’—when more than 4 times that many (13) actually died? Oh yeah, and dead people also have blood pressures below 90, except, apparently, several in each of the groups reported here. I dunno about you, but I’d like to think that any reasonably intelligent physician or scientist would look at that table for a couple of minutes and conclude, “Nope. Nuthin’ goin’ on there.”
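
The sanity check I have in mind takes only a few lines to write down. A minimal sketch, using nothing but the counts quoted in the paragraph above:

    # Internal-consistency checks on the counts quoted above from Byrd's Table 2.
    # Cardiopulmonary arrest entails a systolic blood pressure near zero (hence
    # below 90) at some point, and, brain death aside, death entails an arrest,
    # so within each group we would expect:
    #     n(systolic BP < 90)  >=  n(cardiopulmonary arrest)  >=  n(deaths)

    def check(label: str, should_be_larger: int, should_be_smaller: int) -> None:
        ok = should_be_larger >= should_be_smaller
        print(f"{label}: {'consistent' if ok else 'INCONSISTENT'} "
              f"({should_be_larger} vs {should_be_smaller})")

    # Control group: 14 cardiopulmonary arrests, but only 7 patients with SBP < 90.
    check("control, SBP<90 vs arrests", should_be_larger=7, should_be_smaller=14)

    # Prayer group: 3 cardiopulmonary arrests, but 13 deaths.
    check("prayer, arrests vs deaths", should_be_larger=3, should_be_smaller=13)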

The Cochrane reviewers included a study of “retroactive intercessory prayer.” Yup, it means what you’re afraid it means, your double-take notwithstanding. I am not making this up: Check it out. ;-)

All right, you must be thinking, so far I’ve shown you nothing but reasons to be more pessimistic than ever about Cochrane ‘CAM’ Reviews. Next they’ll be declaring that there is not enough evidence either in favour or against the use of exorcisms for demonic possessions, f’crissakes. But remember, the very same reviewers who went for time travel also politely called for a halt in intercessory prayer trials, so something must have swayed them.

Feedback

The answer seems to be found in two Feedback letters. The second is identified only as having been written by “Chris Jackson, anaesthetist.” I don’t know where he or she is from, but I’m guessing he’s what we in the U.S. call an ‘anesthesiologist.’ That’s what I am! Chris, you make me proud. This letter apparently jolted the Cochrane reviewers into noticing that a study they’d included for years, the infamous Cha Intercessory Prayer for IVF study, was, well, infamous enough to finally exclude (in 2009). Jackson also wrote that “RCTs of prayer are meaningless…There’s a lot of pseudoscience being done in this area,” which the reviewers, alas, didn’t buy.

The first Feedback letter is much longer, more adamant and less polite, and—what a kick!—written by other Cochrane reviewers. It begins with condemnation:

This review is riddled by serious flaws such as lack of critical appraisal of the included trials and findings, lack of a necessary discussion of the relevant sources of bias, and undue interference of theological reasoning.

It ends with a call for banishment:

This review does not live up to the scientific standards one can reasonably expect of a Cochrane review. The review as currently published should be withdrawn from the Cochrane Library, not least because it suggests that all scientific studies are meaningless, as we will never know whether one or more gods intervened in our carefully planned experiments.

The authors of this letter are identified as Karsten Juhl Jørgensen, Asbjørn Hrobjartsson and Peter C. Gøtzsche, from the Nordic Cochrane Centre, Rigshospitalet Dept. 3343, Copenhagen, Denmark. The cognoscenti among you might recognize Hrobjartsson and Gøtzsche as the authors of several reviews questioning the ‘power of the placebo,’ a topic that they’ve also reviewed for Cochrane.

I’m happy to report that I needn’t quote any more excerpts from that Feedback letter, even though you can’t read it without paying for the full review, because there’s an even better source. Jørgensen and colleagues turned their letter into a full article that you can read online in the aptly-named Journal of Negative Results in Biomedicine: “Divine Intervention? A Cochrane review on intercessory prayer gone beyond science and reason.” They make several of the points made above and elsewhere in this series (citing Bayes, for example), and many more, because unlike your semi-faithful blogger they were not too impatient to slog through the tedious religious formulations in the Cochrane IP review.

I suspect that it was this article and its associated Feedback letter that led the Cochrane IP reviewers to reverse their previous call for further studies, even if they failed to heed most of the arguments made by Jørgensen et al. Unfortunately, you would only know the last point if you had access to the full Cochrane review, where the exchange is found.

This post is already way too long, so I’ll end by telling you the most amusing example. By now I’m sure you either know or suspect that the “retroactive intercessory prayer” study included in the Cochrane IP review was a joke that the Cochrane reviewers didn’t get. The Danes explained this both in their article and in their Feedback letter, even providing a reference to a subsequent letter by the “retroactive” author in which he pretty much cops to the joke. The Cochrane reviewers, notwithstanding, responded:

Comments made about the Christmas issue of the BMJ and the Leibovici 2001 study in particular are not fully accurate. Several articles in the late December issues of the BMJ are written with humour and some in pure spoof. Most are not. They may be written with humour and have an odd perspective, but are, nevertheless, interesting and well thought out research. The Leibovici 2001 was not in jest. It is a rather serious paper, intended as a challenge.

Yikers.

…………

*The Prior Probability, Bayesian vs. Frequentist Inference, and EBM Series:

1. Homeopathy and Evidence-Based Medicine: Back to the Future Part V

2. Prior Probability: The Dirty Little Secret of “Evidence-Based Alternative Medicine”

3. Prior Probability: the Dirty Little Secret of “Evidence-Based Alternative Medicine”—Continued

4. Prior Probability: the Dirty Little Secret of “Evidence-Based Alternative Medicine”—Continued Again

5. Yes, Jacqueline: EBM ought to be Synonymous with SBM

6. The 2nd Yale Research Symposium on Complementary and Integrative Medicine. Part II

7. H. Pylori, Plausibility, and Greek Tragedy: the Quirky Case of Dr. John Lykoudis

8. Evidence-Based Medicine, Human Studies Ethics, and the ‘Gonzalez Regimen’: a Disappointing Editorial in the Journal of Clinical Oncology Part 1

9. Evidence-Based Medicine, Human Studies Ethics, and the ‘Gonzalez Regimen’: a Disappointing Editorial in the Journal of Clinical Oncology Part 2

10. Of SBM and EBM Redux. Part I: Does EBM Undervalue Basic Science and Overvalue RCTs?

11. Of SBM and EBM Redux. Part II: Is it a Good Idea to test Highly Implausible Health Claims?

12. Of SBM and EBM Redux. Part III: Parapsychology is the Role Model for “CAM” Research

13. Of SBM and EBM Redux. Part IV: More Cochrane and a little Bayes

14. Of SBM and EBM Redux. Part IV, Continued: More Cochrane and a little Bayes

15. Cochrane is Starting to ‘Get’ SBM!

16. What is Science? 

Posted in: Clinical Trials, Energy Medicine, Medical Academia, Science and Medicine


20 thoughts on “Cochrane is Starting to ‘Get’ SBM!”

  1. daijiyobu says:

    Has EBM become so ‘branded’ and institutionalized that even if context and plausibility are elevated in stature — aka SBM or ‘all the evidence’ — that we’ll always see it labeled EBM?

    Will such a detail matter?

    Is “science” better than “evidence” in terms of the label, is I think what I’m getting at?

    Because I think there’s a certain lack-of-rigor to the term “evidence” but now when I see regionally accredited colleges and universities labeling acupuncture and naturopathy “science”, I’m wondering where rigor is at all.

    And accuracy.

    -r.c.

  2. rork says:

    In order to use a treatment on a person we want there to be good evidence that it is effective, and an idea of the size of the effect. For treatments that actually work fairly well the effect is easier to demonstrate. For those that are not as good it may be hard to demonstrate the effect. For those that do nothing, the sum of the evidence (that is high quality) should be unable to demonstrate an effect, or if by chance it hints at an effect it should be so small that we simply do not care.

    Is distinguishing whether a treatment has “truly” zero effect or some minuscule effect that we have not yet convincingly demonstrated important? Our actions will be not to use the treatment in either case I hope.

    Using classical statistics we never think of ourselves as proving the null hypothesis that effect is zero (or less than zero). We merely fail to disprove it. I think I almost always write papers as if I am not proving the null is true, since it is easy to not reject the null – just have very little or very crappy data – happens to me all the time. (Power calculation can help there, yes.)

    A bayesian wants to learn (and accepts uncertainty with great inner peace), and might put priors on effect being positive or not and manage to conclude that the posterior favors the null.

    Finally done with the set-up, now my point:
    I think we should ask “what is our goal?” I think there is perhaps more than one, and that may cause difficulties in our discussions. Perhaps there is really less dispute than we think.

    Not using unproven treatments doesn’t require us to prove the treatment doesn’t work very well. Without positive evidence we don’t do it. For this goal the decision is about patients.

    Say we all agree the treatment has not been shown to work very well. Do we now have a next goal of determining whether the treatment doesn’t work at all? The goal is now a decision about models. It seems more like what a physicist is trying to do.

    Maybe for a doc, the first goal suffices, and the second goal is only important for us to consider the course of future research, and that is my main point.

    I think EBM sometimes concludes “it has not been demonstrated to work well, yet”, and some of the writers here want to have that be the stronger “it has been shown not to work”. Maybe it could move to “the size of the effect is likely very small” though. (Sadly that sounds like an estimation problem now, but that might be fixable.)

    I’m asking for help clearing my thinking up. Thanks in advance.
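
One way to make rork’s “estimation problem” concrete, sketched with made-up trial counts rather than data from any study discussed above: estimate the effect with a confidence interval and ask whether the whole interval sits inside a margin of practical irrelevance.

    # Sketch of the estimation framing in the comment above: instead of asking
    # whether we rejected the null, estimate the effect and check whether the
    # entire confidence interval lies within a margin too small to care about.
    # The trial counts and the margin are made up for illustration.
    import math

    def risk_difference_ci(x1: int, n1: int, x2: int, n2: int, z: float = 1.96):
        """Approximate 95% CI for the difference in event proportions (group1 - group2)."""
        p1, p2 = x1 / n1, x2 / n2
        se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
        return (p1 - p2) - z * se, (p1 - p2) + z * se

    margin = 0.02                                      # largest effect we'd call negligible
    lo, hi = risk_difference_ci(255, 5000, 250, 5000)  # hypothetical trial results
    print(f"95% CI for the risk difference: ({lo:+.4f}, {hi:+.4f})")
    if -margin < lo and hi < margin:
        print("Any effect is smaller than the smallest effect we care about.")
    else:
        print("An effect large enough to matter cannot yet be ruled out.")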

  3. “Next they’ll be declaring that there is not enough evidence either in favour or against the use of exorcisms for demonic possessions.”

    Well, there certainly couldn’t be much evidence.

    I’m currently doing research to discover the dietary requirements of a snipe. It’s going slowly. First I have to catch the snipe.

    Glad SBM is making some progress in getting their points across.

  4. Zetetic says:

    Chris Jackson, anaesthetist?

    Maybe this Aussie who has commented on something similar here:

    http://www.mja.com.au/public/issues/187_07_011007/matters_arising_fm.pdf

    Good on ya, mate!

  5. ConspicuousCarl says:

    rork on 29 Apr 2011 at 12:49 pm
    I think EBM sometimes concludes “it has not been demonstrated to work well, yet”, and some of the writers here want to have that be the stronger “it has been shown not to work”. Maybe it could move to “the size of the effect is likely very small” though. (Sadly that sounds like an estimation problem now, but that might be fixable.)

    I think the bayesian point of view they are promoting goes something like, “due to existing experiments and knowledge, future testing is unlikely to produce positive results and won’t prove anything even if it does.”

    When you have something like homeopathy, for which there is no good reason to expect it to work and 100 years of bad or failed experiments, you don’t have to do the impossible and prove that it doesn’t work in order to declare it stupid to continue researching.
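
Carl’s point can be put in rough numbers. A back-of-the-envelope Bayes sketch, where the prior, the power, and the false-positive rate are illustrative assumptions rather than figures from any review discussed in the post:

    # Back-of-the-envelope Bayes: how much should a single "positive" RCT move us
    # when the prior probability that the treatment works is very low?
    # All numbers below are illustrative assumptions, not data from any review.

    prior = 0.001   # prior probability that the treatment has a real effect
    power = 0.80    # P(positive trial | treatment really works)
    alpha = 0.05    # P(positive trial | treatment does nothing)

    posterior = (power * prior) / (power * prior + alpha * (1 - prior))
    print(f"Posterior after one positive trial: {posterior:.3f}")  # about 0.016

    # Even a textbook-quality positive result leaves the claim overwhelmingly
    # likely to be false, which is the Bayesian case against funding further
    # trials of highly implausible interventions.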

  6. Josie says:

    “I’m currently doing research to discover the dietary requirements of a snipe. It’s going slowly. First I have to catch the snipe.”

    Snipe references always make me giggle :D

    This should help in your research:
    http://www.iucnredlist.org/apps/redlist/details/150697/0

    And before anyone suggests a more mythical creature, such as a chupacabra (aka goatsucker) here is a head start on that too!

    http://birdsite.org/taxa/show/20

    my apologies, it’s Friday and I am full of silly comments!

  7. Josie “it’s Friday and I am full of silly comments!”

    better out than in, I say.

  8. ImperfectlyInformed says:

    I wonder if laetrile was the best example for that Cochrane editorial, particularly given that SBM is supposedly trying to stir more movement towards basic science, mechanism, etc. Hammerstrøm & Bjørndal mention laetrile, but give a misleading summary of the review by ignoring the basic science which the Cochrane reviewers discuss, including a recent in vitro study. Consider this excerpt from the article:

    Different rationales have been proposed to explain the alleged anti-cancer effect of Laetrile. Two similar theories claim that cancerous cells are deficient in rhodanese enzyme and have higher than normal levels of betaglucuronidase and betaglucosidase, enzymes responsible for the breaking down of Laetrile and amygdalin respectively (Krebs 1950; NCI website). Since rhodanese can convert cyanide into the relatively harmless compound thyocyanate (Bruneton 1999), when Laetrile is broken down by the betaglucuronidase, producing cyanide, this would affect cancer cells more than healthy ones. However, there is no experimental evidence to show that malignant and healthy cells differ in rhodanese enzymes (Gal 1952) or that betaglucosidase is contained in tumor tissues (Biaglow 1978). High levels of betaglucuronidase have been noted in tissue, blood serum and urine of cancer patients (Conchie 1959; Fishman 1955), so that drugs such as 1-mandelonitrileglucuronide may selectively release cyanide to tumor tissues (Biaglow 1978).

    Since the authors concluded that an RCT should be done, we can implicitly assume they thought the mechanism was plausible. Maybe they’re wrong, but it is much better to say why they are wrong than to say that the review is just bad science. It does a disservice to the researchers to ignore the specifics of science. Personally, I don’t even know what it means for 1-mandelonitrileglucuronide to selectively release cyanide to tumor cells based on betaglucuronidase in overall tissue and blood serum – that does not seem very selective to me. But someone who is actually doing research in the field should read Biaglow 1978 to find out before commenting.

    Further, the Moertel trial has been criticized (albeit not by a medical scientist) in http://www.ispub.com/journal/the_internet_journal_of_alternative_medicine/volume_7_number_1_22/article/does_laetrile_work_another_look_at_the_mayo_clinic_study_moertel_et_al_1982.html and http://www.ispub.com/journal/the_internet_journal_of_alternative_medicine/volume_6_number_2_22/article/inaccurate_reporting_of_the_effects_of_laetrile_mistreatment_of_ellison_byar_and_newell_1978_in_professional_papers.html. Thus, there are some questions as to whether the Moertel trial was really a good falsification.

    I will admit that it is beyond me to comment on whether this all adds up to laetrile being plausible, but it certainly seems inconsistent to ask for discussion on mechanisms, but then ignore the discussions on the specifics and mechanisms when the science gets hard. Computing p-values and effect sizes is easy: it is very similar across all of science. Discussing biochemistry, different cancers and their responses to various chemicals are relatively complex and difficult, and often beyond the scope of a generalist writing a blog post. It’s also a fact that for many widely-used prescription drugs, there is no consensus on the precise mechanisms. The mechanisms often get teased out slowly from animal models with adjusted genes, and even then we don’t always know how much these animals match humans (see for example http://dx.doi.org/10.1006%2Frtph.2000.1399).

    We can easily make a value judgment, though: I would suggest that the laetrile Cochrane review should have called for an in vivo animal study before a human RCT. I would not say anything further, as I have not studied the issue closely enough.

  9. Chris says:

    I would suggest that the laetrile Cochrane review should have called for an in vivo animal study before a human RCT. I would not say anything further, as I have not studied the issue closely enough.

    Actually human studies were done. Some patients developed signs of cyanide poisoning. Remember there is a difference between killing cancer cells in a petri dish, and doing the same in a human body (without killing the human, a balancing act that has been the history of cancer chemotherapy).
