One of the overriding themes of the Science-Based Medicine blog is to use rigorous science when evaluating any health claim – be it medical, dental, dietary, fitness, or any other assertion put forth with the intention of improving one’s health. Once the scientific evidence for efficacy has been evaluated, other criteria must be taken into consideration, such as ease of administration, cost, possible adverse effects, and so on. Benefits have to be weighed carefully against risks to determine any appropriate course of action. For example, if a new pill were developed that was significantly better at, say, managing hypertension than existing medications, but it killed 10% of the patients taking it, it obviously would not be the drug of choice. Conversely, if a proposed treatment, say homeopathy, is touted as being 100% safe with no side effects, but has absolutely zero benefit, it too would not be a recommended treatment. It’s a complicated and often ambiguous calculus, and an imperfect one, because it attempts to quantify non-quantifiable values and qualities.
Consider these statements:
…there is an evidence base for biofield therapies. (citing the Cochrane Review of Touch Therapies)
The larger issue is what constitutes “pseudoscience” and what information is worthy of dissemination to the public. Should the data from our well conducted, rigorous, randomized controlled trial [of ‘biofield healing’] be dismissed because the mechanisms are unknown or because some scientists do not believe in the specific therapy?…Premature rejection of findings from rigorous randomized controlled trials are as big a threat to science as the continuation of falsehoods based on belief. Thus, as clinicians and scientists, our highest duty to patients should be to investigate promising solutions with high benefit/risk ratios, not to act as gatekeepers of information based on personal opinion.
–Jain et al, quoted here
Touch therapies may have a modest effect in pain relief. More studies on HT and Reiki in relieving pain are needed. More studies including children are also required to evaluate the effect of touch on children.
Touch Therapies are so-called as it is believed that the practitioners have touched the clients’ energy field.
It is believed this effect occurs by exerting energy to restore, energize, and balance the energy field disturbances using hands-on or hands-off techniques (Eden 1993). The underlying concept is that sickness and disease arise from imbalances in the vital energy field. However, the existence of the energy field of the human body has not been proven scientifically and thus the effect of such therapies, which are believed to exert an effect on one’s energy field, is controversial and lies in doubt.
—Cochrane Review of Touch Therapies, quoted here
Science is advanced by an open mind that seeks knowledge, while acknowledging its current limits. Science does not make assertions about what cannot be true, simply because evidence that it is true has not yet been generated. Science does not mistake absence of evidence for evidence of absence. Science itself is fluid.
When people became interested in alternative medicines, they asked me to help out at Harvard Medical School. I realized that in order to survive there, one had to become a scientist. So I became a scientist.
—Ted Kaptchuk, quoted here.
…It seems that the decision concerning acceptance of evidence (either in medicine or religion) ultimately reflects the beliefs of the person that exist before all arguments and observation.
—Ted Kaptchuk, quoted here.
Together they betray a misunderstanding of science that is common not only to “CAM” apologists, but also to many academic medical researchers. Let me explain.
Several of us have written about how contemporary quacks have artfully pitched their wares to a higher-brow market than their predecessors were accustomed to, back in the day. Through clever packaging,* quacks today can reasonably hope to become professors at prestigious medical schools, to control and receive substantial grant money from the NIH, to preside over reviews for the Cochrane Collaboration, to be featured as guests and even as hosts on mainstream television networks and on PBS, to issue opinions in the name of the National Academy of Sciences, to be patronized by powerful politicians, and even to be chosen by U.S. presidents to chair influential government commissions.
The most successful pitch so far, and the one that the fattest quack-cats of all have apparently decided to bet the farm on, is “integrative medicine” (IM). Good call: the term avoids any direct mention of the only thing that distinguishes it from plain medicine. Its proponents, unsurprisingly, have increasingly come to understand that when they are asked to explain what IM is, it is prudent to leave some things to the imagination. They’re more likely to get a warm reception if they lead people to believe that IM has to do with reaching goals that almost everyone agrees are worthy: compassionate, affordable health care for all, for example.
In that vein, the two most consistent IM pitches in recent years—seen repeatedly in statements found in links from this post—are that IM is “preventive medicine” and that it involves “patient-centered care.” I demolished the “preventive” claim a couple of years ago, as did Drs. Lipson, Gorski, and probably others. Today I’ll explain why the “patient-centered care” claim is worse than fatuous.
This essay is the latest in the series indexed at the bottom.* It follows several (nos. 10-14) that responded to a critique by statistician Stephen Simon, who had taken issue with our asserting an important distinction between Science-Based Medicine (SBM) and Evidence-Based Medicine (EBM). (Dr. Gorski also posted a response to Dr. Simon’s critique.) A quick, if incomplete, review can be found here.
One of Dr. Simon’s points was this:
I am as harshly critical of the hierarchy of evidence as anyone. I see this as something that will self-correct over time, and I see people within EBM working both formally and informally to replace the rigid hierarchy with something that places each research study in context. I’m staying with EBM because I believe that people who practice EBM thoughtfully do consider mechanisms carefully. That includes the Cochrane Collaboration.
To which I responded:
We don’t see much evidence that people at the highest levels of EBM, e.g., Sackett’s Center for EBM or Cochrane, are “working both formally and informally to replace the rigid hierarchy with something that places each research study in context.”
Well, perhaps I shouldn’t have been so quick to quip—or perhaps that was exactly what the doctor ordered, as will become clear—because on March 5th, nearly four months after writing those words, I received this email from Karianne Hammerstrøm, the Trials Search Coordinator and Managing Editor for The Campbell Collaboration, which lists Cochrane as one of its partners and which, together with the Norwegian Knowledge Centre for the Health Services, is a source of systematic reviews:
This is the third post in this series*; please see Part II for a review. Part II offered several arguments against the assertion that it is a good idea to perform efficacy trials of medical claims that have been refuted by basic science or by other, pre-trial evidence. This post will add to those arguments, continuing to identify the inadequacies of the tools of Evidence-Based Medicine (EBM) as applied to such claims.
Prof. Simon Replies
Prior to the posting of Part II, statistician Steve Simon, whose views had been the impetus for this series, posted another article on his blog, responding to Part I of this series. He agreed with some of what both Dr. Gorski and I had written:
The blog post by Dr. Atwood points out a critical distinction between “biologically implausible” and “no known mechanism of action” and I must concede this point. There are certain therapies in CAM that take the claim of biological plausibility to an extreme. It’s not as if those therapies are just implausible. It is that those therapies must posit a mechanism that “would necessarily violate scientific principles that rest on far more solid ground than any number of equivocal, bias-and-error-prone clinical trials could hope to overturn.” Examples of such therapies are homeopathy, energy medicine, chiropractic subluxations, craniosacral rhythms, and coffee enemas.
The Science Based Medicine site would argue that randomized trials for these therapies are never justified. And it bothers Dr. Atwood when a systematic review from the Cochrane Collaboration states that no conclusions can be drawn about homeopathy as a treatment for asthma because of a lack of evidence from well conducted clinical trials. There’s plenty of evidence from basic physics and chemistry that can allow you to draw strong conclusions about whether homeopathy is an effective treatment for asthma. So the Cochrane Collaboration is ignoring this evidence, and worse still, is implicitly (and sometimes explicitly) calling for more research in this area.
On the other hand:
There are a host of issues worth discussing here, but let me limit myself for now to one very basic issue. Is any research justified for a therapy like homeopathy when basic physics and chemistry will provide more than enough evidence by itself to suggest that such research is futile? Worse still, the randomized trial is subject to numerous biases that can lead to erroneous conclusions.
I disagree for a variety of reasons.
This is the second post in a series* prompted by an essay by statistician Stephen Simon, who argued that Evidence-Based Medicine (EBM) is not lacking in the ways that we at Science-Based Medicine have argued. David Gorski responded here, and Prof. Simon responded to Dr. Gorski here. Between that response and the comments following Dr. Gorski’s post it became clear to me that a new round of discussion would be worth the effort.
Part I of this series provided ample evidence for EBM’s “scientific blind spot”: the EBM Levels of Evidence scheme and EBM’s most conspicuous exponents consistently fail to consider all of the evidence relevant to efficacy claims, choosing instead to rely almost exclusively on randomized, controlled trials (RCTs). The several quoted Cochrane abstracts, regarding homeopathy and Laetrile, suggest that in the EBM lexicon, “evidence” and “RCTs” are almost synonymous. Yet basic science or preliminary clinical studies provide evidence sufficient to refute some health claims (e.g., homeopathy and Laetrile), particularly those emanating from the social movement known by the euphemism “CAM.”
It’s remarkable to consider just how unremarkable that last sentence ought to be. EBM’s founders understood the proper role of the rigorous clinical trial: to be the final arbiter of any claim that had already demonstrated promise by all other criteria—basic science, animal studies, legitimate case series, small controlled trials, “expert opinion,” whatever (but not inexpert opinion). EBM’s founders knew that such pieces of evidence, promising though they may be, are insufficient because they “routinely lead to false positive conclusions about efficacy.” They must have assumed, even if they felt no need to articulate it, that claims lacking such promise were not part of the discussion. Nevertheless, the obvious point was somehow lost in the subsequent formalization of EBM methods, and seems to have been entirely forgotten just when it ought to have resurfaced: during the conception of the Center for Evidence-Based Medicine’s Introduction to Evidence-Based Complementary Medicine.
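The prior-plausibility argument can be made concrete with a rough Bayesian sketch. The numbers below are purely hypothetical, chosen only to show the shape of the reasoning: even a well-conducted “positive” RCT moves belief very little when the claim’s prior probability, set by basic science, is vanishingly small.

```python
# Hypothetical Bayesian illustration: how much a single "positive" RCT
# should shift belief in a claim, given its prior plausibility.

def posterior(prior, p_pos_if_true=0.80, p_pos_if_false=0.05):
    """P(claim true | positive trial) via Bayes' theorem.

    p_pos_if_true  ~ trial power if the claim is true (assumed 0.80)
    p_pos_if_false ~ chance of a false positive from alpha plus
                     bias and error (assumed 0.05)
    Both values are illustrative assumptions, not measured quantities.
    """
    numerator = p_pos_if_true * prior
    denominator = numerator + p_pos_if_false * (1 - prior)
    return numerator / denominator

# A pharmacologically plausible drug (prior ~0.5) vs. a claim that
# contradicts settled physics and chemistry (prior ~1e-6):
print(posterior(0.5))   # ~0.94: one positive trial is nearly decisive
print(posterior(1e-6))  # ~1.6e-5: the claim remains almost certainly false
```

On these assumptions, the same positive trial takes a plausible drug from a coin flip to near certainty, while leaving a homeopathy-like claim at odds of worse than one in sixty thousand — which is why preliminary evidence of promise cannot simply be skipped.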
Thus, in 2000, the American Heart Journal (AHJ) could publish an unchallenged editorial arguing that Na2EDTA chelation “therapy” could not be ruled out as efficacious for atherosclerotic cardiovascular disease because it hadn’t yet been subjected to any large RCTs—never mind that there had been several small ones, and abundant additional evidence from basic science, case studies, and legal documents, all demonstrating that the treatment is both useless and dangerous. The well-powered RCT had somehow been transformed, for practical purposes, from the final arbiter of efficacy to the only arbiter. If preliminary evidence was no longer to have practical consequences, why bother with it at all? This was surely an example of what Prof. Simon calls “Poorly Implemented Evidence Based Medicine,” but one that was also implemented by the very EBM experts who ought to have recognized the fallacy.
There will be more evidence for these assertions as we proceed, but the main thrust of Part II is to begin to respond to this statement from Prof. Simon: “There is some societal value in testing therapies that are in wide use, even though there is no scientifically valid reason to believe that those therapies work.”