Mea culpa to the max. I completely forgot that today is my day to post on SBM, so I’m going to have to cheat a little. Here is a link to a recent article by yours truly that appeared on Virtual Mentor, an online ethics journal published by the AMA with major input from medical students. Note that I didn’t write the initial scenario; that was provided to me for my comments. The contents for the entire issue, titled “Complementary and Alternative Therapies—Medicine’s Response,” are here. Check out some of the other contributors (I was unaware of who they would be when I agreed to write my piece).
Archive for Medical Ethics
Some of my fellow Science-Based Medicine (SBM) bloggers and I have been wondering lately what’s up with The Atlantic. It used to be one of my favorite magazines, so much so that I subscribed to it for roughly 25 years (and before that I used to read my mother’s copy). In general I enjoyed its mix of politics, culture, science, and other topics. Unfortunately, my opinion changed back in the fall of 2009 when, on the rising crest of the H1N1 pandemic, The Atlantic published what can only be described as a terrible bit of journalism lionizing the “brave maverick doctor” Tom Jefferson of the Cochrane Collaboration. The article, written by Shannon Brownlee and Jeanne Lenzer, argued, in essence, that vaccinating against H1N1 at the time was a horrendous waste of time and effort because the vaccine didn’t work. The cherry picking of data, and the framing of the issue as the classic lazy journalistic narrative of a “lone maverick” against the entire medical establishment, were so bad that they earned the lovely sarcasm of our very own Mark Crislip, who wrote a complete annotated rebuttal, while I referred to the methodology presented in the article as “methodolatry.” Even public health epidemiologist Revere (who is, alas, no longer blogging but in his day provided a very balanced, science-based perspective on vaccination for influenza, complete with its shortcomings) was most definitely not pleased.
I let my subscription to The Atlantic lapse and have not to this day renewed it.
Be that as it may, last year The Atlantic published an article that wasn’t nearly as bad as the H1N1 piece but was nonetheless pretty darned annoying to us at SBM. Entitled Lies, Damned Lies, and Medical Science, by David Freedman, it lionized John Ioannidis (whom I, too, greatly admire) while largely missing the point of his work, turning it into an argument for why we shouldn’t believe most medical science. Now Freedman’s back again, this time with a much, much, much worse story in The Atlantic’s July/August 2011 issue under the heading “Ideas,” entitled The Triumph of New Age Medicine and complete with a picture of a doctor in a lab coat in the lotus position. It appears to be the logical follow-up to Freedman’s article about Ioannidis in that Freedman seems to think that, if we can’t trust medical science, then there’s no reason why we shouldn’t embrace medical pseudoscience.
Basically, the whole idea behind the article appears to be that, even if most of alternative medicine is quackery (which it is, by the way, as we’ve documented ad nauseam on this very blog), it’s making patients better because of placebo effects and because its practitioners take the time to talk to patients and doctors do not. In other words, Freedman’s thesis appears to be a massive “What’s the harm?” argument coupled with a false dichotomy; that is, if real doctors don’t have the time to listen to patients and provide the human touch, then let’s let the quacks do it. Tacked on to that bad idea is a massive argumentum ad populum portraying alternative medicine as the wave of the future, in contrast to what Freedman calls the “failure” of conventional medicine.
Let’s dig in, shall we? I’ll start with the article itself, after which I’ll examine a few of the responses. I’ll also note that our very own Steve Novella, who was interviewed for Freedman’s article, has written a response to Freedman’s article that is very much worth reading as well.
Is it ever ethical to provide a placebo treatment? What about when that placebo is homeopathy? Last month I blogged about the frequency of placebo prescribing by physicians. I admitted my personal discomfort, stating I’d refuse to dispense any prescription that would require me to deceive the patient. The discussion continued in the comments, where opinions seemed to range from (I’m paraphrasing) “autonomy, shmatonomy, placebos work” to the more critical, who likened placebo use to “treating adults like children.” Harriet Hall noted, “We should have rules but we should be willing to break them when it would be kinder to the patient, and would do no harm.” On reflection, Harriet’s perspective was one I could see myself accepting should I be in a situation like the one she described. It’s far easier to be dogmatic when you don’t have a patient standing in front of you. But the comments led me to consider possible situations where a placebo might actually be the most desirable treatment option. If I find some, should I be as dogmatic about homeopathy as I am about other placebos?
Helpfully, Kevin Smith, writing in the journal Bioethics, examines the ethics of placebos based on an analysis of homeopathy. Homeopathy is the ultimate placebo in routine use: most remedies contain only sugar and water, lacking a single molecule of any potentially medicinal ingredient. Smith’s paper, Against Homeopathy — A Utilitarian Perspective, is sadly behind a paywall, so I’ll try to summarize his analysis and add my perspective as a health care worker who regularly encounters homeopathy.
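The claim that a remedy lacks even a single molecule of active ingredient is simple arithmetic. As a back-of-the-envelope sketch (the 1-mole starting quantity is a deliberately generous assumption, not a figure from Smith’s paper), here is why a typical 30C dilution contains nothing but the diluent:

```python
# Back-of-the-envelope: why a 30C homeopathic remedy contains essentially
# no molecules of the starting substance. "30C" means thirty serial 1:100
# dilutions, i.e. an overall dilution factor of 100**30 = 10**60.
AVOGADRO = 6.022e23          # molecules per mole

moles_start = 1.0            # generously assume a full mole of active ingredient
dilution_30c = 100.0 ** 30   # 10**60

molecules_left = moles_start * AVOGADRO / dilution_30c
print(molecules_left)        # ~6.0e-37: far less than one molecule remains
```

Since 10^60 dwarfs Avogadro’s number (about 6 × 10^23), the expected number of surviving molecules is around 10^-37, which is why “sugar and water” is a literal description rather than rhetoric.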
Three weeks ago, the anti-vaccine movement took a swing for the fences and, as usual, made a mighty whiff that produced a breeze easily felt in the bleachers. In brief, a crew of anti-vaccine lawyers headed by Mary Holland, co-author of Vaccine Epidemic: How Corporate Greed, Biased Science, and Coercive Government Threaten Our Human Rights, Our Health, and Our Children, published a highly touted (by Generation Rescue and other anti-vaccine groups, that is) “study” claiming to “prove” that the Vaccine Injury Compensation Program (VICP) had actually compensated children for autism. As is typical with such “studies” generated by the anti-vaccine movement, it was bad science, bad law, and just plain bad all around. The authors intentionally conflated “autism-like” symptoms with autism, trying to claim that children with neurological injury with “autism-like” symptoms actually have autism. Never mind that there are specific diagnostic criteria for autism and that, if the children actually had autism, many of them would have been given a diagnosis of autism. Never mind that what they were doing was akin to claiming that all patients with “Parkinson’s-like symptoms” have Parkinson’s disease. (Hint: They don’t.) Never mind that all they did was to demonstrate a prevalence of autism spectrum disorders among the VICP-compensated children that was clearly within the range of what would be anticipated if there were no relationship between vaccines and autism. Never mind all that. This was Holland’s big chance, but it went over like the proverbial lead balloon. No one bit, other than FOX News.
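The key statistical point, that the observed number of autism spectrum diagnoses among compensated children is what you would expect by chance, can be sketched with a simple binomial calculation. The numbers below are purely illustrative (a rough circa-2011 ASD prevalence of 1 in 110 and a hypothetical sample of 2,500 compensated children, not the study’s actual figures):

```python
# Illustrative sketch: how many ASD diagnoses would we expect in a group of
# VICP-compensated children if vaccines had no relationship to autism at all?
# Numbers are hypothetical, chosen only to show the shape of the argument.
import math

prevalence = 1 / 110      # rough background ASD prevalence estimate, circa 2011
n_compensated = 2500      # hypothetical number of compensated children

expected = n_compensated * prevalence
# Normal approximation to the binomial for a rough 95% chance range
sd = math.sqrt(n_compensated * prevalence * (1 - prevalence))
low, high = expected - 1.96 * sd, expected + 1.96 * sd

print(f"expected ≈ {expected:.1f}, 95% range ≈ ({low:.1f}, {high:.1f})")
# → expected ≈ 22.7, 95% range ≈ (13.4, 32.0)
```

Any observed count falling inside that range is exactly what the null hypothesis predicts, which is why demonstrating such a prevalence proves nothing about causation.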
The study rapidly faded into the obscurity it so richly deserves, in spite of mighty efforts by Generation Rescue, SafeMinds, and the likes of Ginger Taylor to keep it alive and use it as a rallying point to persuade legislators to pass anti-vaccine-friendly legislation. You could feel the frustration in its backers as Holland’s study, into which groups like Generation Rescue had apparently poured their hopes of being vindicated, crashed and burned.
However, there’s one aspect of this study that I didn’t discuss. In fact, I thought of it as I read it, but I wasn’t sure. What I (and others) have noticed is that there was no statement in the article that approval had been obtained from the relevant institutional review boards (IRBs) to do human subjects research. For those not familiar with what an IRB is, an IRB is a committee that oversees all human subject research for an institution. It is the IRB’s responsibility to make sure that all studies are ethical in design and that they conform to all federal regulations. Basically, IRBs are charged with weighing the risks and benefits of proposed human subject research and making sure that
- risks are minimized and that the risk:benefit ratio, at least as well as it can be estimated, is very favorable;
- any pain, suffering, or distress that might come about because of the experimental therapy is minimized; and
- researchers obtain truly informed consent.
During the course of a study, regular reports must be made to the IRB, which can shut down any study in its institution if it has concerns about patient welfare.
Whether it’s acupuncture, homeopathy, or the latest supplement, placebo effects can be difficult to distinguish from real effects. Today’s post sets aside the challenge of identifying placebo effects and looks at how placebos are used in routine medical practice. I’ve been a pharmacist for almost 20 years, and I have never seen a placebo used in practice in which the patient was actively deceived by the physician and the pharmacist. So I was quite surprised to see some placebo usage figures cited by Tom Blackwell, writing in the National Post last week:
The practice is discouraged by major medical groups, considered unethical by many doctors and with uncertain benefit, but one in five Canadian physicians prescribes or hands out some kind of placebo to their often-unknowing patients, a new study suggests.
The article references a paper in the Canadian Journal of Psychiatry which, sadly, does not have much of a web presence. The article continues:
Imagine living 20 years spending 24 hours a day in a cage that tightly fits your body, not giving you room to stand up, stretch out, turn around, or move at all.
Imagine that twice a day during these years you would have a metal catheter inserted into a hole which has been cut into your abdomen, allowing the catheter to easily puncture your gall bladder, or maybe a long syringe inserted into your gall bladder, piercing through your skin again and again, by people who are not doctors.
Imagine becoming infected and cancerous because of this twice-daily physical invasion, and becoming neurotic due to your claustrophobic imprisonment.
Imagine having one or both of your hands cut off so someone can sell them for a lot of money.
Imagine you begin to chew at your hands, if you are lucky enough to have one or both left, due to your developing neuroticism, and to distract yourself from the pain you experience twice a day, every day, for your entire life.
This is reality for an estimated minimum of 12,000 bears across Asia.
— Sara Pegarella, JD
Currently, animal activists across China are up in arms because Gui Zhen Tang Pharmaceutical Corporation, a Fujian-based company that sells bear bile for use in Traditional Chinese Medicine (TCM), has tried to increase production through an initial public offering (IPO). The company is being accused of cruelty towards animals in the process of extracting their bile at an industrial scale. Bear bile, or Xiong Dan (熊胆), is an important ingredient in TCM.
Science-based medicine depends upon human experimentation. Scientists can do the most fantastic translational research in the world, starting with elegant hypotheses tested through in vitro and biochemical experiments and then in animals. They can understand disease mechanisms down to the individual amino acid in a protein or nucleotide in a DNA molecule. However, without human testing, they will never know whether the end results of all that elegant science will actually do what they are intended to do and make real human patients better. They will never know if the fruits of all that labor will actually cure disease. Yet it is in human experimentation that the ethics of science most tend to clash with the mechanisms of science. We refer to “science-based medicine” (SBM) as “based” in science, rather than simply as science, largely because medicine can never be pure science. Science has produced amazing medical advances over the last century, but if there is one thing we have learned, it’s that, because clinical trials involve living, breathing, fellow human beings, the most scientifically rigorous trial design might not be the most ethical.
About a week ago, the AP reported that experiments and clinical trials resembling the infamous Tuskegee syphilis study and the less well known, but recently revealed, Guatemala syphilis experiment were far more common than we might like to admit. As I sat through talks about clinical trial results at the Society of Surgical Oncology meeting in San Antonio over the weekend, the revelations of the last week reminded me that questions at the intersection of science and ethics in medicine can frequently be very tough indeed. In fact, in many of the discussions, questions of what could or could not be done based on ethics were frequently raised, such as whether it is ethically acceptable or possible to do certain followup trials to famous breast cancer clinical trials. Unfortunately, it was not so long ago that such questions were answered in ways that bring shame on the medical profession.
NB: This is a partial posting; I was up all night ‘on-call’ and too tired to continue. I’ll post the rest of the essay later…
This is the fourth and final part of a series-within-a-series* inspired by statistician Steve Simon. Professor Simon had challenged the view, held by several bloggers here at SBM, that Evidence-Based Medicine (EBM) has been mostly inadequate to the task of reaching definitive conclusions about highly implausible medical claims. In Part I, I reiterated a fundamental problem with EBM, reflected in its Levels of Evidence scheme, that although it correctly recognizes basic science and other pre-clinical evidence as insufficient bases for introducing novel treatments into practice, it fails to acknowledge that they are necessary bases. I explained the difference between “plausibility” and “knowing the mechanism.”
I showed, with several examples, that in the EBM lexicon the word “evidence” refers almost exclusively to the results of clinical trials: thus, when faced with equivocal or no clinical trials of some highly implausible claim, EBM practitioners typically declare that there is “not enough evidence” to either accept or reject the claim, and call for more trials—although in many cases there is abundant evidence, other than clinical trials, that conclusively refutes the claim. I rejected Prof. Simon’s assertion that we at SBM want to “give (EBM) a new label,” making the point that we only want it to live up to its current label by considering all the evidence. I doubted Prof. Simon’s contention that “people within EBM (are) working both formally and informally to replace the rigid hierarchy with something that places each research study in context.”
In Part II I responded to the widely held assertion, also held by Prof. Simon, that there is “societal value in testing (highly implausible) therapies that are in wide use.” I made it clear that I don’t oppose simple tests of basic claims, such as the Emily Rosa experiment, but I noted that EBM reviewers, including those employed by the Cochrane Collaboration, typically ignore such tests. I wrote that I oppose large efficacy trials and public funding of such trials. I argued that the popularity gambit has resulted in human subjects being exposed to dangerous and unethical trials, and I quoted language from ethics treatises specifically contradicting the assertion that popularity justifies such trials. Finally, I showed that the alleged popularity of most “CAM” methods—as irrelevant as it may be to the question of human studies ethics—has been greatly exaggerated.
This is the third post in this series*; please see Part II for a review. Part II offered several arguments against the assertion that it is a good idea to perform efficacy trials of medical claims that have been refuted by basic science or by other, pre-trial evidence. This post will add to those arguments, continuing to identify the inadequacies of the tools of Evidence-Based Medicine (EBM) as applied to such claims.
Prof. Simon Replies
Prior to the posting of Part II, statistician Steve Simon, whose views had been the impetus for this series, posted another article on his blog, responding to Part I of this series. He agreed with some of what both Dr. Gorski and I had written:
The blog post by Dr. Atwood points out a critical distinction between “biologically implausible” and “no known mechanism of action” and I must concede this point. There are certain therapies in CAM that take the claim of biological plausibility to an extreme. It’s not as if those therapies are just implausible. It is that those therapies must posit a mechanism that “would necessarily violate scientific principles that rest on far more solid ground than any number of equivocal, bias-and-error-prone clinical trials could hope to overturn.” Examples of such therapies are homeopathy, energy medicine, chiropractic subluxations, craniosacral rhythms, and coffee enemas.
The Science Based Medicine site would argue that randomized trials for these therapies are never justified. And it bothers Dr. Atwood when a systematic review from the Cochrane Collaboration states that no conclusions can be drawn about homeopathy as a treatment for asthma because of a lack of evidence from well conducted clinical trials. There’s plenty of evidence from basic physics and chemistry that can allow you to draw strong conclusions about whether homeopathy is an effective treatment for asthma. So the Cochrane Collaboration is ignoring this evidence, and worse still, is implicitly (and sometimes explicitly) calling for more research in this area.
On the other hand:
There are a host of issues worth discussing here, but let me limit myself for now to one very basic issue. Is any research justified for a therapy like homeopathy when basic physics and chemistry will provide more than enough evidence by itself to suggest that such research is futile(?) Worse still, the randomized trial is subject to numerous biases that can lead to erroneous conclusions.
I disagree for a variety of reasons.
This is the second post in a series* prompted by an essay by statistician Stephen Simon, who argued that Evidence-Based Medicine (EBM) is not lacking in the ways that we at Science-Based Medicine have argued. David Gorski responded here, and Prof. Simon responded to Dr. Gorski here. Between that response and the comments following Dr. Gorski’s post it became clear to me that a new round of discussion would be worth the effort.
Part I of this series provided ample evidence for EBM’s “scientific blind spot”: the EBM Levels of Evidence scheme and EBM’s most conspicuous exponents consistently fail to consider all of the evidence relevant to efficacy claims, choosing instead to rely almost exclusively on randomized, controlled trials (RCTs). The several quoted Cochrane abstracts, regarding homeopathy and Laetrile, suggest that in the EBM lexicon, “evidence” and “RCTs” are almost synonymous. Yet basic science or preliminary clinical studies provide evidence sufficient to refute some health claims (e.g., homeopathy and Laetrile), particularly those emanating from the social movement known by the euphemism “CAM.”
It’s remarkable to consider just how unremarkable that last sentence ought to be. EBM’s founders understood the proper role of the rigorous clinical trial: to be the final arbiter of any claim that had already demonstrated promise by all other criteria—basic science, animal studies, legitimate case series, small controlled trials, “expert opinion,” whatever (but not inexpert opinion). EBM’s founders knew that such pieces of evidence, promising though they may be, are insufficient because they “routinely lead to false positive conclusions about efficacy.” They must have assumed, even if they felt no need to articulate it, that claims lacking such promise were not part of the discussion. Nevertheless, the obvious point was somehow lost in the subsequent formalization of EBM methods, and seems to have been entirely forgotten just when it ought to have resurfaced: during the conception of the Center for Evidence-Based Medicine’s Introduction to Evidence-Based Complementary Medicine.
Thus, in 2000, the American Heart Journal (AHJ) could publish an unchallenged editorial arguing that Na2EDTA chelation “therapy” could not be ruled out as efficacious for atherosclerotic cardiovascular disease because it hadn’t yet been subjected to any large RCTs—never mind that there had been several small ones, and abundant additional evidence from basic science, case studies, and legal documents, all demonstrating that the treatment is both useless and dangerous. The well-powered RCT had somehow been transformed, for practical purposes, from the final arbiter of efficacy to the only arbiter. If preliminary evidence was no longer to have practical consequences, why bother with it at all? This was surely an example of what Prof. Simon calls “Poorly Implemented Evidence Based Medicine,” but one that was also implemented by the very EBM experts who ought to have recognized the fallacy.
There will be more evidence for these assertions as we proceed, but the main thrust of Part II is to begin to respond to this statement from Prof. Simon: “There is some societal value in testing therapies that are in wide use, even though there is no scientifically valid reason to believe that those therapies work.”