Blogger’s note: This blog, which is rough going in places, will be presented in either 2 or 3 parts (I won’t know which until next week). I’ll post a part each week until it is complete, but due to overwhelming popular demand I promise to maintain the every-other-week posting of the far more amusing Weekly Waluation of the Weasel Words of Woo/2.
On Feb. 25, 2008, the federal Office for Human Research Protections (OHRP) cited Columbia University Medical Center (CUMC) for violating Title 45, Part 46 of the Code of Federal Regulations: Protection of Human Subjects (45CFR§46). The violations involved Columbia’s administration of the NIH-sponsored trial of the bizarre “Gonzalez Regimen” for treating cancer of the pancreas.† The OHRP’s determination letter to Steven Shea, MD, the Director of the Division of General Medicine and Senior Vice-Dean at CUMC, cited ethical problems of a serious kind:
We determine that the informed consent for the 40 of 62 subjects referenced by CUMC was not documented prior to the start of research activities, nor was the requirement for documentation waived by the CUMC IRB for subjects in this study.
It was the second time that the OHRP had cited Columbia for its dubious management of the “Gonzalez” trial. The first occurred in Dec. 2002, after investigators had determined that the trial’s consent form “did not list the risk of death from coffee enemas.” The OHRP listed several other violations at that time, but “redacted” them from the letter that it made available to the public. (more…)
You Can’t Foo’ Stu with Woo!
A Spitzerian (“pointed”) analysis
Last week’s inaugural game elicited several amusing and penetrating analyses, including that of the hands-down Gold Medal Winner, Stu. His was the first entry, introduced in a concise and alliterative imperative, and was both hilarious and timely. It implied most of the points discussed by others. This distinctive combination has moved me to grant Stu a legacy here at the W^5. In the future there may be, undoubtedly no more than once in a very long while, entries that live up to the Soaring Standard of Stu®. If so, they will be Duly Acknowledged. (more…)
I promised readers the “Advanced Course” for this week, which undoubtedly has you shaking in your boots. Fear not: you’ve already had a taste of advanced, subtle, misleading “CAM” language, and most of you probably “got” it. That was R. Barker Bausell’s analysis of how homeopathy is “hypothesized to work.” In the interest of civility, let me reiterate that I don’t think of Bausell as a horrible person or an ignorant boor for having written that statement. Rather, I think of him as having been so steeped in the de rigueur “CAM” language distortions of the 1990s that he is largely unaware of their insidious power. I suspect too that he, like most of us who grew up when schools no longer stressed the rigors of English composition, has an underdeveloped sense of the relation between the craft of writing and the integrity of its content. That doesn’t excuse him from writing honest prose, of course.
Last week’s post cited blatant language distortions of “CAM”—euphemisms, slogans, and outright falsehoods—and some that were more subtle: question-begging, misrepresentation, and derogation. It would require a semester’s worth of seminars to delve into the overlapping categories of misleading “CAM” language, but here we can consider a few. Then, perhaps, we’ll engage in an amusing diversion—more about that at the end of this post. (more…)
The Best Policy
From time to time I have been reiterating that correct use of the language has much to do with logic; I should add that it entails also honesty. I use the word “honesty” in its broadest sense…
Concision is honesty, honesty concision—that’s one thing you need to know.
—John Simon. Paradigms Lost: Reflections on Literacy and its Decline. New York, NY: Clarkson N. Potter, Inc.; 1980. pp. 48, 52
In 1983, a naturopath in Alberta inserted balloons into the nostrils of a 20-month-old girl and inflated them. The child died of asphyxiation. Subsequently, a judge described the treatment—dubbed “bilateral nasal specific” by the chiropractor who had invented it—as “outright quackery.” Fast-forward 15 years: a woman presented to the otolaryngology clinic at the University of Washington in Seattle “complaining of severe midface pain and epistaxis” (nosebleed). She had suffered nasal septal fractures caused by a similar treatment, by then renamed “NeuroCranial Restructuring” (NCR). In their case report, the surgeons who had treated the woman discussed the claims of NCR and explained that the relevant anatomy shows it to be both implausible and risky. They also reported that it is expensive: “$2000 to $4800 for a standard course (of 4 treatments).” They concluded:
This case report of a complication after a CAM procedure called NCR highlights the wide range of treatment options available to patients. It is important for otolaryngologists to be aware of the spectrum of CAM therapies that patients may pursue and be aware of potential complications from these procedures.
An accompanying editorial used similar language.
How is it that in 1983 a judge could offer a concise summary of the essence of such a method, whereas scarcely a generation later five highly trained medical doctors, even after presenting the sordid facts, could only obscure it with bland euphemism? (more…)
After the previous posting on the Bayesian approach to clinical trial data, several new comments made it clear to me that more needed to be said. This posting addresses those comments and adds a few more observations regarding the unfortunate consequences of EBM’s neglect of prior probability as it applies to “complementary and alternative medicine” (“CAM”).†
The “Galileo Gambit” and the Statistics Gambit
Reader durvit wrote:
A very interesting example, for a number of people, might be estimating the prior probability for Marshall and Warren’s early work on Helicobacter pylori and its impact on gastroduodenal management. I frequently have Marshall quoted to me as a variation on the Galileo gambit, so establishing whether he and Warren would have been helped or hindered by Bayesian techniques would be useful.
This suggestion raises a couple of issues. First, the “Galileo gambit” regarding Marshall and Warren’s discovery is a straw man (as durvit seems to have surmised). (more…)
This is an addendum to my previous entry on Bayesian statistics for clinical research.† After that posting, a few comments made it clear that I needed to add some words about estimating prior probabilities of therapeutic hypotheses. This is a huge topic that I will discuss briefly. In that, happily, I am abetted by my own ignorance. Thus I apologize in advance for simplistic or incomplete explanations. Also, when I mention misconceptions about either Bayesian or “frequentist” statistics, I am not doing so with particular readers in mind, even if certain comments may have triggered my thinking. I am quite willing to give readers credit for more insight into these issues than might be apparent from my own comments, which reflect common, initial difficulties in digesting the differences between the two inferential approaches. Those include my own difficulties, after years of assuming that the “frequentist” approach was both comprehensive and rational—while I had only a cursory understanding of it. That, I imagine, placed me well within two standard deviations of the mean level of statistical knowledge held by physicians in general.
This is actually the second entry in this series;† the first was Part V of the Homeopathy and Evidence-Based Medicine series, which began the discussion of why Evidence-Based Medicine (EBM) is not up to the task of evaluating highly implausible claims. That discussion made the point that EBM favors equivocal clinical trial data over basic science, even if the latter is both firmly established and refutes the clinical claim. It suggested that this failure in calculus is not an indictment of EBM’s originators, but rather was an understandable lapse on their part: it never occurred to them, even as recently as 1990, that EBM would soon be asked to judge contests pitting low-powered, bias-prone clinical investigations and reviews against facts of nature elucidated by voluminous and rigorous experimentation. Thus although EBM correctly recognizes that basic science is an insufficient basis for determining the safety and effectiveness of a new medical treatment, it overlooks its necessary place in that exercise.
This entry develops the argument in a more formal way. In so doing it advocates a solution to the problem that has been offered by several others, but so far without real success: the adoption of Bayesian inference for evaluating clinical trial data.
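The arithmetic behind that argument can be sketched in a few lines. This is my own illustrative toy, not a method from the post: it treats a single “positive” clinical trial as a diagnostic test whose sensitivity is the trial’s statistical power and whose false-positive rate is its significance threshold, then applies Bayes’ theorem to priors of varying plausibility.

```python
# A minimal sketch of the Bayesian point at issue: a "statistically
# significant" trial (p < 0.05) barely moves a hypothesis whose prior
# probability is already very low. Power and alpha values are assumptions.

def posterior(prior, power=0.8, alpha=0.05):
    """P(hypothesis true | one positive trial), treating the trial as a
    diagnostic test with sensitivity `power` and false-positive rate `alpha`."""
    true_pos = prior * power          # truly effective AND trial is positive
    false_pos = (1 - prior) * alpha   # ineffective, yet trial is positive
    return true_pos / (true_pos + false_pos)

for prior in (0.5, 0.1, 0.001):  # plausible drug vs. homeopathy-like claim
    print(f"prior {prior:>5}: posterior after one positive trial = {posterior(prior):.3f}")
```

Under these assumptions a prior of 0.5 yields a posterior near 0.94, while a homeopathy-like prior of 0.001 yields a posterior below 0.02: one “positive” trial cannot, by itself, rescue a wildly implausible claim.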
Homeopathy and Science: Discussion, Summary and Conclusions
I was not surprised by a couple of the dissenting comments after Part IV of this blog. One writer worried that I had neglected, presumably for nefarious reasons, to cite replications of Benveniste’s results; another cited several examples of “positive” homeopathy studies that I had failed to mention. I answered some of those points here. I am fully aware of such “positive” reports, including those seeming to support Benveniste. I didn’t cite them, but not in some futile hope of concealing their existence from the watchful eyes of the readership. I also didn’t cite several “negative” reports, including an independent, disconfirming report of one of the claims of David Reilly, whose words began this series,* and the most recent of several reviews (referenced here) to conclude that “the clinical effects of homoeopathy are placebo effects.” I didn’t cite those reports for the same reasons that I didn’t cite the “positive” studies: they are mere footnotes to the overwhelming evidence against homeopathy.
To explain why, it will be necessary to discuss some of the strengths and weaknesses of the project known as “Evidence-Based Medicine.”
Homeopathy and Science
This week’s entry† is a summary of some of the tests of homeopathy. It is a necessary prelude to a discussion of how homeopaths and their apologists promote the method. Several tenets of homeopathy lend themselves to tests. The doctrine of similia similibus curantur (“like cures like”) was tested by Hahnemann himself, as introduced in Part I of this blog. It is a special case that will be discussed further below. Hahnemann’s second doctrine, “infinitesimals,” suggests laboratory, animal, and clinical studies looking for specific effects of homeopathic preparations.
“Provings,” also called “homeopathic pathogenic trials,” suggest testing “provers” for the ability to distinguish between homeopathic preparations and placebos, and suggest asking homeopaths to identify specific remedies solely by the “symptoms” they elicit in “provers.” The homeopathic interview and prescribing scheme, gathering copious “symptoms” and matching them to the appropriate “remedy” in the Materia Medica, suggests testing homeopaths for consistency in symptom interpretations and prescriptions. The clinical practice suggests outcome studies, both of individual “conditions” (with the caveat that, strictly speaking, homeopathy does not recognize disease categories—only “symptom” complexes) and of the practice as a whole.
Several of these categories overlap. Several have been tested: the results have overwhelmingly failed to confirm homeopathy’s claims. I will mention a few of the more conspicuous examples.
Part IV of the ongoing Homeopathy series will have to wait a day or two, because it is superseded by a recent, comment-worthy publication. Nevertheless, “H series” fans will find here a bit of grist for that mill, too.
An important role for this blog is to discuss problems of interpreting data from clinical studies. Academic medicine has committed itself, on the whole, to scientific rigor—to the extent that this is possible in messy, clinical (especially human) trials. Several tools have been proposed, and to a varying extent used, to enhance the rigor of clinical research and the reporting of clinical research. One of those tools is the registering of clinical trials prior to recruiting subjects. Registration would stipulate a trial’s a priori hypothesis(es), design, planned endpoints, and planned statistical methods, among other things. This would guard against several problems: publication bias—the tendency for some trials, usually “negative” ones, to go unreported; selective reporting of the results of a trial, if some are pleasing but others are not; and post hoc data analysis—finding data after the fact to suggest a novel hypothesis that will falsely be portrayed as an a priori hypothesis. Publication bias is also known as “selective publication” or the “file drawer problem”; post hoc analysis is also known as “data dredging” or “HARKing” (Hypothesizing After the Results are Known).
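The distorting power of selective publication is easy to demonstrate with a toy simulation (my own sketch, not the method of the study discussed below): run many identical trials of a treatment with a small true effect, then “publish” only the statistically significant ones and compare the apparent effect with the true one.

```python
# A hedged illustration of publication bias: averaging only the
# "significant" trials inflates the apparent effect size well beyond
# the true effect. Effect size, sample size, and trial count are assumptions.
import random
import statistics

random.seed(1)

def run_trial(true_effect=0.1, n=50):
    """Return (observed mean effect, 'significant' flag) for one trial,
    using a simple one-sided z-test approximation."""
    outcomes = [random.gauss(true_effect, 1.0) for _ in range(n)]
    mean = statistics.mean(outcomes)
    se = statistics.stdev(outcomes) / n ** 0.5
    return mean, mean / se > 1.96

results = [run_trial() for _ in range(2000)]
all_means = [m for m, _ in results]
published = [m for m, sig in results if sig]  # the "file drawer" keeps the rest

print("true effect: 0.10")
print(f"mean effect, all {len(results)} trials: {statistics.mean(all_means):.2f}")
print(f"mean effect, 'published' {len(published)} trials: {statistics.mean(published):.2f}")
```

Averaged over all trials the observed effect sits near the true value of 0.10; averaged over only the “published” subset it is several times larger, which is precisely the distortion that prospective trial registration is meant to expose.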
An article in the Jan. 17 issue of the New England Journal of Medicine demonstrates the usefulness of a trial registry:
Selective Publication of Antidepressant Trials and Its Influence on Apparent Efficacy
Erick H. Turner, M.D., Annette M. Matthews, M.D., Eftihia Linardatos, B.S., Robert A. Tell, L.C.S.W., and Robert Rosenthal, Ph.D.