Glucosamine and chondroitin, used separately or together, are among the more popular diet supplements. They are used widely for osteoarthritis, especially of the knee, and have been better studied than most other diet supplements. But do they really work?
The journal of my medical specialty, American Family Physician, recently published an article about the use of dietary supplements in osteoarthritis. They gave a “B” evidence rating to both glucosamine and chondroitin. This means there is inconsistent or limited-quality patient-oriented evidence. They recommended the use of glucosamine sulfate, saying, “Overall, the evidence supports the use of glucosamine sulfate for modestly reducing osteoarthritis symptoms and possibly slowing disease progression.” They did not exactly recommend chondroitin, although they said it “may provide modest benefit for some patients.”
I remain skeptical. And so does R. Barker Bausell, who devoted several pages of his book Snake Oil Science to an analysis of the research on glucosamine and chondroitin.
Part IV of the ongoing Homeopathy series will have to wait a day or two, because it is superseded by a recent, comment-worthy publication. Nevertheless, “H series” fans will find here a bit of grist for that mill, too.
An important role for this blog is to discuss problems of interpreting data from clinical studies. Academic medicine has committed itself, on the whole, to scientific rigor—to the extent that this is possible in messy, clinical (especially human) trials. Several tools have been proposed, and to a varying extent used, to enhance the rigor of clinical research and the reporting of clinical research. One of those tools is the registering of clinical trials prior to recruiting subjects. Registration would stipulate a trial’s a priori hypothesis(es), design, planned endpoints, and planned statistical methods, among other things. This would guard against several problems: publication bias—the tendency for some trials, usually “negative” ones, to go unreported; selective reporting of the results of a trial, if some are pleasing but others are not; and post hoc data analysis—finding data after the fact to suggest a novel hypothesis that will falsely be portrayed as an a priori hypothesis. Publication bias is also known as “selective publication” or the “file drawer problem”; post hoc analysis is also known as “data dredging” or “HARKing” (Hypothesizing After the Results are Known).
An article in the Jan. 17 issue of the New England Journal of Medicine demonstrates the usefulness of a trial registry:
Selective Publication of Antidepressant Trials and Its Influence on Apparent Efficacy
Erick H. Turner, M.D., Annette M. Matthews, M.D., Eftihia Linardatos, B.S., Robert A. Tell, L.C.S.W., and Robert Rosenthal, Ph.D.
A few years back, my co-blogger Wally Sampson wrote a now infamous editorial entitled Why the National Center for Complementary and Alternative Medicine (NCCAM) Should Be Defunded. When I first read it, I must admit, I found it to be a bit harsh and–dare I say?–even closed-minded. After all, plausibility aside, I believed at the time that the only way to demonstrate once and for all, in a way that everyone would have to accept, that many of these “alternative” therapies were no more effective than a placebo would be to do high-quality randomized clinical trials to test whether they worked, and NCCAM seemed to be the perfect funding agency to see that this occurred. Yes, in retrospect this attitude was quite naïve. Over several years I have since learned the hard lesson that no amount of studies will convince advocates of complementary and alternative medicine (CAM) that their favored therapy doesn’t work, be it chelation therapy for autism or cardiovascular disease, homeopathy, reiki, or various other “energy” therapies that invoke manipulation of qi as a means of “healing,” such as acupuncture. But that is what I believed at the time.
Recently the Federal Trade Commission went after the makers of the Q-Ray Ionized Bracelet for their claims that their device was a cure for chronic pain. Last week Seventh Circuit judge Frank Easterbrook handed down his opinion on the company’s appeal, writing that the company was guilty of fraud and ordering them to pay $16 million in fines. One of the key points of the company’s defense was that the Q-Ray Ionized Bracelet is legit because it exhibits the placebo effect. Judge Easterbrook was not impressed with this argument, writing:
“Like a sugar pill it alleviates symptoms even though there is no apparent medical reason. Since the placebo effect can be obtained from sugar pills, charging $200 for a device that is represented as a miracle cure but works no better than a dummy pill is a form of fraud.”
This decision creates an interesting precedent, since there are a large number of fanciful treatments that do not have any “apparent medical” mechanism and that are claimed by their proponents to work through a placebo effect. In my experience the placebo effect, briefly defined as a measurable response to an inert treatment, is almost completely misunderstood by the public – a fact that is exploited by purveyors of dubious treatments such as the Q-Ray. Already in the comments of this blog there has been discussion over the nature of the placebo effect.
For my first blog entry, I wanted to write about something important, and I couldn’t think of anything more important than a recent book by R. Barker Bausell: Snake Oil Science: The Truth About Complementary and Alternative Medicine. If you want to understand how medical research works, if you want to know what can lead patients and scientists to false conclusions, if you have ever used complementary or alternative medicine or have wondered why others do, if you value evidence over belief, if you care about the truth, you will find a treasure trove of information in this book.
Some of the treatments encompassed under “complementary and alternative medicine” (CAM) have been around for a long time. Before we had science, “CAM” was medicine. Back then, all we had to rely on was testimonials and beliefs. And even today, for most people who believe CAM works, belief is enough. But at some level, the public has now recognized that science matters and people are looking for evidence to support those beliefs. Advocates claim that recent research validates CAM therapies. Does it really? Does the evidence show that any CAM therapy actually works better than placebos? R. Barker Bausell asks that question, does a compellingly thorough investigation, and comes up with a resounding “NO” for an answer.
Bausell is the ideal person to ask such a question. He is a research methodologist: he designs and analyzes research studies for a living. Not only that, he was intimately involved with acupuncture research for the National Center for Complementary and Alternative Medicine (NCCAM). So when he talks about what can go wrong in research and why much of the research on CAM is suspect, he is well worth listening to.
“Either homeopathy works or controlled trials don’t!”
—Scottish homeopath David Reilly at the 2001 Harvard Medical School Complementary and Integrative Medicine Conference.
Reilly based that assertion on his own series of four small studies of homeopathic treatments of hay fever, asthma, and allergic rhinitis, the outcomes of which had been inconsistent and largely subjective. (1) Later he explained that small-minded skeptics in “conventional medicine” assume “homeopathy doesn’t work because it can’t work,” a view echoed by conference host Dr. David Eisenberg, then the Director of the Center for Alternative Medicine Research and Education at Harvard Medical School (now of the Osher Center); these comments were met with appreciative laughter from the partisan audience. If such charges were valid, it would indeed be fortunate that Harvard Medical School, several other medical schools, and the National Center for Complementary and Alternative Medicine (NCCAM) are promoting homeopathy, both as a clinical method and as a topic worthy of research.