
Homeopathy and the Selling of Nonspecific Effects

One of the core features of science (and therefore science-based medicine) is to precisely identify and control for variables, so that we know what, exactly, is exerting an effect. The classic example of this principle at work is the Hawthorne effect. The term refers to a series of studies performed between 1924 and 1932 at the Hawthorne Works. The studies examined whether or not workers would be more productive in different lighting conditions. So they increased the light levels, observed the workers, and found that their productivity increased. Then they lowered the light levels, observed the workers, and found that their productivity increased. No matter what they did, the workers improved their productivity relative to baseline. Eventually it was figured out that observing the workers caused them to work harder, no matter what was done to the lighting.

This “observer effect” – an artifact of the process of observation – is now part of standard study design (at least in well-designed studies). In medical studies it is one of the many placebo effects that need to be controlled for in order to properly isolate the variable of interest.

There are many non-specific effects – effects that result from the act of treating or evaluating patients rather than from a physiological response to a specific treatment. In addition to observer effects, for example, there is also the “cheerleader” effect from encouraging patients to perform better. There are training effects from retesting. And there are long-recognized non-specific therapeutic effects just from getting compassionate attention from a practitioner. It is a standard part of medical scientific reasoning that, before we ascribe a specific effect to a particular intervention, all non-specific effects must be controlled for and eliminated.

Within the world of so-called “complementary and alternative medicine” (CAM), however, standard scientific reasoning is turned on its head. After failing to find specific physiological benefits for many treatments under the CAM umbrella, proponents are desperately trying to sell non-specific effects as if they are specific to their preferred modalities. In other contexts this might be considered fraud. It is certainly scientifically dubious to the point of dishonesty, in my opinion.

The latest example of this is a study published in the journal Rheumatology: Homeopathy has clinical benefits in rheumatoid arthritis patients that are attributable to the consultation process but not the homeopathic remedy: a randomized controlled clinical trial. The study compared 5 groups: the first three received a homeopathic consultation and either individualized, complex (meaning a standard preparation), or placebo treatment for rheumatoid arthritis; the last two received no consultation and either complex homeopathic treatment or placebo. The study was double-blind with respect to the treatments, but not blinded with respect to whether or not a subject received a homeopathic consultation.

The results:

Fifty-six completed treatment phase. No significant differences were observed for either primary outcome. There was no clear effect due to remedy type. Receiving a homeopathic consultation significantly improved DAS-28 [mean difference 0.623; 95% CI 0.1860, 1.060; P = 0.005; effect size (ES) 0.70], swollen joint count (mean difference 3.04; 95% CI 1.055, 5.030; P = 0.003; ES 0.83), current pain (mean difference 9.12; 95% CI 0.521, 17.718; P = 0.038; ES 0.48), weekly pain (mean difference 6.017; 95% CI 0.140, 11.894; P = 0.045; ES 0.30), weekly patient GA (mean difference 6.260; 95% CI 0.411, 12.169; P = 0.036; ES 0.31) and negative mood (mean difference − 4.497; 95% CI −8.071, −0.923; P = 0.015; ES 0.90).

In other words – there was no difference in any outcome measure among individualized homeopathic treatments, complex (standardized) treatment, or placebo. According to this study – homeopathy does not work for rheumatoid arthritis. That is really the only thing we can conclude from this study.
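
As a quick sanity check on numbers like these, the reported 95% CI and P-value for a given outcome should be consistent with one another. The sketch below is purely illustrative and is my own back-of-the-envelope check, not anything from the paper: it assumes a normal sampling distribution for the mean difference (which only approximates whatever model the authors actually fit) and plugs in the DAS-28 figures quoted above.

```python
from math import erf, sqrt

def p_from_ci(mean_diff, ci_low, ci_high, z_crit=1.96):
    """Back out an approximate standard error from a 95% CI and return a
    two-sided p-value, assuming a normal sampling distribution."""
    se = (ci_high - ci_low) / (2 * z_crit)
    z = abs(mean_diff) / se
    # Two-sided tail probability of a standard normal.
    return 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))

# DAS-28 consultation effect as reported: mean difference 0.623, 95% CI 0.186 to 1.060
print(round(p_from_ci(0.623, 0.186, 1.060), 4))  # ~0.005, consistent with the reported P = 0.005
```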

However, the authors spend a great deal of time trying to argue that the study supports the conclusion that, even though homeopathic products have zero effect, the homeopathic consultation process does have a beneficial effect. This is a flawed conclusion on several levels.

First, the study was not blinded for consultation vs no-consultation. Any comparison of this variable, therefore, is highly unreliable. It is likely no coincidence that the blinded comparisons were all negative, while some of the unblinded comparisons were positive. Also, even there the results are weak. The primary outcome measure was negative. Only the secondary outcome measures, which are mostly subjective, were positive.

And, perhaps most significantly, the study was not even designed to test the efficacy of the homeopathic consultation itself because it was compared essentially to no intervention (and in an unblinded fashion). There is therefore every reason to conclude that any perceived benefit from the consultation process is due to the nonspecific effects of the clinical interaction – attention from a practitioner, expectation of benefit, cheerleader effect, etc.

If the authors wished to test whether there is something special about the homeopathic consultation then they should have compared it to attention from a health care provider that was controlled for time and personal attention, but did not contain elements specific to the homeopathic consultation. This study did not do that.

Conclusion

The study itself was reasonably well designed. It is a bit small, especially for the number of comparison groups, and so was underpowered. But it did make a reasonable and blinded comparison between homeopathic preparations and placebo and found no difference. This is in line with research into homeopathy in general – it is no different from placebo, which means homeopathy does not work.
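
To give a rough sense of what “underpowered” means here, consider a back-of-the-envelope power calculation. The sketch below is an assumption-laden illustration of my own, not a reanalysis of the trial: it assumes roughly equal arms of about 11 completers each (the abstract reports 56 completers spread over five groups) and a moderate standardized effect size of 0.5.

```python
# Approximate power for a single pairwise comparison between two arms,
# using statsmodels' two-sample t-test power routine. The per-arm n of 11
# and the effect size of 0.5 are assumptions for illustration only.
from statsmodels.stats.power import TTestIndPower

power = TTestIndPower().power(effect_size=0.5, nobs1=11, ratio=1.0,
                              alpha=0.05, alternative='two-sided')
print(f"Approximate power: {power:.2f}")
# With arms this small, power for any pairwise comparison falls well short
# of the conventional 80% target.
```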

It is worth pointing out that homeopaths have complained in the past that comparing a standardized homeopathic complex to placebo is not fair, because homeopathy requires an individualized treatment. This, of course, contradicts the claims for any homeopathic product on the shelves. But that point aside, in this study individualized treatments performed no better, contradicting that complaint.

The authors, however, are presenting the study as evidence that the homeopathic consultation works, when this study was not designed to test that variable. The effects can easily be explained as the non-specific effects of therapeutic attention. The study provides no basis upon which to conclude that there is any value to a homeopathic consultation beyond the raw benefit of time and attention.

The study does reinforce what previous studies have shown – attention from a provider does have measurable benefits to quality of life and subjective symptoms (even if not disease-altering). This should prompt a re-evaluation of the priorities given to reimbursing for provider time. It should be noted that homeopaths generally charge directly for their time, and are not typically contracted to accept an insurance-based fee schedule. So comparing a typical homeopathic consultation to a busy physician’s office is not fair.

Further – if we are going to advocate for increased money for provider time and attention, such resources should go to providers who are science-based and use modalities that have efficacy on their own. It is highly misguided, wasteful, and potentially dangerous to rely upon practitioners who utilize an unscientific philosophy and treatments that are admittedly worthless, just because there are non-specific subjective benefits from the attention given.

Edzard Ernst makes the same points in an editorial that the journal Rheumatology ran in the same issue. While I question the decision to publish this study with the conclusion as written (peer-reviewers and editors should have been much harsher on the hype and spin of the authors), at least they had the courage to include Ernst’s comments as well.



31 thoughts on “Homeopathy and the Selling of Nonspecific Effects”

  1. sheldon101 says:

    Not Registered at clinicaltrials.gov

    I thought that clinical trials needed to be registered at clinicaltrials.gov these days. clinicaltrials.gov started in 2000. Here’s a good writeup from 2005: http://www.lhncbc.nlm.nih.gov/lhc/docs/reports/2005/tr2005003.pdf

    The Rheumatology study, which started in 2006, isn’t registered under the name of the first or last author.

    However, both the first and last authors are in the database for a study registered in 2006 that’s still recruiting (really?) and that got funding from a drug company:
    Trial Evaluating Devil’s Claw for the Treatment of Hip and Knee Osteoarthritis
    http://clinicaltrials.gov/ct2/show/NCT00295490?term=lewith&rank=1

    The study has 7 secondary endpoints. And it should have been completed in 2008.

    Anybody want to email the authors?

  2. “The study does reinforce what previous studies have shown – attention from a provider does have measurable benefits to quality of life and subjective symptoms (even if not disease-altering).”

    I’m sorry, I don’t have any training in statistics, but does this study definitely show some measurable benefit? In my reading, one could say that patients who have formed a relationship with a practitioner are more likely to overestimate the benefits in their self-reporting (perhaps out of a sense of obligation).

    Not that I’m against doctors spending more time with patients. I’m only saying that when a doctor has spent a good amount of time finding some way to help your symptoms, there is some social pressure to tell them what they want to hear and say you feel better. Could we be seeing that in these results?

  3. Mojo says:

    However, the authors spend a great deal of time trying to argue that the study supports the conclusion that, even though homeopathic products have zero effect, the homeopathic consultation process does have a beneficial effect.

    In fact, there is evidence from a systematic review published earlier this year that the homoeopathic consultation process does not produce any greater beneficial effect than is produced by the consultation process involved with conventional treatments. See Nuhn T, Lüdtke R, Geraedts M. Placebo effect sizes in homeopathic compared to conventional drugs – a systematic review of randomised controlled trials. Homeopathy. 2010 Jan;99(1):76-82.

    This paper concluded that “[p]lacebo effects in RCTs on classical homeopathy did not appear to be larger than placebo effects in conventional medicine.”

    Note that this is a review of trials of “classical”, or individualized homoeopathy, so they will have involved precisely the type of consultation that is now being credited with a specific beneficial effect.

  4. Mojo – good point. I have heard CAM proponents misuse that study also to argue that homeopathic effects are not due to placebo.

    They definitely play by the “heads I win, tails you lose” game rules.

  5. splicer says:

    @sheldon101

    This trial was not in the U.S. and registration would not be required.

    From your link.
    “The FDA Modernization Act, Section 113 (FDAMA 113) requires that clinical trials for effectiveness conducted under an investigational new drug application (IND) for “serious or life threatening diseases” must be registered in ClinicalTrials.gov.”
    and
    “The concept of “compliance” can be considered for those trials that fall under the scope of FDAMA 113”

  6. JMB says:

    In the ancient past, if a placebo effect was prescribed for an individual patient, that fact was documented in the medical record. Given the development of patients’ rights to access the medical record, most providers would avoid prescribing placebo. Now integrative medicine has solved that problem. Instead of documenting that they have prescribed a placebo, the integrative medicine provider can record that they have provided a homeopathic consultation, and link to the justification from an RCT published in Rheumatology. Then they have successfully obscured the fact from the patient that they have been treated with a placebo effect (unless the patient follows this SBM blog site, or reads Ernst’s editorial).

    I would guess that the homeopathic consultation would have one advantage over a conventional medicine consultation, less documentation is required in the medical record. The provider could even argue that any documentation time would detract from face time with the patient (and face time is critical for the desired effect).

  7. JMB says:

    @splicer

    Who says rheumatoid arthritis is not a serious or life threatening disease?

  8. Gregory Goldmacher says:

    @ micheleinmichigan

    What you are talking about is sometimes called a compliance effect, and is definitely part of the non-specific effects being discussed.

    Others include expectation bias (you expect to feel better, so you perceive symptoms as less severe), the “true” placebo effect (actual physiological changes based on expectation of relief), ascertainment bias (the person measuring outcome knows who got treated and downgrades their symptoms), concurrent care effects (being part of a clinical trial makes you more diligent about taking your meds, good diet, and other lifestyle matters), etc.

  9. Mojo says:

    @JMB

    Now integrative medicine has solved that problem. Instead of documenting that they have prescribed a placebo, the integrative medicine provider can record that they have provided a homeopathic consultation, and link to the justification from an RCT published in Rheumatology.

    Couldn’t they just record that they have prescribed a homoeopathic remedy? Anyone with a reasonable understanding of the research would be able to identify that as a placebo.

  10. WilliamLawrenceUtridge says:

    Effin’ really guys? There’s only so much Dullman I can take in a week. Couldn’t we alternate between homeopathy and vaccination so the comments can alternate between debunking Thing vs. Dullman? Sigh.

    Anyway, Dullman’s going to be screeching that the low number of subjects in each group precludes a definite conclusion, oblivious to the fact that I’ve been pointing out all his “positive” studies have this exact same problem. That is my prediction, let’s see if I’m right.

    It’s a shame (and somewhat telling) the authors didn’t take the study to its ultimate conclusion and have a conventional, but lengthy and detailed, consultation with a medical practitioner. But we’re already expending precious research resources chasing down myths and nonsense, so I can’t be too disappointed. Not that this research group is likely to produce much of merit anyway based on what they’re choosing to study and how they spin it.

  11. Badly Shaved Monkey says:

    I like the fact that Dr Novella started this piece with mention of the Hawthorne Effect, because it raises an important corollary that I don’t think has been addressed yet – how long-term is any improvement likely to be?

    Certainly, before according any special effect to a homeopathic consultation rather than a bog-standard consultation, such a comparison should have been included as another group, though, as Mojo points out, there is evidence to say that the homeopathic consultation is not very special. However, what is the real significance of any improvement resulting from any non-specific aspects of the intervention?

    Presumably in the original Hawthorne experiments, the workers settled back to some sort of pre-intervention baseline after each experiment. The effects could not possibly be cumulative or it would have ended with their hands working at relativistic speeds and the factory would have eventually imploded into a black hole.

    No, the Hawthorne Effect essentially must describe a temporary amelioration of a basal state.

    This is the point that is missed by the Reformed Church of Homeopathy whose adherents, like Lewith and Peter Fisher, get very close to saying it’s the consultation not the pills that work.

    In other words, I would strongly suspect that the placebo effect’s duration is limited to the duration of the study. Without getting all quantum, we only know there is a placebo effect when we actively look for it, but by looking for it we create it.

    What would be needed is for the participants to be approached after the advertised end of the study and be asked how they are doing in a manner that avoids implying it is a continuation of the original study, or uses an objective measure of their disease that is obtained during and after the study period without the patients being aware of its importance.

    In the real world, this property would be represented by the tendency of patients to drift from SCAMster to SCAMster or back to their conventional physician, disillusioned by the SCAM du jour once its magic has worn off. Successful SCAMsters manage to keep the story changing to lead the patient onwards – homeopathy is a system of excuses masquerading as a system of medicine and is well-suited to this. But, eventually for many patients the hard physical reality of chronic fluctuating incurable diseases simply comes back to bite them on the bum. SCAMsters lose track of these patients and add them to their lists of ‘successes’. The real doctor, as the final common pathway of all chronic disease, sees them returning as ‘heart-sink’ patients yet again and counts them as failures.

  12. Mojo says:

    In other words, I would strongly suspect that the placebo effect’s duration is limited to the duration of the study.

    Successful SCAMsters manage to keep the story changing to lead the patient onwards – homeopathy is a system of excuses masquerading as a system of medicine and is well-suited to this. But, eventually for many patients the hard physical reality of chronic fluctuating incurable diseases simply comes back to bite them on the bum.

    Of course, that would contradict all those homoeopathic claims that, especially in the case of chronic diseases, they do not merely “suppress” the symptoms on a temporary basis, but provide lasting cures.

  13. BKsea says:

    What this study seems to say is that homeopaths should just be giving their patients sugar pills…

  14. pmoran says:

    Adding to the many questions already raised by Steve’s piece:

    – can placebo and other non-specific influences be stronger under more favourable conditions than those that usually apply within clinical trials?

    – will certain constraints upon mainstream medical practice prevent it from ever fully exploiting them?

    There is nothing settled about this aspect of medical science.

  15. Doc says:

    Steve Novella, you bring up the “heads I win, tails you lose” attitude but fail to realize that it’s exactly what so-called “skeptics” use against their “mortal” foes.

    Here is a paper on exactly how so-called “skeptics” apply that principle.

    http://www.sheldrake.org/D&C/controversies/Carter_Wiseman.pdf

    I can appreciate that you are not a real scientist and probably just want to climb the ladder politically at Yale but I also wonder if you have connections to big pharma. Of course you won’t admit that but frankly most see you that way.

  16. Chris says:

    I believe Dr. Novella is familiar with Sheldrake.

    Also, the Pharma Shill Gambit is old, tired, and quite boring. Could you try using something more original, like actual data and evidence? That would really work on skeptics. Trust me!

  17. BillyJoe says:

    Only one criticism.
    Why use “utilise” when you can simply use “use”?
    (Sorry, pet hate)

  18. WilliamLawrenceUtridge says:

    @Doc

    You’re comparing medicine with parapsychology, possibly the only other “scientific” field with less credibility than complementary and alternative medicine?

    Dr. Novella would doubtless be convinced by replicated evidence that converges on a finding, even for something as improbable as homeopathy. Decades of parapsychology (and CAM) research have never produced a conclusive set of positive findings – only an endless set of equivocal studies that fail replication and demands for “more research”. There is never a willingness to accept the fact that the phenomenon under investigation is actually non-existent. There is also never a willingness to admit just how little basis there is for any of these beliefs in the hundreds of years of research on physics, biology and chemistry. So if skeptics of all stripes regarding a variety of different and improbable subjects are unwilling to accept marginal phenomena that lack solid replication, that’s not an unreasonable position.

    Dr. Novella, like all people interested in medicine, wants money to fund effective treatments and meaningful research. Marginal results for the improbable treatments of homeopathy, acupuncture and the like simply don’t meet those criteria.

  19. You can find a bit more detail about this paper at http://www.dcscience.net/?p=3695

  20. I appreciate every example of how the placebo effect is more than just belief-powered symptom reduction. It cannot be spelled out enough.

  21. pmoran says:

    Mojo: In fact, there is evidence from a systematic review published earlier this year that the homoeopathic consultation process does not produce any greater beneficial effect than is produced by the consultation process involved with conventional treatments. See Nuhn T, Lüdtke R, Geraedts M. Placebo effect sizes in homeopathic compared to conventional drugs – a systematic review of randomised controlled trials. Homeopathy. 2010 Jan;99(1):76-82.

    This paper concluded that “[p]lacebo effects in RCTs on classical homeopathy did not appear to be larger than placebo effects in conventional medicine.”

    Hmmmh! This is the unexpected finding for me. Has anyone looked closely at this study?

    It contradicts many studies, including the present one, suggesting that any kind of extra attention favours more positive results within clinical trials. Is this not why we insist upon meticulously designed controls?

    Also, such studies are not a true reflection of placebo potential. Depending upon how the study was set up, and the conditions chosen for treatment, true placebo effects in the “placebo” arm may be swamped by spontaneous improvements in symptoms and reporting biases, and partially suppressed by patient uncertainty as to whether they are “supposed” to get better or not.

    There is a will-o’-the-wisp of truth somewhere in all this, but we have yet to grasp it. Our present tools are not quite up to the task.

    @ Gregory Goldmacher – Thanks so much for the rundown on biases and effects that make up the placebo effect. That was a very helpful clarification. It’s also good to know the specific name for each effect.

  23. JMB says:

    @Mojo

    Anyone with a reasonable understanding of the research would be able to identify that as a placebo.

    I think the placebo treatment was more commonly used in “MD” medicine in the ’50s and ’60s. I don’t know if there are any pharmacists following this discussion who could confirm whether prescriptions for sugar pills were more common in this time period or not. That is just my impression from working with doctors who were practicing at that time. When I trained in the late ’70s, I was under the impression that using placebo treatment was risky to the doctor’s license, so documentation was fairly extensive.

    I wouldn’t object so much to integrative medicine and EBM if they would just admit it is placebo effect. I also wouldn’t object to it so much if the cost of homeopathic remedies equaled the cost of sugar pills. Finally, integrative medicine ought to admit we should avoid those placebo treatments that make it difficult for doctors to keep a straight face (energy healing, Reiki).

    Somehow, I learned some guidelines for use, and how to maximize the effect (don’t worry, I don’t treat patients, so I won’t be using placebo effects). For example:

    1) make sure the illness is self limited and short duration
    2) make sure to show excitement about how well the placebo should work, and use medical speak to describe it
    3) wear your white coat with your stethoscope draped around your shoulders or neatly folded in your front pocket
    4) an intramuscular injection of saline will elicit a more reliable placebo response than a sugar pill, and a small or large pill is better than an average sized pill
    5) have colleagues follow you in so that they can witness the instructions to the patient, and the administration of the first dose
    6) administer the first dose there in the room, and wait with bated breath to witness its effect (of course, this would also identify those patients who would not respond to placebo, so a different plan would have to be implemented)
    Note: #5 & #6 are an amplification of the Hawthorne effect

    Now, I don’t advocate the return to the era of paternalistic medicine, when even dishonesty was acceptable if the desired effect was achieved (the ends justified the means). I also don’t think that it is worth taxpayers’ dollars to run RCTs on effects that have been known for a long time.

    I guess if the training of physicians is dumbed down, then there wouldn’t be the recognition that it is unethical (? integrative medicine = dumbed down medicine? answer, yes, if medical students are willing to spend $2500 for a special course in CAM).

    A few other corollaries about placebo effect that I think have been forgotten,

    1) you can still observe placebo effect in patients even if you tell them it is placebo before administration (you will reduce the percentage of patients that will report improvement, and the number of patients willing to try it; some will accuse you of implying that their complaints are all in their head)

    2) you often observe placebo effect before the medication has had time to circulate to the target organ (how many patients fall asleep before IV sedation has time to circulate to the brain, or report improvement in pain before a pain pill could be absorbed in the GI tract)

    3) fully trained physicians are susceptible to the placebo effect

    4) selection of the disease process and the patient can improve the likelihood of placebo success (the greater the emotional component in the disease process or illness, the greater the chance for effect)

    Others here might add to the corollaries on placebo effect.

  24. Composer99 says:

    Doc with a standard sCAM canard:

    I can appreciate that you are not a real scientist and probably just want to climb the ladder politically at Yale but I also wonder if you have connections to big pharma. Of course you won’t admit that but frankly most see you that way.

    Can’t beat ‘em with evidence, so rely on snide hogwash, argumentum ad hominem, and weasel words.

  25. @Doc

    If you think that Novella is not a “real scientist”, then perhaps I qualify?
    http://www.ucl.ac.uk/Pharmacology/dc.html

    My evaluation of the Brien et al paper was somewhat less generous than Novella’s. See http://www.dcscience.net/?p=3695

    There was no detectable effect of consultation vs no consultation on the primary outcome, and for the secondary outcome the effect was small (on the borderline of clinical significance) and it was inconsistent.

  26. Draal says:

    “Further – if we are going to advocate for increased money for provider time and attention, such resources should go to providers who are science-based and use modalities that have efficacy on their own.”

    Should hospitals then be allowed to charge additionally for the time a nurse spends with a patient? (I can imagine using RFIDs to track the time a nurse spends in a patient’s room.)

    There’s also potential for fraud, such as falsely billing insurance companies for extra time, akin to doctors who order more expensive and/or unnecessary tests than the average.

    I’ve seen that a PCP visit can either be short or very long, but the bill for the visit is the same. Should shorter visits then be less expensive? I’d love to spend $25 for a 5-minute sinus infection diagnosis and a script rather than the standard $60/visit, which covers up to a half hour.

  27. Dr Benway says:

    There’s also potential for fraud, such as falsely billing insurance companies for extra time, akin to doctors who order more expensive and/or unnecessary tests than the average.

    Although billing for time rather than procedures does put an upper limit upon the potential for fraud, given that a day has no more than 24 hours in it.

    I must be slow. It takes me about 5 min just to fill out a prescription, read it to the patient, hand it over to them, and then record it under the “plan” part of my SOAP note in the chart. If I have to look up interactions with other meds the patient might be taking, that slows me down even more.

  28. Draal says:

    “Although billing for time rather than procedures does put an upper limit upon the potential for fraud, given that a day has no more than 24 hours in it.”
    True. But criminals always find creative solutions.

    “It takes me about 5 min just to fill out a prescription”

    All my docs use computers. They pull up my chart, input the name and dosage and select electronic submit to the pharmacy (or print it out for signature). The script is eventually signed at some point but I don’t see it. The computers probably pull up the interactions immediately.

    This is the 2010s, man!

  29. pmoran says:

    Interesting thing — placebo performed much better than the homeopathic preparation in this study (P = 0.008; Effect Size = 0.52). This increases the likelihood that the other positive findings are due to all the noise within the clinical trial process.

    It does not change my point that such studies cannot fully demonstrate or refute the existence of clinically significant “true” placebo reactions.

    Every clinician will know how simple reassurance is enough to terminate some bothersome complaints.

  30. Badly Shaved Monkey says:

    “Dana” really does seem to have disappeared since being faced with a flurry of tricky questions.

    Let’s try a little magic to see whether we can get him back.

    “Dana Ullman”
    “Dana Ullman”
    DANA ULLMAN

  31. Badly Shaved Monkey says:

    Has it worked?
