The Rise of Placebo Medicine

It is my contention that terms such as “complementary and alternative medicine” and “integrative medicine” exist for two primary purposes. The first is marketing – they are an attempt at rebranding methods that do not meet the usual standards of unqualified “medicine”. The second is a very deliberate and often calculating attempt at creating a double standard.

We already have a standard of care within medicine, and although its application is imperfect, its principles are clear – the best available scientific evidence should be used to determine that medical interventions meet a minimum standard of safety and effectiveness. Regulations have largely (although also imperfectly) reflected that principle, as have academia, publishing standards, professional organizations, licensing boards, and product regulation.

With the creation of the new brand of medicine (CAM and integrative) came the opportunity to change the rules of science and medicine to create an alternative standard, one tailor-made for those modalities that do not meet existing scientific and even ethical standards for medicine. This manifests in many ways – the NCCAM was created so that these modalities would have an alternate standard for garnering federal dollars for research. Many states now have “health care freedom laws” which create a separate standard of care (actually an elimination of the standard of care) for self-proclaimed “alternative” practices.

But perhaps the most insidious and damaging double-standard that is being advocated under the banner of CAM is a separate standard of scientific research itself. The normal rules of research that have evolved over the last few centuries are being subtly altered or discarded, with clever newspeak. It is a way for proponents to choose their evidence, rather than having the evidence decide what works and what does not work. We saw this strategy at play with the recent acupuncture study for back pain that clearly showed acupuncture was no more effective than placebo acupuncture. Proponents (their claims propagated by an uncritical media) turned scientific logic on its head by interpreting this result as indicating that placebo acupuncture must work also (if only we could figure out how, they unconvincingly mused).

We see this strategy at work also with the use of so-called “pragmatic” studies – a rebranding of “unblinded” studies. This is a way to choose their evidence – in this case, poorly controlled unblinded studies that are more likely to reflect the bias of the researchers and therefore give them a result that they like. This is their reaction to well-designed placebo-controlled trials that show their preferred modality does not work.

Another strategy is to change the meaning of the concept of placebo effects. This one was ready-made, as most people grossly misunderstand the nature of “the” placebo effect. One of my first articles for SBM was about the placebo effect because this concept is so critical to science-based medicine. To summarize – the placebo effect is really many effects. It is everything other than a physiological response to the treatment. It is not all a real effect of mind-over-matter – it includes every bias and artifact of observation as well. It includes things like subjects reporting they feel better to the researcher because they want the treatment to work and they want to please the authority figure, who also wants the treatment to work and may be encouraging the perception of benefit.

It is most important to understand how the term “placebo effect” is used in the context of a controlled clinical trial. Scientific methodology is about controlling variables – because we want to know which variables work and which ones do not. In any clinical scenario there are a multitude of variables that may affect the outcome or the perception of the outcome. Therefore a well-designed study maximally controls all the variables – ideally so that the one variable of interest (the treatment) is completely isolated. This is accomplished in a number of ways. One method is randomization – randomly assigning subjects to the various treatment and placebo arms of a clinical trial. Randomization combined with sufficiently large trial size (number of subjects) results in all variables not specifically controlled for averaging out among the various arms. Another way to look at it is that randomization prevents systematic biases in who gets treated and who gets a placebo from affecting the results.
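
To make the “averaging out” concrete, here is a minimal simulation sketch in Python (the cohort size, the trait, and every number in it are invented purely for illustration, not taken from any real trial). It randomly assigns simulated subjects to two arms and shows that an unmeasured variable – call it baseline severity – ends up nearly balanced between the arms once the sample is reasonably large.

    import random
    import statistics

    random.seed(0)

    # Hypothetical cohort: each subject carries an unmeasured trait (baseline severity).
    severities = [random.gauss(50, 10) for _ in range(2000)]

    # Randomization: a coin flip assigns each subject to the treatment or placebo arm.
    treatment_arm, placebo_arm = [], []
    for severity in severities:
        (treatment_arm if random.random() < 0.5 else placebo_arm).append(severity)

    # With enough subjects, the uncontrolled variable averages out across the arms.
    print("mean severity, treatment arm:", round(statistics.mean(treatment_arm), 1))
    print("mean severity, placebo arm:  ", round(statistics.mean(placebo_arm), 1))

Run it with smaller samples or different seeds and the balance degrades, which is exactly why small trials are more vulnerable to chance imbalances between groups.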

Another method of controlling variables is the double-blind placebo control. Ideally one group of subjects will receive the treatment being studied while another group will receive a treatment that is identical in every way except that it is inert (i.e. it controls for all possible variables and isolates the one variable of interest – the treatment). Both the subject and the examiner are blinded to which is which to control for psychological effects. In order to conclude that the treatment “works”, those subjects receiving the active treatment must do statistically significantly better than those receiving the placebo. If the activity of the treatment was the only variable, then we can confidently conclude it was responsible for the improvement.
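
Extending the same toy sketch (again, every number is an invented assumption; this illustrates the logic of the comparison, not any particular trial): both arms share the same non-specific effects, so the only systematic difference between their average outcomes is the specific effect of the treatment, and the question is whether the observed difference is larger than chance alone would produce.

    import random
    import statistics

    random.seed(1)

    n = 200
    shared_effect = 10.0   # non-specific component: expectation, bias, regression to the mean...
    specific_effect = 5.0  # improvement attributable to the treatment itself

    # Simulated improvement scores; the non-specific component is common to both arms.
    placebo_arm = [random.gauss(shared_effect, 8) for _ in range(n)]
    active_arm = [random.gauss(shared_effect + specific_effect, 8) for _ in range(n)]

    diff = statistics.mean(active_arm) - statistics.mean(placebo_arm)

    # Rough two-sample t statistic, computed by hand to stay dependency-free
    # (in practice one would use a standard statistical package).
    se = (statistics.variance(active_arm) / n + statistics.variance(placebo_arm) / n) ** 0.5
    print("difference between arms:", round(diff, 2), " t ~", round(diff / se, 2))

With these made-up numbers the difference is clearly significant; set specific_effect to zero and, apart from chance fluctuations, the two arms become indistinguishable – which is precisely the result that placebo-medicine advocates try to spin as a success.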

I know this is all very basic, but it is these very basic concepts that are being challenged by proponents of so-called CAM. They are trying to say that the effect measured in the placebo arm of such studies is a real effect, something valuable that alone is sufficient to justify the treatment. This philosophy has been termed by critics “placebo medicine” and is just the latest attempt at creating a double standard. But the claim is utterly ignorant of the scientific nature of the placebo effect. It is a method of controlling for biases, artifacts, and variables (known and unknown) – it is not a real effect.

There may be some non-specific therapeutic effects mixed into placebo effects. For example, people who are being studied tend to take better care of themselves and are more compliant with treatments (because they are being watched). They may also feel better as a result of the positive attention from a health care provider – old-fashioned good bedside manner. These are some of the variables being controlled for. But it is scientifically absurd to argue that they justify an ineffective treatment. Yet that is exactly what CAM proponents are doing.

The latest manifestation of this strategy is a report put out in the UK by The King’s Fund – a health policy charity. They put together a committee to examine how the UK can find evidence to support CAM therapies. They are not interested in figuring out “if” such treatments work, but rather how they can show “that” they work. They report:

Explaining the need for different types of research when assessing complementary practice, Professor Dame Carol Black said: ‘It has become widely accepted that a stronger evidence base is needed if we are to reach a better understanding of complementary practices and ensure greater confidence in their clinical and cost effectiveness. The challenge is to develop methods of research that allow us to assess the value of an approach that seeks to integrate the physical intervention, the personal context in which it is given, and non-specific effects that together comprise a particular therapy.’

Got that? We need new kinds of research (read “double standard”) in order to demonstrate the value of these special CAM practices. The reason that we need to find new ways to demonstrate their value is because they fail under the accepted scientific methods. The last sentence is just a fancy way of saying that placebo effects should count as real effects.

It further says:

‘As long as findings from research can provide confidence in the positive effect of the physical intervention at the heart of the treatment, then any added benefit brought by the therapeutic relationship and the context for treatment should count as part of the treatment effect,’ the report says.

‘For complementary therapies such a holistic approach to effectiveness should be adopted by bodies such as NICE, when comparing cost-effectiveness across a range of treatments.’

The “physical intervention at the heart of the treatment” is functionally the same thing as – non-specific placebo effects. They want to take a “holistic” approach to evidence (another useful marketing brand), meaning they get to decide what the evidence means. George Orwell would be proud.

As usual, Edzard Ernst (the go-to expert for the media) gets it exactly right. He is quoted as saying:

‘This is the introduction of double standards through the back door.’

‘In this case we might as well allow an ineffective medication on the market, because it too will have a placebo effect.’

That latter point is a favorite of mine as well. Whenever CAM proponents try to change the rules of science to suit their needs, I invite my readers to imagine a pharmaceutical company getting away with the same thing (not that they wouldn’t want to if they could). Imagine a drug that works no better than placebo in well-designed clinical trials and the company trying to get FDA approval on the grounds that their drug has a valuable placebo effect, even if it is physiologically worthless.

Conclusion

The integrity of science-based medicine is critical to the health of the public, the legitimacy of modern medicine, and also the economic health of modern society (as is being forcefully argued recently). We need to have one scientific standard that is fair, rational, and scientifically sound. The creation of a double standard for proponents of modalities that do not meet the very reasonable standards of scientific medicine is eroding the standard of care and the integrity of modern medicine.

The public, the media, politicians, and regulators should not fall for the deceptive language that is being used to disguise the truth of these efforts to undermine science-based medicine. This is a brazen attempt at changing the rules of science to meet their perceived needs. When you change the rules of science you no longer have science – you have pseudoscience or something even more nefarious.


49 thoughts on “The Rise of Placebo Medicine”

  1. David Gorski says:

    Of course, the double standard doesn’t just apply to CAM. As I pointed out in my post about vertebroplasty, “allopaths” are all too often (and distressingly) willing to make the same sorts of arguments beloved of CAM advocates when arguing for “conventional” treatments when scientific studies and clinical trials show them to be no better than placebo.

  2. Michelle B says:

    The Camsters are embracing that once derogatory claim thrown at them–placebo–and making it their own. Rather clever.

    However, I still know many users of non-science-based medicine (that is, bad medicine) who are very insulted and become indignant when you state that their beloved ‘therapies’ are mostly placebo. They haven’t gotten with the program. They should thank me for the compliment.

    Great article, I learned a lot.

  3. DevoutCatalyst says:

    “…and they want to please the authority figure…”

    You must kiss the ass of your practitioner or face shunning. This whole CAM universe seems so pathetic, so carnival midway. What kind of human sets aside their own full adult potential to instead engage in some silly parlour game like homeopathy? The motivation to suck money out of people’s pockets I understand; it’s the stunting of another human’s intellectual growth — that of the patient — that gets my goat.

  4. twaza says:

    Steven

    You identified a central issue, the rebranding of pragmatic trials.

    The role of a pragmatic trial used to be to assess whether a treatment that had a specific effect shown in properly controlled RCTs still had that effect when used in real life, without the extra support provided by the research team and the filtering provided by the trial’s inclusion criteria. This sort of pragmatic trial provides evidence that the treatment is still effective in everyday practice.

    Now it seems the main use of a pragmatic trial is to assess treatments such as acupuncture and chiropractic that have been shown in properly controlled trials to have no important specific effect. This sort of pragmatic trial can only show the size of the placebo and bias effects. (I think you would include in placebo effects biases such as “see what you want to see”, and “say what you think they want you to say”. I would exclude them from placebo effects because they are not effects you would want to exploit.)

    There is another issue that is seldom noticed in these discussions. Science-based medicine has paid little attention to understanding the components of the placebo effect, and provides little guidance on how evidence-based practitioners can exploit these effects in their practice. Until it does this, CAMsters will have a competitive advantage.

  5. Regarding bias effects vs placebo effects – the problem with that distinction is that the overwhelming majority of clinical trials are not designed to separate them out as distinct variables. They are all lumped together as “the” placebo effect.

    It is useful to separate out those parts of the placebo effect that result from the therapeutic relationship, as opposed to the mechanics of clinical trials. And also to see how to maximize any useful component of placebo effects.

    But generally this falls under good medical practice and the psychology of medicine. This includes good communication skills, having a positive attitude without giving false hope, validating patient’s concerns, considering compliance and taking an overall biopsychosocial approach to treating patients.

    We don’t need to measure these effects for each and every intervention – and they certainly cannot and should not be used to justify any specific intervention.

    The goal of ethical and scientific medicine should be to provide interventions which are actually safe and work in a way that maximizes their utility and any non-specific therapeutic effects of the care given.

  6. jmm says:

    There are obviously many components to “the” placebo effect. Is it a scientifically studied and settled question whether there is indeed no significant component of mind over matter, as you claim? Specifically in those cases, e.g. pain and certain psychiatric conditions, which show both extremely large placebo effects and for which “anticipation” has plausible physiological mechanisms of action.

    I would be very grateful if you could point me to the primary evidence answering this question. If anticipation is a significant component, and so parts of the placebo effect are therefore “real”, then this does raise many questions regarding effective treatment.

  7. I did not say there is no significant contribution from mind-over-matter effects. I said we cannot assume that this is a significant portion of the effect.

    This is a complex question, as it is different for each symptom and endpoint. Pain is generally the most subject to such effects. The measured effect is about 30%, and this may be largely due to an actual decrease in pain from purely psychological effects.
    Death from cancer is at the other end of the spectrum, and there is probably little or no contribution from real mental effects.

    To clarify – my point was that most studies are not even designed to separate out these various components of placebo effects and it is not reasonable to assume that mental effects are dominant.

  8. trrll says:

    The magnitude of the “power-of-suggestion” placebo effect is a matter of debate. Meta-analyses that have looked at studies that had both placebo and no-treatment arms found very little evidence of placebo effects for treatment of conditions other than pain. See, e.g.

    Hróbjartsson, A., and Gøtzsche, P.C. (2003) Placebo treatment versus no treatment. Cochrane Database Syst Rev : CD003974.

  9. twaza says:

    Steven

    Thanks for the clear reply. I completely agree that good medical practice includes “good communication skills, having a positive attitude without giving false hope, validating patient’s concerns, considering compliance and taking an overall biopsychosocial approach to treating patients”.

    However, is there evidence on how often and how well “good medical practice” is practised? I guess it would be hard to do the studies that would give meaningful answers to my question. (Most patient satisfaction surveys are pretty meaningless.)

    I suspect that the reason many people turn to CAM is because they feel that the medical practice they have experienced isn’t that good.

    My point is that the problem of CAM has a supply side and a demand side. We should try to reduce the supply of CAM and, simultaneously reduce the demand for it.

    jmm

    Placebo effects are complex and varied, so it is not possible to give one or two pointers to the primary evidence. There is a famous (or infamous, depending on which side you are on) systematic review that asked “Is the placebo powerless?”, and concluded that the answer is Yes. Here is the link to the Medline record and abstract: http://www.ncbi.nlm.nih.gov/pubmed/15257721.

    An excellent book with a comprehensive and thoughtful discussion of the primary evidence is Daniel Moerman’s Meaning, Medicine and the ‘Placebo Effect’.

  10. twaza says:

    Steven

    There are people who would disagree that “Death from cancer is at the other end of the spectrum, and there is probably little or no contribution from real mental effects”.

    See Applying Evidence to Support Ethical Decisions: Is the Placebo Really Powerless? Franz Porzsolt et al. Science and Engineering Ethics (2004) 10, 119-132.

    I don’t know enough to make my mind up yet about this.

  11. David Gorski says:

    Steven is correct; there have been recent studies that have found no difference in survival depending upon attitude, psychotherapy, etc. That’s not to say that having a good attitude or perhaps undergoing psychotherapy doesn’t help cancer patients cope with their symptoms, but it does not prolong their survival.

  12. daedalus2u says:

    jmm, I have written about what I think is the physiology behind the physiological placebo effect. I see it relating to the normal neurogenic allocation of resources between long term processes such as healing and more immediate needs such as running from a bear.

    http://daedalus2u.blogspot.com/2007/04/placebo-and-nocebo-effects.html

    I have a pretty extreme view of what I consider to be the physiological placebo effect, but I don’t disagree with anything that Dr Novella has said in this piece, and I don’t think that the physiological placebo effect can be characterized as “mind over matter”.

    I think the term “mind over matter” is unfortunate. Is the normal neurogenic control of the movement of voluntary muscles an instance of “mind over matter”? When I move my arm am I doing “mind over matter”? When I practice stress reduction and my blood pressure goes down, is this “mind over matter”? When my autonomic nervous system controls my gut to digest food, am I doing “mind over matter”?

    Actual healing can’t occur in very short periods of time. For tissues to repair themselves they need to synthesize proteins, and perform other metabolic functions that require nutrients and take time. Perceived changes that occur essentially instantaneously are (virtually certainly) due to changes in perceptions and not due to physical changes in the health of tissue compartments. A 5-minute acute treatment that makes a lower back feel better has not increased bone density and resolved osteoporosis during that time. A 5-minute treatment that did substantially increase bone density would be extraordinary, would be “mind over matter”, and would violate conservation of mass and a number of other well-established physical principles (or require action at a distance). There is no evidence that such things have ever happened.

    We know that there is neurogenic control of many aspects of body movement and of body physiology. Most every organ and tissue compartment has nerves going to and from it. Exactly what those nerves are doing or not doing is not well understood. Presumably they are doing something or they would not have persisted there over evolutionary time.

    Healing is local and is mediated by individual cells dividing and producing whatever it is they need to produce healing in that tissue compartment. Cells are not controlled individually by the nervous system. Individual muscle cells are controlled by individual nerve cells, but only in assemblies of large numbers; whole muscles are activated, not individual muscle cells. Most immune cells are free floating and are not attached to the nervous system, so any nervous system effects on them can’t be individualized but must be global. When organs are transplanted all nerves running to and from them are severed. They can grow back to some extent, but for the most part the organs function without any nervous system input.

    I don’t think that “anticipation” plays that much of a role in the physiological placebo effect. As I understand it, anticipation would only have effects for a short time, a few minutes or tens of minutes. Significant healing can’t occur during such a short period of time. Any enhanced healing state that the placebo treatment invokes must persist for the duration of the healing experience for that healing experience to be enhanced.

    If there is a physiological placebo effect (I think there is), that physiology can be studied just like any other aspect of physiology can be studied (I think it should be). I think the problem that many researchers have with studying the physiological placebo effect doesn’t have to do with science, but rather with politics and with the lumping of everything that causes placebo treatments (treatments without an active pharmacological or surgical basis) to have perceived positive health effects, including bias and error, into a single “placebo effect”.

    Studying bias and error to try and increase the bias and error in patients so that patients will have a more positive health treatment experience is completely wrong, but that is the approach that CAM takes.

    Regarding cancer, I don’t think that what I consider to be the physiological placebo effect would have positive effects on cancer, or rather I can’t think of a mechanism by which it would. For many other conditions it would have positive effects but probably not cancer. Lumping different conditions together to study the physiological placebo effect is a poor way to study it because for some the physiological placebo effect may have no positive effects (it can even have adverse effects as with the nausea example in my blog post).

  13. “My point is that the problem of CAM has a supply side and a demand side. We should try to reduce the supply of CAM and, simultaneously reduce the demand for it.”

    Demand drives supply, and space abhors a vacuum.

    Attacking supply didn’t work for prohibition and it isn’t working for the war on drugs.

    The more you reduce supply, the more compelling it is for both new suppliers to enter the market and for existing suppliers to remain.

    If you can find a way to eliminate or drastically reduce demand, suppliers will leave the market for more profitable ventures or wither on the vine and fail.

  14. Pliny-the-in-Between says:

    This must come to a head when discussing national reform efforts such as the medical home. Anyone who receives payment for primary care services, for example, should be held to an identical standard. And it should be the highest we have. To do otherwise is unfair and dangerous.

  15. daijiyobu says:

    I get a naturopathic journal [pauses until the crowd stops giggling] and here’s a quote from President Bernhardt of CCNM, the ND school in Toronto:

    “note that we are studying naturopathic treatment [...] because the studies [...] randomized clinical trails [...were] focused on whole treatment they could not be conducted in a double- or even a single-blind fashion, but they were conducted with rigorous methodology” [NDNR 2009-08, p.24].

    Hmmmm.

    Sounds like a riddle: what’s rigorous while uncontrolled, what offers evidence of support but we’re not quite sure what part of what we studied did what we think happened?

    -r.c.

  16. qetzal says:

    Imagine a drug that works no better than placebo in well-designed clinical trials and the company trying to get FDA approval on the grounds that their drug has a valuable placebo effect, even if it is physiologically worthless.

    Quite a few years ago, I worked for a biotech that was trying to develop a very novel human therapeutic. The product advanced to Phase II trials in peripheral arterial disease, which is another condition with a very high placebo effect. (Probably that’s because the standard endpoint for treatment is how far the patient can walk before they decide it hurts too much.)

    The Phase II was a double blind, randomized comparison between the active treatment and “vehicle,” where the latter consisted of all the formulating agents without the (postulated) active ingredient. At the end of the study, there was no difference between the active and vehicle groups. However, both groups showed relatively large increases in walking time compared to baseline.

    You can probably guess where this is going. The management decided the apparent increase from baseline was ‘too large’ to be placebo effect, and concluded that one of the main ingredients in the vehicle must have been responsible for the observed improvement. They then proceeded to conduct another Phase II comparing that vehicle component to a ‘real’ placebo. Needless to say, there was no difference. It was all placebo effect, all along.

    The company no longer exists. To this day, I still don’t understand what they were thinking when they decided the vehicle must be active. (I wasn’t with the company by that time.) But, to their credit, at least they went ahead to conduct the second Phase II to try to prove their contention, and they admitted their failure when that trial showed no benefit over ‘real’ placebo.

  17. jmm says:

    Thanks for your replies and literature pointers. It seems to me that, at least for pain, although more science is definitely needed, it is quite likely that for at least some patient subsets there may be a significant physiological component to the placebo effect. If this turns out to be true, then I see no reason ethical medical practice should not exploit that fact to provide a genuine reduction in pain, so long as the placebo-based practice does not involve risk. Research protocols could identify suitable patient subsets for this. This would indeed be “placebo medicine”, based on sound scientific principles.

  18. daedalus2u says:

    jmm, I disagree with you. If there is a physiological placebo effect, and that effect can be reliably implemented with little or no risk, then adding that placebo effect to whatever the actual treatment is should be the “standard of care”. The problem is when those giving out placebos do so instead of an effective treatment. Most harm from the placebo treatments that Dr Novella describes is due not to side effects or the cost of the placebo treatment, but to the natural course of the disease, which is inadequately treated by placebo. The effects of placebos can be very idiosyncratic. What works for someone might not work for someone else, so coming up with a “standard placebo” protocol that works for everyone will be difficult if not impossible.

    There are treatments that I consider to be purely placebo, for example psychotherapy. That is, treatment without a pharmacologically or surgically active component, so if it is effective, the results are via a placebo-type mechanism (whatever that might be). I appreciate that psychotherapists don’t like their treatments to be characterized as “placebos” because to many people “placebo” means perceived positive effects due to bias and error, not something “real”.

  19. I disagree that demand always drives supply. I think that often marketing drives demand to meet a supply. CAM is all about marketing.

    And – most people think that demand for CAM is driven by dissatisfaction with mainstream medicine, but that is not supported by evidence. The best survey on this question (http://jama.ama-assn.org/cgi/content/abstract/279/19/1548) showed that dissatisfaction did not correlate at all with CAM use. What did was being ideologically favorable to CAM – which a cynical person might interpret as having fallen for the marketing.

  20. jmm says:

    daedalus2u, that assumes that an effective treatment exists for the condition in question. There are many conditions for which this is not true. In this case, the best that can be done may be to aim for maximum physiological placebo, even perhaps outsourcing to CAM if they are better at it.

    Actually, I think that is an interesting question. Has anyone studied, in a case like pain where physiological placebo may be important, whether CAM placebo, with the greater time spent with the patient etc., does better or worse than sugar-pill placebo administered by a doctor? That should be answerable.

  21. pmoran says:

    Steven, I have never, ever heard a CAM promoter use the term “placebo medicine”. The last thing most of them want is to be understood in this way.

    I am fairly sure that I was one of the first to use this phrase when referring to CAM, on these very pages, to express the generalisation that CAM works, to the extent that it works, as placebo. David Gorski recently adopted the expression.

  22. Steven, I agree with you to an extent about marketing. I even considered that while composing my comment.

    But the marketing driven demand is still driving or supporting supply. If the marketing fails or is ineffective, and the demand can’t be sustained, suppliers will eventually leave the market or go broke.

    My main point was that if you want to control a problem, you have a better chance of success if you focus on reducing or eliminating demand than if you try to reduce or eliminate supply.

    However, reducing or eliminating the marketing or using more effective counter-marketing (effectively delivering better information to consumers) may help in reducing demand (thus forcing suppliers out of the market), especially if the product or service has no real intrinsic value (as in CAM).

    Also consider that a large portion of marketing is not about driving new demand or creating new customers, but about shifting existing demand to your supply. (Market poaching: car manufacturers advertise mostly to get you to buy their cars instead of their competitors’ cars rather than to, say, convince subway riders to buy cars instead.)

  23. Peter – I never said that CAM proponents are selling their practices as “placebo medicine,” and I use that term as an appropriately derogatory term as you and David meant it to be used. When I wrote “has been termed” I did not mean to imply by CAM proponents – but I can see how that was ambiguous. (I modified my original post and added “by critics” to be more clear.)

    My point is that they are using placebo effects to justify some of their practices, but this makes no scientific sense.

    Actually, I think they want it both ways. They will do what they can to argue that their methods work, but when science shows they are no better than placebo, well then that just means they work through placebo effects, which should count also.

    Heads I win, tails I win.

  24. Diane Jacobs says:

    “My point is that they are using placebo effects to justify some of their practices, but this makes no scientific sense.”

    I think they (CAM proponents) use placebo response (which is scientifically real and physiological and undeniable), whether unwittingly or deliberately but definitely in an unstudied way, to justify their a-, anti-, pre- and pseudoscientific treatment constructs. I think that’s a misuse of something that ordinary people could be taught to harness, instead.

    Diane Jacobs

  25. daedalus2u says:

    Karl, that is exactly my thinking too. The best way to rid the market of CAM placebo based treatments is with something better. I think that something better is SBM based treatments that trigger the physiological placebo effect pharmacologically. I appreciate that this is a seemingly contradictory concept.

    If placebos actually do something, then there is a real physiological placebo effect. That real physiological placebo effect is mediated through physiology, and so is amenable to manipulation via pharmacological means. If we can understand the physiology behind the physiological placebo effect and are then able to invoke it pharmacologically, that pharmacologically invoked placebo effect will (no doubt) be more effective than any psychologically invoked placebo effect.

    CAM products work only through a placebo effect mediated through the psychological triggering of the placebo effect by whatever woo the CAM practitioners use. The psychological effects of that woo are idiosyncratic. A physiologically mediated placebo effect should be much less idiosyncratic (and may work on unconscious individuals).

  26. Newcoaster says:

    Great post Steven. sCAM uses language and marketing far more effectively than we do. It’s going up in the doctor’s lounge.

    However, I do disagree with you when you state that the placebo effect is “everything other than a physiological response to the treatment.”
    There is evidence of specific physiological effects, mostly demonstrated with the pain response, where endogenous opiates are produced which in turn can be blocked with naloxone.

    Acupuncture is the mother of all placebo stimuli… sticking needles into flesh does get one’s attention, but I think it does have a physiological effect in addition to the other non-specific effects you described.

  27. Diane Jacobs says:

    I continue to struggle with the frame placed around the concept “placebo.”

    How do bloggers and commentators here feel about effects from interventions such as mirror therapy? Is it “just” placebo, and therefore discountable? Effects are certainly not from anything medical or surgical or pharmaceutical – they are strictly perceptual, and strictly with conscious aware patients, fully informed. Please see Mirror Therapy for Chronic Complex Regional Pain Syndrome Type 1 and Stroke.
    Is this kind of “placebo” treatment (and result) “good” or “bad”?

    Or how about this? Psychologically induced cooling of a specific body part caused by the illusory ownership of an artificial counterpart. Is induction of kinesthetic illusion classified as “merely” placeboic? Would that sort of elicitation of placebo effect, or response, be good or bad, ethical or unethical?

    Here is a film (featuring Moseley, primary author of the article) called Body Identity, from Aus, which is about the investigation of kinesthetic distortions hardwired into the brain and transitory ones elicited by inducing perceptual illusion.

    It looks to me like there are some pretty decent applications in all this. None of the methods used to gain a placebo effect are medical or surgical or pharmaceutical (except for the potential placebo effect, i.e., psychological relief, to be gained from surgical amputation in the case of apotemnophilia). Instead effects (placebo?) are induced by manipulating kinesthetic awareness and perception. Is this potentially useful therapy? It’s not CAM (in my book) in that there are no implausible treatment concepts to deconstruct here. Nor is it by definition “medicine.” Yet it’s “scientific”…

    As a PT I’m excited by the non-medical, non-surgical, non-pharmaceutical therapeutic possibilities in all this. Does that make me a sCAMster?

    Diane Jacobs

  28. jmm says:

    daedalus2u, maybe some future pharmacological manipulation of the physiological effect will be more effective than psychological manipulation, but maybe not, and maybe it will also come with less pleasant side effects. Drug responses can also be highly idiosyncratic, and it wouldn’t surprise me if they were even more so than psychological manipulations. In the meantime, I agree with newcoaster that acupuncture is the mother of all placebo stimuli, so since it is relatively harmless, why not use it, knowingly as an effective placebo, for conditions for which no more effective treatment exists?

  29. twaza says:

    Steve, thanks for the pointer to the JAMA paper, which is followed by a long list of interesting papers that have cited it.

    I don’t like to be picky, but: when you said “I disagree that demand always drives supply”, I think you meant that the root cause is not innate demand, but demand artificially stimulated by marketing.

    The paper you cited, and the papers that have cited it seem from the titles to provide evidence that people with a certain outlook on life are predisposed to CAM. And, they would be most susceptible to being exploited by sCAMsters.

    So, yes, I agree that to tackle demand, we need to tackle marketing.

  30. Scott says:

    How do bloggers and commentators here feel about effects from interventions such as mirror therapy? Is it “just” placebo, and therefore discountable? Effects are certainly not from anything medical or surgical or pharmaceutical – they are strictly perceptual, and strictly with conscious aware patients, fully informed. Please see Mirror Therapy for Chronic Complex Regional Pain Syndrome Type 1 and Stroke.
    Is this kind of “placebo” treatment (and result) “good” or “bad”?

    Speaking for myself, I don’t think whether or not it’s “placebo” is a relevant question with respect to whether or not it’s an appropriate treatment. The more important question is whether or not the patient is fully informed, and in particular whether whatever effect it has still works if the patient is fully informed. (That is, beyond the normal questions of safety, efficacy, and cost.)

    So for mirror therapy, the patients are fully informed, it provides significant benefits, and I would strongly suspect that the risks and costs are small (though the link provided does not address this point). Sounds like a good treatment to me.

  31. Newcoaster – this point requires further clarification. From the point of view of how placebo effects are measured in clinical trials (i.e. their operational definition) they include everything but a physiological response to the treatment – that is the variable to be isolated.

    What you are referring to are physiological responses to the act of being treated, or belief in the treatment, or the therapeutic interaction with the practitioner. I did not mean to imply that these do not involve a physiological component.

    These are best understood in, and probably most relevant to, pain (likely because of natural endorphins). Second most relevant is stress and stress-related disease, like heart attacks.

    I have no problem with psychological interventions for pain and relaxation/stress reduction interventions for heart health and other stress-related disorders. I would not consider these “placebo” effects – but the lines can easily be blurred to create confusion.

    In fact much of the confusion here is due to confusing the operational definition of “placebo” in a clinical trial with all psychology/stress related interventions and effects.

  32. daedalus2u says:

    If we are going to use the term “placebo”, we need to have a consistent definition for it. The definition that I use is a therapeutic treatment that does not have a pharmacological or surgical mechanism.

    Dr Novella, could you give us a clear definition of “placebo” the way you are using it? Stress reduction is not a pharmacological or surgical intervention. What about stress reduction makes it not a placebo?

    If you are going to allow the definition of “placebo” to be morphed to not include stress reduction simply because stress reduction is actually effective and produces physiological effects, then you have to allow acupuncturists to say that sham acupuncture works too, simply because it works as well as acupuncture and slightly better than doing nothing. Stress reduction works better than doing nothing too.

  33. Diane Jacobs says:

    “If we are going to use the term “placebo”, we need to have a consistent definition for it. The definition that I use is a therapeutic treatment that does not have a pharmacological or surgical mechanism.”

    Thank you for that, daedalus; my thoughts are the same: “placebo” pretty much explains and accounts for everything I do as a PT – cognitive behavioral therapy, manual treatment, anything electrical, etc. This means that everything a PT does, from asking enfeebled patients to strengthen themselves through graded exposure and the sophisticated hand-holding that goes with it, to all the skilled manual therapy (minus all the chiro treatment concepts) to reduce pain and increase functional pain-free movement in the short term, is placebo in its effects – effects for which measuring tools exist, measurements that can become part of the evidence base and are science-based. So, where does that leave placebo itself as a concept – is placebo response, in general, a good thing or a bad thing? I think the term itself could use a good dust-off and update, maybe a “rehab” of the definition, especially in the light of neuroscience/pain science.

    “Stress reduction is not a pharmacological or surgical intervention. What about stress reduction makes it not a placebo?”

    I wonder too. In a patient situation pain and stress are all conflated together neurologically. Not much can happen to relieve pain endogenously until stressors are reduced. Stress reduction (especially stress around pain) doesn’t happen until pain can be relieved, usually through pharmacology, but sometimes just through education around the fact that non-medical pain, although bothersome, is not dangerous, i.e., “hurt does not equal harm.” Reassurance, in other words, leading to more placebo response, which is in the brain, and which has to do with awareness in this case, less confusion and worry, less “stress” on one level leading to less “pain” on another. If this can be conveyed appropriately, right there is (at least IMO) an ethical use of placebo effect, i.e., eliciting a placebo response, without ever having to use the term itself in front of a patient because its public meaning is still so murky.

    Diane

  34. daedalus2u says:

    I would include electrical and manual interventions in non-placebos, analogous to surgery, provided there is a nexus of physiology connecting the intervention to the anticipated result (but this is tricky). When there is no nexus of physiology, as in reiki and chiropractic, then it is a placebo.

  35. Diane Jacobs says:

    “I would include electrical and manual interventions in non-placebos, analogous to surgery, provided there is a nexus of physiology connecting the intervention to the anticipated result (but this is tricky). When there is no nexus of physiology, as in reiki and chiropractic, then it is a placebo.”

    Beyond ordinary exteroception, combined with attention, and whatever else goes on inside the brain in those moments, I know of no “nexus of physiology” other than placebo response, for pain downregulation at least.

  36. twaza says:

    I am depressed. I have just finished reading the Kings Fund report “Assessing complementary practice: building consensus on appropriate research methods”, which provoked Steven’s post.

    I was depressed by the patronising (should that be matronising?) tone of the report.

    I was depressed by the amount of pure nonsense in the report. For example, what does this sentence mean?

    There are no straightforward or right or wrong research methods for complementary practice.

    What is the report trying to say? The report tells us that 200 conference participants reached consensus on this nonsense.

    And I was depressed by the misuse of sciencey terms. One of the things that I learned quite quickly about CAMsters is that they love to use sciencey terms to give the impression that they know more than they really do.

    Steven has addressed the rebranding of pragmatic (effectiveness) trials, so I won’t discuss this clever rhetorical device, but will discuss the use of a couple of other sciencey terms, before explaining the major source of my depression, the report’s misunderstanding of what pragmatic trials measure.

    The report mentions “signal to noise ratio” twice, and manages to convey two or three wrong meanings of the term. The true meaning of the term is the ratio between a signal (meaningful information) and the background noise. Wikipedia explains it nicely.

    In 32 pages, the report mentions regression to the mean and temporal changes (i.e. the natural history of the disease) 10 times. It mentions the word bias zero times. The report gives the impression that they have just discovered these effects and would like you to be impressed that they know how important it is not to be caught out by them. This is basic stuff for science-based medicine, but really important for CAM because it explains why testimonials and case reports cannot provide useful evidence. However, they don’t say this, so a CAMster could be forgiven for not getting this inconvenient piece of information.

    There is not the glimmer of a hint of understanding that these two phenomena (regression to the mean and temporal changes) are biases that cause systematic distortions in outcome measurements. There is even less evidence that they understand that regression to the mean and temporal effects are just two of many other biases. (Wikipedia has a long and incomplete list.)

    Because the authors of the report are blind to the existence and risk of bias in outcome measurements, they wrongly assume that a pragmatic trial, or, as they call it, an effectiveness study provides bias-free outcome measurements. To show why this is important, I need to explain what a controlled trial measures.

    All outcome measurements in a controlled trial are the result of 4 types of effect:

    1. The specific effect of the intervention (if the intervention is a well chosen placebo, the specific effect is zero).

    2. The non-specific (or placebo) effects of the intervention.

    3. Biases (systematic errors in the measurement)

    4. Natural variation.

    We assess natural variation by making the outcome measurement in a number of people; the more people in the study, the more accurately we can assess the size and character of the natural variation. Statistically this is usually expressed as a 95% confidence interval. The other three effects are expressed as measures of central tendency.

    A placebo-controlled trial allows us to assess the specific effect of the test intervention. It is the difference between the outcome measurement in the test and placebo groups. The non-specific (placebo) and bias effects on the outcome measurement are the same in the two groups, and cancel out when you take the difference. The specific effect in the placebo group is, by definition, negligible. So, the difference is exactly the specific effect of the intervention.

    A pragmatic or effectiveness study compares an intervention with usual care, or no treatment. In this situation the two groups are likely to have different specific effects, different non-specific (placebo) effects, and different bias effects. So, the difference in outcome measurements includes the difference in specific effects, the difference in non-specific (placebo) effects, and the difference in bias effects.
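
    A back-of-the-envelope sketch of that arithmetic (the effect sizes below are invented solely to illustrate the logic, and the comparator arm is simplified to receive neither the non-specific nor the bias component): the subtraction in a placebo-controlled comparison strips out the non-specific and bias effects, while a pragmatic-style comparison keeps them.

        # Illustrative decomposition of an outcome measurement (all numbers are made up).
        specific = 2.0       # specific effect of the intervention
        nonspecific = 5.0    # non-specific (placebo) effects
        bias = 3.0           # systematic measurement error ("say what they want to hear", etc.)

        treated = specific + nonspecific + bias
        placebo = 0.0 + nonspecific + bias  # sham arm shares everything except the specific effect
        untreated = 0.0                     # simplified no-treatment comparator

        print("placebo-controlled estimate:", treated - placebo)    # 2.0  -> specific effect only
        print("pragmatic-style estimate:   ", treated - untreated)  # 10.0 -> specific + non-specific + bias

    Nothing in the pragmatic-style number tells you how much of it is the specific effect; you need the placebo-controlled subtraction, or other information, to apportion it.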

    Edzard Ernst has a paper with a graphic that explains this very well.

    For a theatrical intervention such as acupuncture or chiropractic, the risk of biases such as “see what you want to see” and “say what you think the researchers want you to say” is not negligible. (Cognitive scientists have sciencey sounding jargon for my slightly pejorative terms.)

    So a pragmatic (effectiveness) study measures specific effect + non-specific effect + bias effect. And you have to use other information to estimate the sizes of each of these effects. It is cavalier and wrong to assume that bias effects are unimportant. There is good reason to believe that they are the most important component of outcome measurements for theatrical CAM interventions.

    This mistake, this blindness to the importance of bias, undermines the whole report.

    An alternative explanation of the report’s approach is that they include bias effects with placebo effects. If they do, this is woolly thinking. A pure placebo effect is something you would want to exploit, and you would be willing to pay for it. You would not want to exploit or pay for an impure placebo effect that includes “see what you want to see” and “say what you think the researchers want you to say” biases.

    Pragmatic trials have an important place in science-based medicine. When randomized controlled trials have shown that an intervention can have a useful effect under trial circumstances, the next question is “does it work in usual practice?”. A pragmatic trial can provide good evidence that the intervention does or doesn’t have a useful effect, provided it builds on previous evidence that the treatment can have a useful specific effect. But, when placebo-controlled trials show that a CAM has no useful specific effect (e.g. acupuncture), a pragmatic (effectiveness) trial measures the size of the placebo plus bias effects. And we have to use other information to assess which of the two effects is most important.

  37. twaza says:

    Sorry to go on, but I need to add two more remarks.

    Ligation of the internal mammary artery to treat angina passed the pragmatic (effectiveness) test, but failed the placebo-controlled trial test. Surgeons don’t practice the operation any more.

    Acupuncture for back pain passes the pragmatic trial test, but fails the placebo-controlled trial test. It is still recommended.

    Why the double standard?

    Another thing that was very depressing in the report was the repeated call for more research. But there is no mention of the need to take account of the research that has already been done. Do they not know why systematic reviews are important? Or, do they know that systematic reviews might provide inconvenient answers?

  38. daedalus2u says:

    Diane, for something like massage, there is increased flow of extravascular fluid. Virtually all cells get glucose from the extravascular fluid and not directly from the blood. If pain is due to those cells getting insufficient glucose due to insufficient extravascular fluid flow, then massage may have a non-placebo effect via improved circulation. I suspect that some of the pain of fibromyalgia can be due to this. Massage may also have a placebo effect through stress reduction and neurogenic triggering of improved circulation in places other than where the massage occurred. Sorting these two out is difficult but important in determining the details of how the intervention is done. Manipulations to increase lymph flow are likely different than manipulations to invoke systemic stress reduction. A gentle caress might produce stress reduction and systemic effects while having no direct effect on lymph flow.

    twaza, you are leaving out the case where a placebo has specific effects, for example psychotherapy. If a well chosen placebo has zero specific effects, how do you know if a placebo is well chosen or not? Is acupuncture with toothpicks a well chosen placebo or is it an effective treatment? There is no way to tell by doing studies only involving acupuncture and toothpicks. You have to understand the detailed physiology behind both interventions to know if one or the other is a placebo or not. Doing clinical trials while not understanding the physiology behind acupuncture is like trying to test pills without knowing what compounds they contain and how they interact with physiology. Are they pills with real pharmacologically active compounds or are they simply pills with pharmacologically inert compounds? This is like what Dr Hall calls tooth fairy science. If you don’t know if the pills contain anything active, you can’t tell by comparing them to something else you don’t know is active.

  39. twaza says:

    daedalus2u, what one means by placebo is a matter of definition. There is no consensus on the definition, perhaps because articulating a definition that does not lead to conceptual problems is difficult, and cleverer people than me have given up.

    By my definition, a placebo has no specific effect, and placebo effects are separate from biases. Steven’s definition of placebo effects includes biases. I tried to explain why this is inconvenient. Your definition allows placebos to have specific effects. I think this is also inconvenient.

    I would distinguish between the concept of placebo, and the actual intervention used as placebo. If the chosen intervention has a specific effect, then it is not a very good placebo. For example, drugs to treat skin conditions are often incorporated in a cream or ointment, called the base. A controlled trial to assess the effect of the drug would typically use the base as the placebo. But the base may soothe itches and relieve dryness. It would then not be a good placebo for that experiment if the drug was to treat dry, itchy skin.

    The King’s Fund report says

    This is a word that is imbued with meaning from its use in the field of pharmacological and traditional research. In this context the placebo effect is generally seen as a non-specific effect, unrelated to the treatment, which should be ‘subtracted’ from the overall treatment effect in order to assess the effectiveness of the intervention under scrutiny. This is entirely valid. However, the placebo effect can also be considered in broader terms and seen rather as the contextual effect, reflecting the contribution that the context for the intervention (the physical setting and the therapeutic relationship) makes to its effect.

    I think what the Kings Fund report is trying to say is that, if you intentionally use certain causes of placebo effects, the placebo effects become specific effects.

    Investigating the placebo effect is really tricky, because you can’t do a placebo controlled trial of a placebo to isolate the placebo effect. The best experiments isolate components of the cause of the placebo effect, or look for some kind of dose-response relationship.

    I must confess I don’t know what would be a suitable placebo for a placebo controlled trial of talking therapies. A work-around might be to consider talking therapies to be one category of cause of placebo effects.

    I think it is useful, but not necessary, to know how an intervention works when you assess the results of a trial. There are many drugs that work for reasons we do not fully understand, but are accepted as effective.

    When you are looking for a new treatment, you would select the most plausible candidate to test. Plausibility comes from basic science exploring mechanisms of action. If a rigorous-seeming trial finds that a treatment works, the results should be treated with skepticism inversely proportional to the plausibility of the underlying mechanisms and the degree of corroboration by independent studies.

    I don’t think that we disagree on anything other than language, and what definitions would be convenient for scientific investigation.

  40. pmoran says:

    “By my definition, a placebo has no specific effect, and placebo effects are separate from biases. Steven’s definition of placebo effects includes biases. I tried to explain why this is inconvenient. Your definition allows placebos to have specific effects. I think this is also inconvenient.”

    I agree. We still have a muddle, though. Why accept truly beneficial outcomes as an “effect” of placebo, but not the biased reporting of benefits that occurs when the patient believes they are receiving a powerful treatment from the nice doctor?

    For starters we should probably start talking about placebo (patient) “responses” or “reactions” rather than “effects” so as to home in upon a definition within which the placebo itself “does” nothing.

  41. pmoran says:

    For the moment I have been reduced to using clumsy phrases like “non-specific beneficial effects of medical interactions including placebo responses.” It recognises that benefits may derive from either the supposedly therapeutic activity employed, or from peripheral matters such as explanation and reassurance, and the power of suggestion.

  42. Diane Jacobs says:

    “For starters we should probably start talking about placebo (patient) “responses” or “reactions” rather than “effects” so as to home in upon a definition within which the placebo itself “does” nothing.”

    Yes, pmoran, I agree that’s where we should head. Patrick Wall, one of the fathers of pain science, said (roughly paraphrased), “Placebo is not something done to a patient, it is something elicited from one.”

  43. Diane Jacobs says:

    Here is a video by David Butler PT, Australia, demonstrating use of mirror therapy, not just for phantom upper limb pain but also some other very gnarly kinds of hand pain. Butler wrote a provocative book about ten years ago, The Sensitive Nervous System, helping a chain reaction accelerate within the profession. It has made us revisit the idea that placebo is always or forever a “bad” thing… Placebo may be, from a medical standpoint and definition, a research foe, but it can certainly be a clinical friend. It is a research foe if one is trying to discern some sort of isolated effect from a treatment technique, like acupuncture, like manual therapy. It’s a clinical friend if it can be acknowledged, harnessed, used – or rather, I should say – elicited, properly and ethically.

  44. FelixO says:

    You quote this section from the Pulse article:

    ‘As long as findings from research can provide confidence in the positive effect of the physical intervention at the heart of the treatment, then any added benefit brought by the therapeutic relationship and the context for treatment should count as part of the treatment effect,’ the report says.

    ‘For complementary therapies such a holistic approach to effectiveness should be adopted by bodies such as NICE, when comparing cost-effectiveness across a range of treatments.’

    And you comment that:

    The “physical intervention at the heart of the treatment” is functionally the same thing as – non-specific placebo effects. They want to take a “holistic” approach to evidence (another useful marketing brand), meaning they get to decide what the evidence means.

    I do not agree with your interpretation of the phrase “physical intervention at the heart of the treatment”.

    My reading of this quote, taken on its own, is that it is saying:

    “as long as the treatment is known to be beneficial (as shown by research) then we should be allowed to include the added benefit of our nice offices, fancy stories and long consultations when comparing against ‘the standard of care’”

    I will now go off and read the report which can be downloaded here:
    http://www.kingsfund.org.uk/document.rm?id=8425

  45. FelixO says:

    Hi,

    The report defines the following terms:

    Efficacy – the question of whether the specific intervention works, and how

    Effectiveness – the question of whether the whole intervention including all the non-specific effects generates positive outcomes

    To a layman such as myself this sounds like redefining the language to suit your requirements (i.e. weasel words!)

    Is there any recognition of the distinct meanings of these terms in medical literature?

    Thanks

  46. twaza says:

    FelixO, the standard definitions of efficacy and effectiveness in science-based medicine are:

    Efficacy: A measure of the benefit resulting from an intervention for a given health problem under the ideal conditions of an investigation. (This is what you measure in a typical randomized controlled trial (RCT))

    Effectiveness: A measure of the benefit resulting from an intervention for a given health problem under usual conditions of clinical care for a particular group. (This is what you measure in a pragmatic trial. A pragmatic trial is a particular kind of RCT)

  47. twaza says:

    The rebranding of pragmatic trials continues: see http://www.bmj.com/cgi/content/extract/339/sep01_2/b3335
