Articles

Testing the “individualization” of CAM treatments

One of the common claims of alternative medicine practitioners is that they individualize their treatment while conventional medicine treats all patients the same. This is nonsense on several levels, but it is also a common excuse for why randomized clinical trials cannot be performed, or cannot be viewed as reliable evidence, in evaluating some alternative therapies. However, some trials have been done that attempt to account for this supposed individualization of therapy, and generally they have failed to show a benefit to the supposedly individualized approach. One of those, involving Traditional Chinese Medicine (TCM), was recently discussed by Edzard Ernst, one of the few, and one of the most productive, researchers in the CAM field applying an evidence-based approach:

Matthias Lechner, MD, Iva Steirer, MD, Benno Brinkhaus, MD, Yun Chen, CMD, Claudia Krist-Dungl, MS, Alexandra Koschier, MS, Martina Gantschacher, MA, Kurt Neumann, MS, and Andrea Zauner-Dungl, MD. Efficacy of Individualized Chinese Herbal Medication in Osteoarthrosis of Hip and Knee: A Double-Blind, Randomized-Controlled Clinical Study. The Journal of Alternative and Complementary Medicine. 2011;17(6):539–547.

First, why is the notion that CAM is somehow more individualized than conventional care total nonsense? Well, to begin with, any good doctor considers the particular history, physical examination findings, diagnostic test results, known medical problems, and concurrent therapies of each patient. If individualized treatment simply means considering the unique circumstances and values of the particular patient you are treating, then all good medicine is individualized. That concept is even built into the common definitions of evidence-based medicine:

Evidence based medicine is the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients. The practice of evidence based medicine means integrating individual clinical expertise with the best available external clinical evidence from systematic research.

[EBM is] the integration of the best research evidence with our clinical expertise and our patient’s unique values and circumstances.

However, when CAM practitioners claim that formal scientific research and science-based medicine ignore individual variation, they are usually referring to the practice of studying groups of patients under controlled conditions and then applying lessons learned from those studies to the care of individuals. They claim that since we are all snowflakes, utterly unique, what is learned from groups cannot tell us anything useful about individuals.

This argument fails most dramatically on the simple evidence of the tremendous effectiveness of science-based medicine. Tens of thousands of years of looking at patients one by one and trying to figure out based on those experiences what to do for the next patient failed to control or eliminate any common diseases or meaningfully improve the length and quality of human life and health. A couple of centuries of gradually relying on formal scientific research instead of such haphazard individual experiences has wiped out or dramatically reduced many common and deadly diseases, nearly doubled average life expectancy (at least for those who can afford to use science-based medicine), and in many other ways unequivocally improved our health. It requires deep self-delusion to deny that science works better than prescientific, unstructured ways of figuring out how to preserve and restore health.

On a more theoretical level, however, consider this. Statistics can indicate the probability of winning or losing a game of chance very precisely, on the group level. As an individual, of course, you can’t know with certainty whether you will win or lose if you go to Las Vegas and play these games, because these statistics only describe what happens over the course of many trials; that is, what will happen on average when large numbers of people play. They don’t predict for you, as a unique individual, what your results at blackjack or roulette will be. This is very much like the situation in science, where controlled studies look at outcomes on the level of the group but can’t precisely predict the results of a therapy in an individual patient.

And yet, casinos make enormous sums of money by playing the odds and expecting that most people will lose. This is a successful strategy for them. And many people lose, some with disastrous personal consequences, by imagining that they are exempt from the statistical rules that apply to groups and that some special individual factor will allow them to beat the odds. Choosing to believe that general statistical principles don’t apply to them because they are special and unique ruins people’s lives in Vegas, and in medicine. Choosing to play with or against the odds, as defined by formal research, is no guarantee, but it is much more likely to lead to a good outcome than imagining the odds don’t matter because each of us is unique.
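To make the casino analogy concrete, here is a minimal, purely illustrative simulation sketch (mine, not from the study or the post), assuming an even-money bet that wins with probability 18/38, roughly the red/black bet at American roulette. Individual sessions are unpredictable, and a fair number of players do walk away ahead, but the average result across many sessions settles reliably onto the house edge, which is the only number the casino needs to know.

```python
import random

def play_session(n_bets=100, win_prob=18/38, stake=1):
    """Simulate one gambler making n_bets even-money bets.

    win_prob = 18/38 approximates a red/black bet at American roulette.
    """
    return sum(stake if random.random() < win_prob else -stake
               for _ in range(n_bets))

random.seed(1)
sessions = [play_session() for _ in range(10_000)]

# Individual outcomes are all over the place: some players finish well ahead.
print("best session:    ", max(sessions))
print("worst session:   ", min(sessions))
print("winning sessions:", sum(s > 0 for s in sessions) / len(sessions))

# But the group average converges on the expected value per session,
# n_bets * (2 * win_prob - 1), about -5.3 units: the house edge.
print("mean result:     ", sum(sessions) / len(sessions))
print("expected value:  ", 100 * (2 * 18/38 - 1))
```

The parallel to clinical evidence is the same: a trial tells you the average effect across a group, not your personal outcome, yet betting with that average remains the best available strategy.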

Finally, when a CAM practitioner claims they treat every individual patient based on that patient’s unique characteristics and that science-based medicine treats all patients as if they were the same because it bases treatment guidelines on research done on groups, they are simply mistaken. If a series of controlled studies indicates that Treatment X is better than Treatment Y for a certain disease, and if I use this to support giving patients with that disease Treatment X most of the time, then yes, I am applying information gained from population research to individuals. I am playing the odds.

However, if a CAM practitioner looks at a patient and evaluates their particular characteristics and then decides on a specific treatment, where do they come up with the connection between the patient’s characteristics and the treatment? They use their personal experience, gained from seeing what happens with prior patients, or they use rules laid down by other practitioners based on their own experiences, or they rely on general rules based on the theoretical ideas behind the style of therapy they use. In other words, they extrapolate from observations made on other patients to the individual they are currently treating.

This is exactly the same as what a science-based practitioner does, with one important difference: the generalizations that scientific medicine applies to individual patients come from formal, controlled research designed to compensate for the unreliability of individual observations and judgments, whereas the generalizations used by CAM come from informal, unstructured observations with no control for bias or the many common errors that mislead us when we study disease. CAM practitioners are using generalizations based on the study of groups to decide how to treat each new case; they are just relying on poorer-quality group evidence.

Ok, so how does this apply to the clinical study that looked at supposedly individualized TCM herbal therapy for arthritis of the hip and knee? Well, the study started by randomly assigning patients to either receive individualized herbal treatment based on the judgment of experienced practitioners in each case, or a standard herbal mixture believed, again based on past experience with groups of patients, not to have any benefit for arthritis. The experimental formula consisted of a number of herbs selected in advance based on TCM theory, which the investigators expected might be useful for the kind of disease they were studying. However, individual patients received particular combinations of these ingredients based on the judgment of practitioners at the time they were evaluated.

The control treatment (not really a placebo, since it contained chemicals which might or might not have real physiological effects, none of which have been thoroughly evaluated in scientific studies) was a collection of herbs not believed to have benefit for arthritis based on prior experience and TCM theory. It was made to taste similar to the herbs pre-selected for the experimental treatment to help make it harder for patients to know which they were receiving. I can already hear the complaints of some herbalists that this makes it an inappropriate control, since taste is one of the guiding principles for the use of herbs in some approaches to herbal medicine. I’ll leave that pseudoscientific objection aside for now since it’s not directly relevant to the point here.

Baseline characteristics were similar between the two groups of patients, and randomization and blinding appeared to be properly conducted. Overall the study was well-done methodologically, with a formal accounting of patients lost to follow-up and a reasonable effort to use standard and predefined outcome measures.

So what were the results? Well, as is usual in a study looking at a subjective measure like pain, all patients improved. There was, however, no difference between those who received individualized treatment and those who received a random herbal concoction not expected to have any effect on arthritis. This most likely indicates that nothing was happening here other than nonspecific effects associated with participating in a trial, including placebo effects, regression to the mean, the Hawthorne effect, and all the usual suspects that fool us in clinical trials, and in real life.

This study nicely illustrates several of the issues associated with supposed individualization of CAM treatment. First, it shows that such treatment is not, in any meaningful sense, any more individualized than good quality science-based medical treatment. Choosing a selection of herbs based on previous experience, historical use, tradition, and the unscientific theories of Traditional Chinese Medicine, and then selecting which of these herbs to give each patient based on the same prior experience and unscientific theory, is still applying generalizations based on groups to individuals. It simply uses generalizations based on unreliable sources of data.

The study also illustrates that individualizing therapy in this way doesn’t add any efficacy to the treatment. Not surprisingly, the study showed, as the others mentioned earlier have as well, that tailoring treatment to individuals based on generalizations derived from biased and unreliable sources of information leads to a therapy no more effective than randomly picking herbs out of a hat.

The difference between effective science-based medicine and ineffective medicine of any kind, conventional or alternative, is that the general principles used to guide therapy are derived from formal, controlled research that compensates for the weaknesses in our individual, informal, and unstructured judgment. If individualized medicine is just a code for using informal group observations instead of structured scientific ones to guide therapy, then it is not surprising that it doesn’t work any better than just making up a treatment haphazardly with no guiding principles at all.

Posted in: Science and Medicine, Traditional Chinese Medicine


18 thoughts on “Testing the “individualization” of CAM treatments”

  1. cervantes says:

    That said, it is true that RCTs in general often mask heterogeneity of treatment effects. In fact, although there may be a benefit from a treatment on average, some people could actually be harmed by it, while others will simply waste time and money; conversely, a negative trial can mask a real benefit for a subgroup.

    One of the great challenges which we are now taking on in science based medicine is indeed to improve our ability to target treatments appropriately to individuals. There are both practical and methodological difficulties to be overcome, but fortunately, one of the lesser known components of the Affordable Care Act was creation of the Patient Centered Outcomes Research Institute (http://www.pcori.org/), which has a guaranteed stream of funding to support the needed methodological innovation and research.

    Again, it’s fine to take on quackery here but I still think there’s room for more critical thinking about the state of science based medicine, including its translation into medical practice in the real world. You leave an opening for the woomeisters by failing to acknowledge the grain of truth that is often embedded in their claims.

  2. rork says:

    I’d add to what cervantes said: RCTs are sometimes designed to measure variables that reveal the cause of the heterogeneity of effects. We are actively searching for interactions (in the statistical sense: group A shows more difference for the two treatments than group B). Maybe fine-tuning exactly which breast cancer people get Herceptin (trastuzumab, it’s anti-ERBB2) is a good example. Some of that is in phase III studies, and it’s very common in phase II’s (where we do a lot of wishful thinking).

    I do often see research docs make the mistake of thinking that if treatment 1 doesn’t beat treatment 2 for a given indication it means treatment 1 can be dismissed. They forget about the possible benefit to a subgroup that cervantes mentioned. A variant is having 2 subgroups from the get-go, finding that my new treatment doesn’t make group A do any better than group B, and thinking the new treatment can be dismissed. That’s testing a main effect rather than the interaction – it’s a mistake. Maybe with the alternative treatment A’s do much worse than B’s.

    I completely avoid “individualized”, even when there’s a grain of truth in it. Like when you adjust doses based on the levels of a marker you monitor. For me, mitotane against adrenal cancer is a famous example, though very rare. No two people get the same dose schedule, but it does follow an algorithm. (It’s an awful thing to use, being very close to DDT. We are working on new weapons. The TCM folks – not so much.)

  3. @cervantes

    Arguing, as I do, that RCTs are superior to uncontrolled, informal observations is not the same as arguing that RCTs are perfect, which you seem to be implying I claimed. There is no question RCTs have many weaknesses, and those of us writing here regularly acknowledge that. They are simply better than the alternatives the “woomeisters” are offering.

    There is also no doubt that truly individualized therapy, which could effectively account for all the relevant variation between individuals in their illness and response to therapy, would be a great thing. We aren’t there yet, and it looks to be quite a ways off, but it’s certainly a worthy goal. That, however, has little to do with the fake individualization that characterizes TCM, homeopathy, and many other forms of alt med. Pretending to individualize therapy while really just extrapolating from past cases is not an improvement over relying on RCTs, with all their flaws.

  4. Harriet Hall says:

    Did you notice the wording of the abstract’s conclusion? “While the individual prescription consisting of medicinal herbs according to TCM diagnosis investigated in this trial tend to improve the osteoarthritis, the same effect was also achieved with the nonspecific prescription.”

    They put a positive spin on it: “herbs tended to improve osteoarthritis.” A negative spin might be more appropriate: “This is not evidence that these herbs work. We have once again demonstrated the Hawthorne effect.”

  5. cervantes says:

    Dr. McKenzie, we have no disagreement on the substance. What I am saying is that in taking on the sCAMmers, we argue from a much stronger position if we acknowledge the limitations of the scientific evidentiary basis for medicine. We thereby demonstrate insight and humility, and can go on to explain how we are trying to improve our knowledge base, and see to it that it is properly translated into practice. And, BTW, there are major problems with the latter goal as well, which also would be suitable for discussion here.

    This is a theme of my comments here and I hope I am not becoming tiresome. But I feel these reminders are needed.

  6. qetzal says:

    @Harriet Hall

    That kind of language irritates me as well. They showed improvements in both groups in their study. That is NOT the same as showing that both treatments resulted in improvements. Whenever I see unwarranted conclusions like that, whether in a CAM context or in a mainstream scientific paper, I know that the authors are guilty of sloppy logic and/or biased reasoning.

  7. Purenoiz says:

    @cervantes

    “As soon as by one’s own propaganda even a glimpse of right on the other side is admitted, the cause for doubting one’s own right is laid.”

    People love a confident fool over an intelligent uncertainty. Unfortunately many lay people have been fooled into thinking science is no more than consensus building, popularity and propaganda. It is sad but true. That quote above is from an evil madman, but he understood people pretty well, at least in how to manipulate them.

  8. phayes says:

    @rork

    “I do often see research docs make the mistake of thinking that if treatment 1 doesn’t beat treatment 2 for a given indication it means treatment 1 can be dismissed. They forget about the possible benefit to a subgroup that cervantes mentioned.”

    :) http://en.wikipedia.org/wiki/Simpson's_paradox

  9. @Harriett

    Yes, the conclusion that because both groups improved the treatment must work, and so must the control, is just like the conclusion that because “real” acupuncture performs no better than sham acupuncture, both must work via the magical placebo effect.

    It is impressive the contortions researchers can go through to spin a finding that an intervention is no better than the presumably ineffective control into a finding that not only does the intervention work but “We’ve discovered a brand new effective treatment in the control!” You have to admire the optimism, if not the intellectual integrity of the effort.

  10. pmoran says:

    This argument fails most dramatically on the simple evidence of the tremendous effectiveness of science-based medicine. Tens of thousands of years of looking at patients one by one and trying to figure out based on those experiences what to do for the next patient failed to control or eliminate any common diseases or meaningfully improve the length and quality of human life and health.

    I liked the casino analogy, but am wary of the implication that clinical trial technology has been responsible for the successes of scientific medicine.

    Those methods can unquestionably help determine what methods work and optimize their use, but before that you have to have some idea what to test.

    If you think about it, major advances in medicine have actually arisen from one or more of these: serendipity, anecdotal observations, trial and error, advances in technology, and discoveries in the basic medical sciences.

  11. Harriet Hall says:

    @pmoran,

    “If you think about it, major advances in medicine have actually arisen from one or more of these: serendipity, anecdotal observations, trial and error, advances in technology, and discoveries in the basic medical sciences.”

    Actually, major advances in medicine have arisen from scientifically testing for clinical benefits based on those other things.

  12. @pmoran

    Yes, Dr. Hall has it right. We’ve always had plenty of ideas about how to fix disease, derived from luck, anecdote, trial and error, and so on. What we didn’t have was a sound method for figuring out which were true and which weren’t. A rigorous, methodical approach to building a case for or against a hypothesis, from basic pathophysiology up through clinical trials, is why science-based medicine works better than opinion-based or faith-based medicine. It is the methodology designed to compensate for the limitations of human judgment that makes the difference, and clinical trials are a major element of that.

  13. pmoran says:

    I understand that. I think we are saying much the same thing, and I was surprised when Harriet reacted.

    We create a problem for scientific skepticism when we equate “science” and “scientific validity” to specific methodology. We regularly pronounce strongly against certain concepts or treatment methods even though lacking that level of evidence.

    “Science” is lots of things — looking at the sky and predicting that it might rain today — observing in one’s kitchen that dilution and shaking does not enhance the physical or chemical or biological properties of anything.

  14. rork says:

    McKenzie: “There is also no doubt that truly individualized therapy, which could effectively account for all the relevant variation between individuals in their illness and response to therapy, would be a great thing. We aren’t there yet…”
    I challenge people using these words (“truly individualized”) to say exactly what they mean. My suspicion: you can’t. I’ll grant you ten million dimensional vectors for each patient if you wish. Now what do you do?
