David Gorski recently pointed out that Science Based Medicine is going on five years. Amazing. That there would be so much to write about day after day comes as a surprise to me. Somehow I vaguely thought that ‘controversies’ would be resolved. Pick a SCAM, contrast the SCAM with reality as best we understand it, and, once the SCAM was found wanting, it would be abandoned. Why would rational, thoughtful people persist in the pursuit of irrational behavior, contradicted by the universe?

Ha. More the fool me. I would never have guessed that these SCAMs are harder to kill than Dracula (at least one version of Dracula). Stake them and back they come*.

I have tried to avoid repeating information found in prior posts by myself and others, in part because I am lazy and in part because, well, I have said it before. Just look it up. I have come to realize (all too slowly) that each blog entry should be self-contained and that much of the old material is lost in the corn maze (a punning homophone) that is WordPress. Reading on my second favorite computer reinforces the realization that each post often needs to be an island universe, complete in itself.

Medicine is difficult. At least I find it difficult. You have to first come up with a cause of the symptoms, understand the underlying physiology, and then try to determine the best course of therapy. I have it easy in Infectious Diseases, a much more binary specialty. Patients are either infected or not, I have a therapy or not, and I either cure them or not. Not a lot of wiggle room in the treatment of, say, endocarditis or pneumonia, and very little in the way of bias and placebo to obfuscate the therapy. Or so I would think.

Much of the rest of medicine is not as clear cut. As a physician there are multiple ways to assess the potential efficacy of a therapy. One is a definitive randomized placebo controlled trial. Those do not seem to occur as often this decade as in prior times when giants walked the earth. Or so my faulty memory suggests.

Another way is to try and master the literature, a futile endeavor. In two areas of medicine I have more than passing knowledge: Infectious Diseases and SCAMs. The nice thing about having a breadth and depth of reading is you can understand where new articles fit into an overall picture. What was the patient population studied, what were the weaknesses of the study, how applicable is the information to other populations? The downside of such expertise is that I have to rely on the kindness of strangers in other areas of medicine, since I know next to nothing.

The breadth of knowledge has also made sources of information that I once trusted much more suspect. Popular media? No way. Newspapers and magazines so often get it wrong in areas of my expertise that I no longer trust them in areas where I know nothing (just about everything else).

Meta-analysis? Nope. Talk about disillusion. I used to rely on meta-analyses, but they are worse than laws and sausages, ceasing to inspire respect in proportion as we know how they are made. I still like meta-analyses and systematic reviews as a nice overview of a clinical topic, but, for reasons we will see, I am hesitant to draw any therapeutic conclusions from any meta-analysis.

And the worst source is an anecdote. If I have said it once, I have said it once: the three most dangerous words in medicine are “in my experience.” I had always thought of it in the context of physicians deciding on a therapeutic intervention, not so much from the patient’s perspective. I realized that for patients that is often the primary way they decide to try a therapy, especially a SCAM therapy.

What would you do if you were a highly intelligent intellectual at one of the top newspapers and were curious about acupuncture? With the research capacity of, say, the Chicago Sun Times, you could get the skinny on any topic, right? Or instead, you could ask your readers if you should get acupuncture. Sigh. A microcosm of why, compared to SBM bloggers, Sisyphus had it easy.

That method of seeking medical advice has been around since 480 BC, as mentioned by Herodotos:

‘They had a very practical habit: on the marketplace passers-by give advice to the patient about their ailment, in which case they can sometimes rely on personal experience or take advantage of someone else who suffered from the same symptoms. Nobody can just pass by without saying anything. It is obligatory to ask the patient what ails him. They bury their corpses in honey and mourn in the same way as in Egypt.’ (Thanks to Cees Renckens for the quote).

The more things change, the more they stay the same. Except for the honey part.

Everything I distrust about medical reporting seems, of late, to have found a home in the Atlantic.  Check out the headline: Biological Implausibility Aside, Acupuncture Works by Lindsay Abrams.

Really? News to me. I tend towards the opinion that if a process is biologically impossible, then a) any positive results are likely to be spurious and due to bias, and b) if the bias is removed by the old double-blind, placebo-controlled trial, most effects will fade to zero.

It often appears that journals and magazines do not even bother to read their own reports. A scant two years ago the Atlantic reported on the work of John Ioannidis and why most of the medical literature is suspect

“Much of what medical researchers conclude in their studies is misleading, exaggerated, or flat-out wrong. So why are doctors—to a striking extent—still drawing upon misinformation in their everyday practice?”

and that in clinical trials

“the range of errors being committed was astonishing: from what questions researchers posed, to how they set up the studies, to which patients they recruited for the studies, to which measurements they took, to how they analyzed the data, to how they presented their results, to how particular studies came to be published in medical journals.

This array suggested a bigger, underlying dysfunction, and Ioannidis thought he knew what it was. “The studies were biased,” he says. “Sometimes they were overtly biased. Sometimes it was difficult to see the bias, but it was there.””

One of the epiphanies in my understanding of how to approach the medical literature came when I discovered the works of Dr. Ioannidis. It would appear that Lindsay Abrams didn’t read the article about Dr. Ioannidis. More likely it was read and ignored. All the issues raised about the validity of the results of medical research never seem to be applied to SCAM studies.

What was this earth-shattering, high-quality clinical paper that demonstrated, after decades of lousy research, that acupuncture works? Finally a large randomized placebo-controlled clinical trial with good endpoints that demonstrates efficacy? Certainly all the prior trials that meet those criteria demonstrate that acupuncture doesn’t work and that acupuncture is no better than placebo.

Readers of my entries know my assessment of placebo: it does not alter pathophysiology, just the perception of pain and disease, and then only a wee, clinically irrelevant, amount. I agree with the article that acupuncture is “nothing more than fancy ways of invoking the placebo.” And you know Crislip’s law: acupuncture effect = placebo and placebo effect = nothing, therefore the acupuncture effect = nothing.

I well know that interacting with a patient with a therapeutic intervention will alter the patient’s perception of what is occurring without altering the underlying pathophysiology. As the NEJM study revealed, if patients think they are receiving a therapy even when they are not, they will report their asthma is better even when the objective tests show no improvement. To my mind the placebo effect is no different than kissing a child’s boo-boo: it is subjectively beneficial although no objective changes occur to the boo-boo. These human interactions are an important, if ineffective, part of human relationships.

The Atlantic was referring to a

“new, large study out of Memorial Sloan-Kettering Cancer Center, published in the Archives of Internal Medicine, cautiously suggests that there is indeed something more to acupuncture. A meta-analysis of 18,000 patients from 29 randomized controlled studies, it found that the treatment was more effective than controls in relieving back and neck pain, osteoarthritis, chronic headache, and shoulder pain. Significantly, it also found that real acupuncture was more effective than shams.”

Sigh. I see the word study, and I think oh good. Someone has done a clinical trial: enrolled patients and compared, in a double-blind manner, interventions against placebo. No. It is a meta-analysis. Calling a meta-analysis a study is not unlike a library declaring they acquired several dozen new books and magazines when their copy of Readers Digest arrives. Someone massaging preexisting data; hardly a study. Nice for an overview of a topic, but worthless for drawing definitive conclusions.

When applied to real treatments, those based on reality and known physiology, the results of meta-analyses are often not predictive of well-done clinical trials:

The outcomes of the 12 large randomized, controlled trials that we studied were not predicted accurately 35 percent of the time by the meta-analyses published previously on the same topics.

Of course, that presupposes that the topic is not tooth fairy science. If the intervention is based on nonsense, then the validity of any meta-analysis of it is likely to be even less. My bias, clearly stated here, is that one should be able to reason up and down the pathophysiologic pathways of disease and treatment on basic biologic plausibility and basic principles. It comes from being a truly holistic doctor, understanding infections from the level of amino acid substitutions leading to drug resistance or disease susceptibility through to the interactions with the earth’s ecosystem.

That meta-analyses are unreliable should not be surprising. Most reality based clinical trials are flawed, and if you collect a series of flawed studies, you end up with one big flaw. The idea behind the meta-analysis, collecting all the cow pies into one big pile and making gold, is only as good as the quality of the initial pies. It is rare to collect Marie Callender’s.
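
As a toy illustration of the cow pie problem (entirely invented numbers, not any real dataset), here is what standard fixed-effect, inverse-variance pooling does when twenty small trials of a treatment with zero true effect all carry the same modest bias:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical illustration: the true effect is zero, but every small trial
# shares the same modest bias (unblinded raters, selective reporting, etc.).
true_effect = 0.0
shared_bias = 0.2                          # made-up systematic bias per trial
n_trials = 20
sizes = rng.integers(30, 120, n_trials)    # small, underpowered trials

# Each trial's observed effect = truth + shared bias + sampling noise
standard_errors = 1.0 / np.sqrt(sizes)
observed = true_effect + shared_bias + rng.normal(0.0, standard_errors)

# Fixed-effect, inverse-variance pooling: the standard meta-analytic recipe
weights = 1.0 / standard_errors**2
pooled = np.sum(weights * observed) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))

print(f"pooled effect = {pooled:.2f} +/- {1.96 * pooled_se:.2f} (95% CI)")
# Pooling averages away the noise but not the shared bias: the pooled estimate
# sits near 0.2, with a deceptively tight confidence interval around a wrong answer.
```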

I am not a true skeptic. I tend to avoid the words implausible and unlikely where I do not think they apply. As best can be determined, acupuncture, homeopathy, energy therapies, and a host of other SCAMs are, at the level of first principles, total bunk. And legitimate therapies cannot flow from pure bunk, nor can a meta-analysis make pumpkin pie from cow pie.

Prior meta-analyses of acupuncture suggest they are substandard. The perhaps ironic and self-referential Systematic Review of Systematic Reviews of Acupuncture published 1996-2005 suggests

“Systematic reviews of acupuncture have overstated effectiveness by including studies likely to be biased. They provide no robust evidence that acupuncture works for any indication.”

And as the Wikipedia points out

“The most severe weakness and abuse of meta-analysis often occurs when the person or persons doing the meta-analysis have an economic, social, or political agenda… If a meta-analysis is conducted by an individual or organization with a bias or predetermined desired outcome, it should be treated as highly suspect or having a high likelihood of being “junk science.” From an integrity perspective, researchers with a bias should avoid meta-analysis and use a less abuse-prone (or independent) form of research. “

This is no small issue if you happen to be a practicing clinician who takes the health and wealth of your patients seriously. Do I trust a meta-analysis that suggests linezolid is better than vancomycin for the treatment of pneumonia when the studies and the meta-analysis are sponsored by the company? It doesn’t completely abrogate the results of the trials, but it is well known that when the researcher has an axe to grind, the results will tend to sharpen the axe of the researcher. All studies have bias; the question is how well they compensate. I usually discount the results by about half when applied to the real world.

When approaching any study where the end points are subjective and at the limits of perception, two archetypes need to be considered. The first is the endless ability of people to fool themselves.

At the turn of the last century a French physicist, Blondlot, discovered N-rays. These rays were at the limit of detection and, like acupuncture, made no sense in the context of known reality. Multiple papers were published on N-rays until a visiting professor, unknown to Blondlot, incapacitated the machine, yet Blondlot still saw the N-rays.

N-rays were a purely subjective phenomenon, with the scientists involved having recorded data that matched their expectations.

Beware the N-rays.

The other archetype is Clever Hans, the counting horse, who was actually reading the nonverbal cues from his owner to know when to stop counting. Humans are probably more sensitive and skilled than horses at reading nonverbal cues, leading to

The observer-expectancy effect (also called the experimenter-expectancy effect, expectancy bias, observer effect, or experimenter effect) is a form of reactivity in which a researcher’s cognitive bias causes them to unconsciously influence the participants of an experiment. It is a significant threat to a study’s internal validity, and is therefore typically controlled using a double-blind experimental design.

Both N-rays and Clever Hans are examples of the importance of blinding the researcher and the patient, and of why, if blinding is not adequate, any results are suspect, especially if the end points are subjective.

The studies of acupuncture that exclude N-ray and Clever Hans effects (i.e. really, really blinded) always show that acupuncture does nothing compared to fake acupuncture.
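
To put a number on the Clever Hans problem, here is a toy simulation of my own (the numbers are invented, not taken from any of the trials discussed): the true difference between real and sham is zero, but unblinded assessors record pain scores a couple of points lower when they know the patient got the real needles. With a sample in the thousands, that bias alone is “statistically significant.”

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Invented numbers for illustration: 0-100 pain scores, no true difference
# between the arms, and a sample in the thousands per arm.
n = 9000
sham = rng.normal(43, 20, n).clip(0, 100)
real = rng.normal(43, 20, n).clip(0, 100)   # identical distribution by design

# An unblinded assessor knows who got the 'real' needles and, without meaning
# to, records their scores a couple of points lower (Clever Hans in a white coat).
assessor_bias = 2.5
recorded_real = (real - assessor_bias).clip(0, 100)

t_stat, p_value = stats.ttest_ind(recorded_real, sham)
print(f"mean difference = {recorded_real.mean() - sham.mean():.1f} points, p = {p_value:.1e}")
# A 2-3 point shift on a 100-point scale is clinically meaningless, yet at this
# sample size the p-value is vanishingly small. Significance is not importance.
```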

The classic interpretation of clinical trials is that when an intervention, say mammary artery ligation, is no better than a sham mammary artery ligation, then neither is effective. I have been in medicine a long time, and that career is littered with interventions that were discovered to be no better than sham/placebo and (sometimes slowly and reluctantly) abandoned by the medical profession. That, of course, has yet to happen with any of the SCAMs discussed on this blog. While no hospital still offers mammary artery ligation for angina, Centers of Integrative Medicine and their ilk become more common.

So that is the background I use in approaching a clinical trial, a meta-analysis, and a paper on SCAMs. I admit I have to read a meta-analysis with a Mr. Gumby-like understanding of the manipulation of the data, since statistics and I have never gotten along, despite my affinity for other forms of mathematics.

So how does “Acupuncture for Chronic Pain: Individual Patient Data Meta-analysis” stand up?

Color me unimpressed.

Done by the Acupuncture Trialists’ Collaboration. So no different than a meta-analysis by big pharma; they have a dog in the fight. That’s OK, we all have a dog in the fight. But any result is likely inflated.

After a search, 31 of 82 trials met their criteria for inclusion. Or, put another way, they left out 51 trials, the validity of which I cannot say. Would any of that information have changed the outcomes of the analysis? I don’t know; I lack the time to look at all the primary data in preparation of this entry.

They found that sham and real (as if there is a difference) acupuncture were more effective than doing nothing. I would expect that. The boo-boo is being tended to; there will be salubrious results.

Did they remove the N-ray and Clever Hans bias? Nope.

“health care providers obviously were aware of the treatment provided, and, as such, a certain degree of bias of our effect estimate for specific effects cannot be entirely ruled out.”

Right there you know that the results are mostly worthless:

“Putting their results into context, the authors of the study explain that for a pain rating of 60 on a 100-point scale, follow-up scores decreased to around 43 for those who had received no treatment, 35 for those who had received fake treatment, and 30 for those who received acupuncture. This translates into a 50 percent reduction in pain for the acupuncture patients, and only 30 and 42.5 percent reductions for the control and placebo groups, respectively.

It is impossible to measure pain objectively (Radiolab did a great piece on this last week), and the difference in pain reduction between sham and true acupuncture, though statistically significant, was small. But the authors’ methodical elimination of biases, coupled with their massive sample size, give weight to their findings.”

The biases that remain are probably more than enough to account for real acupuncture looking better than sham, the effect is clinically inconsequential, and a massive sample size of cow pies only leads to one big cow pie indeed.
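
For what the quoted numbers are worth, taking the Atlantic’s figures at face value, the arithmetic is easy to check:

```python
# Working through the Atlantic's quoted figures, taken at face value,
# on a 0-100 pain scale with a baseline rating of 60.
baseline = 60
followup = {"no treatment": 43, "sham acupuncture": 35, "real acupuncture": 30}

for arm, score in followup.items():
    drop = baseline - score
    # These come out close to the 30, 42.5, and 50 percent quoted in the article.
    print(f"{arm}: -{drop} points ({drop / baseline:.0%} reduction)")

# The headline comparison, real versus sham, is 35 - 30 = 5 points on a
# 100-point scale: detectable in a sample of 18,000, but about the clinical
# magnitude of kissing the boo-boo an extra time.
gap = followup["sham acupuncture"] - followup["real acupuncture"]
print(f"real vs sham difference: {gap} points")
```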

Dr. Strawman then weighs in:

Should the lack of biological plausibility lead us to reject compassion and empathy as a means to help improve our patients’ health?

How about

Should the lack of biological plausibility lead us to reject a costly and worthless therapy with known complications and instead use our compassion and empathy in a manner that is not based on lies?

The Fail blog needs to link to the Atlantic. It is a perfect match.

That is not all that is new about acupuncture of late. The Cochrane reviews continue to prove either that they have no standards or that they are really getting bored. They did a review of mumps treated with acupuncture. Really. They found one study. Honestly. They meta-analyzed away:

We could not reach any confident conclusions about the efficacy and safety of acupuncture based on one study. More high-quality research is needed.

They had the same conclusion with insomnia  and endometriosis.

Of course, pain, mumps, insomnia, and endometriosis all have the same underlying pathophysiology; no wonder it ‘works’ for so many diseases. Somehow I don’t think we need to do any research on using acupuncture to treat mumps, much less do a review.

Not surprisingly, when there is double blinding, the analgesic effect disappears. This was an interesting study since they treated acute pain rather than chronic pain patients, who

gather a lot of information about different pain treatments and firmly believe in different therapies.

They tested the ability of acupuncture, sham or real, to affect acute pain (cold and capsaicin). It didn’t. While the real acupuncture worked slightly better than sham for capsaicin-induced pain, the effect, like the Vickers results, “occurred mainly in a rating range that seemed irrelevant to clinical pain.”

In another study in the same issue of Pain there is no difference between sham (random punctures), ‘real’, or placebo (fake punctures) acupuncture for low back pain, but all three are better than conventional therapy. The big flaw in the study is that they used hardened acupuncturists with a mean of 8.5 years of practice who were not blinded. While not statistically different, real performed better than sham, and sham better than placebo, suggesting a Clever Hans effect.

Like the asthma study in the NEJM, the best one can conclude is that acupuncture is another pair of beer goggles of alternative medicine: it convinces the wearer that the disease looks better than it actually is.

* I am avoiding a Whack-a-mole metaphor.

Posted by Mark Crislip

Mark Crislip, MD has been a practicing Infectious Disease specialist in Portland, Oregon, from 1990 to 2023. He has been voted a US News and World Report best US doctor, best ID doctor in Portland Magazine multiple times, has multiple teaching awards and, most importantly, the ‘Attending Most Likely To Tell It Like It Is’ by the medical residents at his hospital. His multi-media empire can be found at edgydoc.com.