Jun 01 2011
The course of research into so-called complementary and alternative medicine (CAM) over the last 20 years has largely followed the same pattern. There was little research into many of the popular CAM modalities, but proponents supported them anyway. We don’t need science, they argued, because we have anecdotes, history, and intuition.
As media attention, and with it public attention, increasingly turned to CAM, serious scientific research followed. A specific manifestation of this was the National Center for Complementary and Alternative Medicine (NCCAM). CAM proponents then argued that their modalities were legitimate because they were being studied (as if that’s enough). Just you wait until all the positive evidence comes rolling in showing how right we were all along.
But then the evidence started coming in negative. A review of the research funded by NCCAM, for example, found that 10 years and 2.5 billion dollars of research had produced no proof of efficacy for any CAM modality. They must be doing something wrong, Senator Harkin (the NCCAM’s major backer) complained. Proponents engaged in a bit of the kettle defense – they argue that the evidence is positive (by cherry picking, usually preliminary, evidence), but when it is pointed out to them that the evidence is actually negative, they argue that the studies were not done fairly. And then, when they are allowed to have studies done their way, but still well-controlled, and those studies are still negative, they argue that “Western science cannot test my CAM modalities.”
But at the same time, they cannot shake the need for scientific evidence to continue to push their modalities into the mainstream. The “your science can’t test my woo” defense only goes so far. So they have put themselves into a pickle – they demanded funding for CAM research, but now have to deal with the fact that the research is largely negative. CAM proponents are mostly not interested in finding what really works, and abandoning what does not work (I have an open challenge to anyone who can point to a CAM modality that was largely abandoned, rejected, or condemned by CAM proponents because of evidence of lack of efficacy). They are interested in using science to support and promote what they already believe works (a cardinal feature of pseudoscience).
And so they have entered the next phase of CAM research – the post-RCT (randomized controlled trial) phase. They have discovered the “pragmatic” study. You have to give it to them for their cleverness. A pragmatic study is meant to be a comparison of treatment options in real-world conditions. It studies treatments that already have proven efficacy from RCTs to see how they work, and how they compare, when applied in the less controlled environment of real-world practice. Pragmatic studies are useful for addressing the weaknesses of RCTs, mainly their somewhat contrived conditions (having strict inclusion criteria, for example).
But pragmatic studies are not efficacy trials themselves. They cannot be used to determine if a treatment works, because they do not control for variables and they are not blinded, so they are susceptible to placebo and non-specific effects. It is an abuse of the pragmatic study design to test a treatment that is not proven or to make efficacy claims based upon them. That has not stopped CAM proponents from doing just that.
In the news recently is just the latest example. Let me tell you the results of the study before I tell you what the study is of – just look at the data (I replaced the name of the treatment in the tables with just the word “treatment”):
This is what you need to know about the trial. There were two groups, treatment and control. Subjects were those with frequent doctor visits for symptoms that have not been diagnosed. While they were randomized, they were completely unblinded – everyone involved with the subjects knew who was getting treated and who wasn’t. At 26 weeks the control group was crossed over to receive treatment – so the difference up to 26 weeks is most important.
On two of the measures, quality of life and general practitioner consultation rates, there was no difference. On the other two measures there was a slight improvement in the treatment group, barely reaching statistical significance. If this were an efficacy trial, this data would be unconvincing. What can fairly be concluded from this trial is that the treatment has minimal to no effect, and the tiny and inconsistent effect seen cannot be separated from placebo or non-specific effects. Further, it shows how anemic even placebo effects are for this treatment in this patient population. As a pragmatic trial, it’s essentially negative. As an efficacy trial, it’s worthless.
The study authors, however, concluded that their treatment was effective and recommended that it be incorporated into general practice.
The treatment in question is five-element acupuncture – acupuncture designed to balance the five elements of fire, water, metal, earth, and wood. This is the equivalent of balancing the four humors – in other words, it is pre-scientific superstitious nonsense. The study authors are all proponents. What is most amazing is how they got their paper, complete with unsupported conclusions, past peer-review.
The study authors are using the “part of this nutritious breakfast” con. Even as a child I understood that the sugary cereal, or whatever the commercial was trying to sell to me, was only “part of this nutritious breakfast” because the rest of the breakfast was nutritious all by itself. A doughnut would be “part of this nutritious breakfast.” So the authors of this study bundle five-element acupuncture with an hour of kind attention from a practitioner along with lifestyle advice and encouragement. They admit that they cannot separate out the variables here. Then they conclude that acupuncture is “part of this healthy regimen.” There is every reason to believe that it is an irrelevant part – as irrelevant as the doughnut is to the nutrition of a complete breakfast.
The pragmatic study bait and switch is here – it is now a firm part of the CAM strategy for promoting implausible therapies that don’t work. Editors and peer-reviewers need to be more aware of this scam so they don’t fall for it and inadvertently promote it, as they did in this case. Further, the results of this trial are not impressive even at face value – it’s basically negative, so it’s a double swindle.
All of this has not stopped the headlines from declaring that “acupuncture works” – which appears to be the only goal of such research.