In the three prior posts of this series I tried to analyze some of the defects in the randomized clinical trials (RCTs) of homeopathic remedies for childhood diarrhea. The first entry showed that the methods of the first two RCTs (done in Nicaragua) could not produce a meaningful result because of the way the trials were set up. The second entry showed that the results obtained in those first two trials were clinically meaningless even if assumed to have come from more legitimate methods. The same applied to the third trial, in Nepal, analyzed in the third entry.
This entry will suggest that the authors’ fourth paper (Jacobs J, Jonas WB, Jimenez-Perez M, Crothers D. Homeopathy for childhood diarrhea: combined results and metaanalysis from three randomized, controlled clinical trials. Pediatr Infect Dis J. 2003;22:229-234), a meta-analysis (MA) of the data from the three RCTs, reached conclusions just as meaningless as those of the three trials.
The MA authors – several of the same workers from the three RCTs – begin by agreeing that the data from the RCTs, taken individually, were of borderline significance:
In our previous three studies, we evaluated the use of individualized homeopathic treatment of childhood diarrhea … The results of the two larger studies (n = 81, n = 116) were just at or near level of statistical significance. Because all three studies followed the same basic study design, […] we analyzed the combined data from these three studies to obtain greater statistical power. In addition we conducted a meta-analysis of effect-size difference […] to look for consistency of effects.
MAs and systematic reviews (SRs) are the two consensus methods for summarizing data from multiple individual studies. The search and inclusion criteria for RCTs are similar for SRs and MAs, but the objectives of the two differ somewhat, as do the forms of the reports. In SRs, the results are summarized more in narrative form, whereas in MAs the data are treated mathematically and the results are stated in statistical terms. Thus authors of SRs are freer to speculate on the degree of confidence that a method is effective, based on the numbers of positive and negative RCTs collected. Authors of MAs usually limit their comments to what the mathematical summary of the pooled data shows.
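To make the "greater statistical power" claim concrete, here is a minimal sketch of the standard fixed-effect (inverse-variance) pooling that underlies a meta-analysis of effect sizes. The effect sizes and standard errors below are made-up illustrative numbers, not the values from the Jacobs et al. trials; the point is only that studies individually near the significance threshold can pool to a nominally significant combined estimate.

```python
import math

def fixed_effect_meta(effects, std_errors):
    """Pool per-study effect sizes using inverse-variance weights.

    Returns the pooled effect, its standard error, and a two-sided
    p-value from the normal approximation.
    """
    weights = [1.0 / se ** 2 for se in std_errors]          # w_i = 1/SE_i^2
    pooled = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
    pooled_se = 1.0 / math.sqrt(sum(weights))
    z = pooled / pooled_se
    p_two_sided = math.erfc(abs(z) / math.sqrt(2))           # 2 * (1 - Phi(|z|))
    return pooled, pooled_se, p_two_sided

# Three hypothetical studies, each just shy of p = 0.05 on its own
# (individual z-scores: 1.88, 1.79, 1.65):
pooled, se, p = fixed_effect_meta([0.30, 0.25, 0.28], [0.16, 0.14, 0.17])
```

In this toy example the pooled z-score exceeds 3 and the combined p-value falls below 0.01 even though no single study reaches significance. That is the arithmetic the MA authors are invoking; whether pooling is *appropriate* is a separate question that depends on the quality and comparability of the underlying trials.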
An audience member at a recent NYC Skeptics meeting asked me how I handled conflict surrounding strongly held beliefs that are not supported by conclusive evidence. As a dentist, he argued, he often witnessed professionals touting procedure A over procedure B as the “best way” to do X, when in reality there are no controlled clinical trials comparing A and B. “How am I to know what’s right in these circumstances?” he asked.
And this is more-or-less what I said:
The truth is, you probably can’t know which procedure is better. At least, not at this point in history. The beauty of science is that it’s evolving. We are constantly learning more about our bodies and our environment, so that we are getting an ever-clearer degree of resolution on what we see and experience.
It’s like having a blurry camera lens at a farm. At first we can only perceive that there are living things moving around on the other side of the lens – but as we begin to focus the camera, we begin to make out that the animals are in the horse or cattle family. With further focus we might be able to differentiate a horse from a cow… and eventually we’ll be able to tell if the horse has a saddle on it, and maybe one day we’ll be able to see what brand of saddle it is. Each scientific conundrum that we approach is often quite blurry at the outset. People get very invested in their theories of the presence or absence of cows, and whether or not the moving objects could in fact be horses. Others say that those looking through the camera contradict one another too much to be trusted – that they must be offering false ideas or willfully misleading people about the picture they’re describing.
In fact, we just have different degrees of clarity on issues at any given point in time. This is not cause for alarm, nor is it a reason to abandon our cameras. No, it just gives us more reason to continue to review, analyze, and revise our understanding of the picture at hand. We should try not to make more of a photo than we can at a given resolution – and understand that contradicting opinions are more likely to be evidence of insufficient information than of a fundamental flaw in the scientific method.
Last week I discussed a clinical trial comparing standardized acupuncture, individualized acupuncture, placebo-acupuncture, and usual care. In that discussion I emphasized the comparison between the three acupuncture groups, which did not show any difference in outcome. These results are consistent with the overall acupuncture literature, which shows in the better controlled trials that it does not matter where you stick the needles or even if you stick them through the skin. Therefore the scientific evidence fails to reject the null hypothesis (that acupuncture does not work). This did not stop the press from declaring, almost uniformly, that acupuncture works for back pain, contributing to the public misunderstanding of clinical science.
This week I am going to focus on the other aspect of the trial – the one the researchers and the press chose to focus on – the comparison of the two real and one placebo acupuncture arms to “usual care.” This too was misrepresented by the press, encouraged by the overinterpretation of the evidence by the researchers.
In the comments to Part I of this discussion David Gorski correctly pointed out that the study in fact did not even constitute a comparison of acupuncture to standard medical treatment. He is absolutely correct, and the many reasons for this are worth explaining in detail. Understanding the methodology of clinical trials, including all of its pitfalls and limitations, is central to science-based medicine. For practical and logistical reasons there is almost never a perfect clinical trial, but mischief only ensues when limitations are not understood, leading to a misinterpretation (and almost always an overinterpretation in the direction of the researcher’s bias) of the evidence.
I recently saw a 14-year-old girl in my office with a 2-day history of severe abdominal cramps, bloody diarrhea, and fever. Her mother had similar symptoms, as did several other members of her household and some family friends. After considerable discomfort, everyone recovered within a few days. The child’s stool culture grew a bacterium called Campylobacter.
Campylobacter is a nasty little pathogen which causes illness like that seen in my patient, but can also cause more severe disease. It is found commonly in both wild and domestic animals. But where did all these friends and family members get their Campylobacter infections? Why, from their friendly farmer, of course!
My patient’s family and friends had taken a weekend pilgrimage to a family-run working farm in Bucks County, Pennsylvania. They toured the farm and saw the animals. And they all drank raw milk. Why raw milk? Because, as they were told and led to believe, raw milk is better. Better tasting and better for you.
In 1862, the French chemist Louis Pasteur discovered that heating wine to just below its boiling point could prevent spoilage. Now this process (known as pasteurization) is used to reduce the number of dangerous infectious organisms in many products, prolonging shelf life and preventing serious illness and death. But a growing trend toward more natural foods and eating habits has led to an interest in unpasteurized foods such as milk and cheese. In addition to superior taste, many claim that raw milk products provide health benefits not found in the adulterated versions. Claims made about the “good bacteria” (like Lactobacillus) conquering the “bad” bacteria (like Campylobacter, Salmonella, and E. coli) in raw milk are pure fantasy. Some even claim that the drinking of mass-produced, pasteurized milk has resulted in an increase in allergies, heart disease, cancer, and a variety of other diseases. Again, this lacks any scientific credibility.
Alcoholics Anonymous is the most widely used treatment for alcoholism. It is mandated by the courts, accepted by mainstream medicine, and required by insurance companies. AA is generally assumed to be the most effective treatment for alcoholism, or at least “an” effective treatment. That assumption is wrong.
We hear about a few success stories, but not about the many failures. AA’s own statistics show that after 6 months, 93% of new attendees have left the program. The research on AA is handily summarized in a Wikipedia article. A recent Cochrane systematic review found no evidence that AA or other 12 step programs are effective.
In The Skeptic’s Dictionary, Bob Carroll comments:
Neither A.A. nor many other SATs [Substance Abuse Treatments] are based on science, nor do they seem interested in doing any scientific studies which might test whether the treatment they give is effective.
The previous post of this series analyzed the results of the 1994 Pediatrics paper purporting to show a statistically significant effect of homeopathic preparations on acute childhood diarrhea in a population in Nicaragua. That clinical trial followed a pilot study that also had shown a small but statistically significant effect of homeopathic remedies.
A moment here to explain why I am going through these old studies. Reports like the four or five in this series made headlines. They are also so well cloaked in manipulated data and overdrawn conclusions that the press and even academicians accept their conclusions – and overdraw them even further. This is still going on.
Over the past thirty years some of us informally and gradually developed semi-systematic ways of analyzing these increasingly scientific-appearing claims of sectarians (sCAMmers). Errors, inconsistencies and falsifications we recognize now were not so obvious decades ago. SCAMmers developed imaginative new methods as their fields progressed. We in the science-based or knowledge-based medicine field have been trailing along, detecting their tricks and twists as they developed, and like street sweepers behind horses, picking up their excrement (a metaphor to force attention). Yesterday’s lucid post on the latest acupuncture study by Steve Novella exemplifies this expertise (no offense intended).
On May 9th I had the pleasure of lecturing to an audience of critical thinkers at the NYC Skeptics meeting. The topic of discussion was pseudoscience on the Internet – and I spent about 50 minutes talking about all the misleading health information and websites available to (and frequented by) patients. The common denominator for most of these well-intentioned but misguided efforts is a fundamental lack of understanding of the scientific method, and the myriad ways that humans can fool ourselves into perceiving a cause and effect relationship between unrelated phenomena.
But most importantly, we had the chance to touch upon a theme that has been troubling me greatly over the past couple of years: the rise in influence of those untrained in science on matters of medicine. I have been astonished by the ability of “thought leaders” like Jenny McCarthy to gain a broad platform of influence (i.e. Oprah Winfrey’s TV network) despite her obviously flawed beliefs about the pathophysiology of autism. Why is it so hard to find a medical voice of reason in mainstream media?
The answer is probably related to two issues: first, good science makes bad television, and second, physicians are going about PR and communications in the wrong way. We are taught to put emotions aside as we carefully weigh evidence to get to the bottom of things. But we are not taught to reinfuse the subject with emotion once we’ve come to an impartial consensus. Instead, we tend to bicker about statistical analyses, and alienate John Q. Public with what appears to him as academic minutiae and hair-splitting.
I’m not sure what we can or should offer in place of our “business as usual” behavior – but I’ve noticed that being right isn’t the same as being influential. I wonder how we can better advance the cause of science (for the sake of public health at a minimum) to an audience drawn more to passion than to substance?
I would really enjoy your input, dear readers of Science Based Medicine, because I’m at a loss as to what we should do next to reach people in our current culture, and with new communications platforms. What would you recommend?
A new study which randomized 638 adults to one of four arms – standard acupuncture, individualized acupuncture, placebo acupuncture using toothpicks that did not penetrate the skin, or standard therapy – found exactly what previous evidence has also suggested: it does not seem to matter where you stick the needles or even if you stick the needles through the skin. The only reasonable scientific conclusion to draw from this is that acupuncture does not work.
But let me back up a minute. Imagine if we were evaluating the efficacy of a new pain drug. This drug, when tested in open trials (no blinding or control) has an effect on reducing pain – it is superior to no treatment. When compared to a placebo, however, the drug is no more effective than the placebo, although both are more effective than no treatment.
Now imagine that the pharmaceutical company that manufactures this drug sends out a press release declaring that their drug is effective for pain, but that their research shows that a placebo of their drug is also effective (FDA applications are pending). Therefore more research is needed to determine how their drug works. Would you buy it?
That is the exact situation we are facing with acupuncture research.
There is no question that patients on insulin benefit from home monitoring. They need to adjust their insulin dose based on their blood glucose readings to avoid ketoacidosis or insulin shock. But what about patients with non-insulin dependent diabetes, those who are being treated with diet and lifestyle changes or oral medication? Do they benefit from home monitoring? Does it improve their blood glucose levels? Does it make them feel more in control of their disease?
This has been an area of considerable controversy. Various studies have given conflicting results. Those studies have been criticized for various flaws: some were retrospective, non-randomized, not designed to rule out confounding factors, high drop-out rate, subjects already had well-controlled diabetes, etc. A systematic review showed no benefit from monitoring. So a new prospective, randomized, controlled, community based study was designed to help resolve the conflict.
My first post on this blog addressed the problem of what I have called “fake diseases” (a problem which needs a more neutral moniker). As I wrote at the time, people suffering from vague ailments are often twice victimized: the medical establishment cannot satisfy them, and quacks prey on them. There’s a certain sense of satisfaction and validation to having your symptoms clearly labeled. While it isn’t a good thing to have heart disease, no one tells you you’re not sick. Not so with people with more vague and protean symptoms. It’s human nature to want answers, to try to understand patterns, and when we, as physicians, cannot help someone understand their symptoms, they’re going to reach out to others for answers.
The Lyme disease community is like that. The internet has helped them to form communities and to share information. This whole idea of “chronic Lyme disease” (CLD) has become a way for people who don’t feel they have a medical home to come together. I understand that impulse. Any human being should be able to understand it.
But the other side of me, the analytic side, has a problem with it. No, not a problem with people supporting each other, but if you read these websites, message boards, etc., you can see a certain commonality—people aren’t getting any better. They are still suffering. Much of that suffering is blamed on a heartless medical community, and when they find a “Lyme literate” doctor, there is a huge sense of relief. But the symptoms often continue.
The very idea of CLD is not implausible (as opposed to Morgellons and other such fake diseases). Other spirochetes give us models for diseases with extended, multi-system effects, syphilis being the most studied. One of the key concepts in science-based medicine is plausibility, because, as Dr. Harriet Hall puts it, no matter how much you study the characteristics of the tooth fairy, you still haven’t proven her existence. But CLD certainly has a plausibility to it, and if an idea is plausible, then it is certainly worth studying and gathering evidence.