
Humans have the very odd ability to hold contradictory, even mutually exclusive, ideas in their brains at the same time. There are two basic processes at work to make this possible. The first is compartmentalization – the ideas are simply kept separate. They are trains on different tracks that never cross. We can switch from one to the other, but they never crash into each other.

When contradictory ideas do come into conflict, this causes what psychologists call “cognitive dissonance.” Cognitive dissonance is an unpleasant state, and we typically relieve it through the second process – rationalization. We happily make up reasons why the two conflicting ideas actually don’t conflict at all. People are generally good at rationalization. It is a supreme intellectual irony that greater intelligence often leads to a greater ability to rationalize with both complexity and subtlety, and therefore a greater capacity to maintain contradictory beliefs.

In fact the demarcation between science and pseudoscience is often determined by the difference between sound scientific reasoning and sophisticated rationalization.

While cognitive dissonance refers to a process that takes place within a single mind, it is a good metaphor for the contradictory impulses of groups of people, like cultures or institutions. I could not help but invoke this metaphor when reading two editorials published on the same day in the New York Times.

The first article is by Gina Kolata (I wonder if her parents really liked tropical drinks), and is titled: Searching for Clarity: A Primer on Medical Studies. This is an excellent article – I would have been happy to see such an article published on SBM, in fact.

Kolata begins with the story of beta carotene and cancer; of how the basic science, the animal studies, and the population-based data all looked as if beta carotene would significantly decrease cancer risk. Then came the large, randomized, placebo-controlled clinical trials, which showed that not only does beta carotene not protect against cancer, it may increase the risk a little. She uses this as a jumping-off point to talk about the different types of clinical evidence, and why it is important to control for variables.

The focus of her article gets it just right – all the fuss is about trying to figure out what actually works, as reliably as possible. Basic science tells us about possible mechanisms of action. Animal studies give us confidence that a treatment is safe and promising enough to invest in human research. Population-based studies can be helpful, but they are preliminary because it is impossible to foresee all possible confounding factors. For example, perhaps people who take beta carotene supplements, or eat more fruits and vegetables, take care of themselves better in general.

RCTs (randomized controlled trials) are the gold standard because they keep all variables between the compared groups the same except for the one variable – the treatment – that is being studied.
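To make the confounding problem concrete, here is a minimal simulation sketch using entirely made-up numbers of my own (not data from the actual beta carotene research): a “healthy user” trait drives both supplement use and a lower baseline cancer risk, so an observational comparison shows an apparent benefit even though the supplement does nothing, while randomizing the supplement erases the difference.

```python
# A toy "healthy user" confounder: health-consciousness drives both supplement
# use and lower cancer risk, while the supplement itself has no effect.
# Observational comparison shows a spurious benefit; randomization removes it.
# All numbers are hypothetical, for illustration only.
import random

random.seed(0)
N = 200_000

def simulate(randomize):
    outcomes = {True: [0, 0], False: [0, 0]}  # takes_supplement -> [cancer cases, group size]
    for _ in range(N):
        health_conscious = random.random() < 0.5
        if randomize:
            takes_supplement = random.random() < 0.5          # assigned by coin flip
        else:
            takes_supplement = random.random() < (0.8 if health_conscious else 0.2)
        base_risk = 0.05 if health_conscious else 0.10        # the confounder affects risk
        cancer = random.random() < base_risk                  # supplement plays no role
        outcomes[takes_supplement][0] += cancer
        outcomes[takes_supplement][1] += 1
    return {group: cases / n for group, (cases, n) in outcomes.items()}

print("observational:", simulate(randomize=False))  # supplement users look protected (~6% vs ~9%)
print("randomized:   ", simulate(randomize=True))   # the apparent protection vanishes (~7.5% vs ~7.5%)
```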

I was pleasantly surprised when Kolata then went a step beyond just laying out the advantages of RCTs. For the first time that I have personally seen in a mainstream outlet, she relates the importance of prior probability. She even talks about Bayes’ theorem – analyzing a claim based upon prior probability and the new data. This is specifically what we advocate as science-based medicine, and it distinguishes SBM from evidence-based medicine (EBM), which does not consider prior probability.

In this section she primarily quotes Dr. Steven Goodman from Johns Hopkins:

But if one clinical trial tests something that is plausible, with a lot of supporting evidence to back it up, and another tests something implausible, the trial testing a plausible hypothesis is more credible even if the two studies are similar in size, design and results. The guiding principle, Dr. Goodman says, is that “things that have a good reason to be true and that have good supporting evidence are likely to be true.”

It gets better. Dr. Goodman often uses studies purporting to show the efficacy of prayer to demonstrate the need to consider prior probability. She relates:

The reason for the skepticism, Dr. Goodman says, is not that the students are enemies of religion. It is that there is no plausible scientific explanation of why prayer should have that effect. When no such explanation or evidence exists, the bar is higher. It takes more clinical trial evidence to make a result credible.

That is exactly correct – prior probability should determine where we set the bar for evidence. To ignore that is to pretend, at the beginning of each study, that we know absolutely nothing (which is very different from understanding that we don’t know everything).
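To see how the arithmetic works, here is a minimal sketch of that Bayesian reasoning with toy numbers of my own choosing (a trial with 80% power and a 5% false-positive rate, and two hypothetical priors), not anything taken from Kolata’s article:

```python
# A toy Bayes' theorem calculation: how much a single "positive" trial should
# move us, depending on how plausible the hypothesis was to begin with.
# All numbers are illustrative assumptions, not data from any real study.

def posterior(prior, power=0.8, alpha=0.05):
    """P(treatment works | positive trial).

    power = P(positive trial | treatment works)
    alpha = P(positive trial | treatment is inert), i.e. a false positive
    """
    return (prior * power) / (prior * power + (1 - prior) * alpha)

print(posterior(0.5))   # plausible hypothesis (50% prior): ~0.94, fairly convincing
print(posterior(0.01))  # implausible hypothesis (1% prior): ~0.14, still probably false
```

Same trial, same data; the only thing that changes is the prior plausibility, and that is exactly why implausible claims need a higher bar of evidence.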

Quoting the former director of the National Cancer Institute, she concludes:

“The major message,” Dr. Klausner said, “is that no matter how compelling and exciting a hypothesis is, we don’t know whether it works without clinical trials.”

As far as this article went, I have nothing but praise. But while reading the article I could not help feeling that there was an enormous elephant in the room – and Kolata completely ignored the elephant. I don’t know if she was aware of the elephant or not. In other words, was the elephant in a different room or compartment of her mind, or was she rationalizing away the elephant? I suspect it was compartmentalized away.

That elephant is complementary and alternative medicine (CAM). Imagine applying Dr. Goodman’s and Dr. Klausner’s philosophy to anything considered CAM. I find it very intriguing that Kolata did not consider it pertinent to even bring it up.

What this drives home is a point I have tried to make for years – CAM is about creating a separate world with its own rules, rules designed to be easy; to give CAM modalities a free pass, because they cannot pass the rigorous rules of science. Kolata wrote her article in the world of mainstream scientific medicine. She described, without apology or even the need for excessive justification, what constitutes good standards of science in medicine. And yet it is completely at odds with the philosophy espoused by CAM proponents – who want to function in a separate specially-created compartment where there is, in effect, no standard.

Almost as if to make my point for me, the very same day William Broad published a separate article in the New York Times titled: Applying Science to Alternative Medicine. Despite the title, the article is actually an apology for CAM’s lack of science, and a repetition of the tired promises of better science to come. He writes:

But while sweeping claims are made for these treatments, the scientific evidence for them often lags far behind: studies and clinical trials, when they exist at all, can be shoddy in design and too small to yield reliable insights.

This is not an unreasonable statement, but I noted that the “lag behind” comment suggests that the problem is with the evidence, not the treatments. The evidence is “lagging” but hopefully (with enough support) will catch up to the treatments. It subtly presupposes that CAM treatments work. Perhaps the evidence is not lagging, but correctly showing us that many of these treatments are worthless, despite the anecdotal evidence, which is misleading.

After correctly describing that the majority of studies used to promote CAM are small or poorly designed, Broad writes:

Critics of alternative medicine have seized on that weakness. R. Barker Bausell, a senior research methodologist at the University of Maryland and the author of “Snake Oil Science” (Oxford, 2007), says small studies often have a built-in conflict of interest: they need to show positive results to win grants for larger investigations.

That’s a nice quote from Bausell, but notice that Broad characterizes CAM critics as having “seized” on that weakness. Rather, scientists are critical of CAM because of these weaknesses.

But the primary focus of the article was to claim that the National Center for Complementary and Alternative Medicine (NCCAM) is raising the standard of scientific evidence within CAM. To make this point Broad quotes a few proponents and cherry picks a few examples, without really examining the bigger issues of quality or the net effect of the NCCAM on the quality of science within CAM. Forget any notion of prior probability.

He notes, for example, that there is an ongoing study of ginkgo biloba in Alzheimer’s disease. I agree that the best, and perhaps only worthwhile, studies to come out of the NCCAM relate to herbs. This is because herbs are plausible treatments – they are pharmacological agents. In my opinion, this just highlights the confusion generated by the very notion of CAM – what is it, exactly?

He then relates a 2004 study of acupuncture for knee osteoarthritis, claiming it as a success for a CAM modality. We can quibble about the study itself – the effect sizes were actually quite small, and the 25% dropout rate in the acupuncture and sham acupuncture groups could wipe out the statistical significance.
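For readers who want to see why that much attrition matters, here is a rough sensitivity-analysis sketch with purely hypothetical numbers (not the actual trial data): run the comparison on completers only, then impute the dropouts against the hypothesis and see whether the conclusion survives.

```python
# Hypothetical numbers for illustration only, NOT the actual Berman trial data.
# The point is the method: a completers-only analysis can look statistically
# significant, yet a worst-case imputation of the dropouts (acupuncture-arm
# dropouts counted as failures, sham-arm dropouts as successes) shows how
# fragile that conclusion is when 25% of subjects are missing.
from math import sqrt
from statistics import NormalDist

def two_prop_p(x1, n1, x2, n2):
    """Two-sided p-value for a two-proportion z-test (no continuity correction)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Completers only: 150 of 200 randomized per arm finish the trial.
print(two_prop_p(95, 150, 75, 150))   # ~0.02, nominally significant

# Worst-case imputation of the 50 dropouts per arm:
print(two_prop_p(95, 200, 125, 200))  # the apparent advantage disappears (and reverses)
```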

More telling, however, is that even though Broad quotes Dr. Berman, who conducted the study, neither he nor Dr. Berman mentions the later meta-analysis in which Dr. Berman and the other study authors concluded:

Sham-controlled trials show clinically irrelevant short-term benefits of acupuncture for treating knee osteoarthritis. Waiting list-controlled trials suggest clinically relevant benefits, some of which may be due to placebo or expectation effects.

So maybe acupuncture does not have any specific effects for knee osteoarthritis after all. Perhaps it is just another beta carotene episode, where promising early research does not pan out. But the reader never hears about this from Broad, nor any discussion of the huge plausibility problem with acupuncture and most of CAM.

Broad ends with this:

“In tight funding times, that’s going to get worse,” said Dr. Khalsa of Harvard, who is doing a clinical trial on whether yoga can fight insomnia. “It’s a big problem. These grants are still very hard to get and the emphasis is still on conventional medicine, on the magic pill or procedure that’s going to take away all these diseases.”

In the end he claims the problem is an unjustified focus on “the magic pill or procedure.” That’s quite a straw man. Perhaps the grants are hard to come by because there are researchers like Dr. Goodman who understand the proper role of plausibility and prior probability in deciding where limited research dollars should be spent.

I have to wonder if either Kolata or Broad knew of the other’s article prior to publication, or what they would think of the other’s article. But more importantly, what about the health editors who published both articles? Were they compartmentalizing or rationalizing?

Or is it just accepted these days that we live with two worlds simultaneously – the world of mainstream medicine where the rules of rigorous science and logic apply, and the alternative world of CAM where the rules are whatever you need them to be?


Posted by Steven Novella

Founder and currently Executive Editor of Science-Based Medicine Steven Novella, MD is an academic clinical neurologist at the Yale University School of Medicine. He is also the host and producer of the popular weekly science podcast, The Skeptics’ Guide to the Universe, and the author of the NeuroLogicaBlog, a daily blog that covers news and issues in neuroscience, but also general science, scientific skepticism, philosophy of science, critical thinking, and the intersection of science with the media and society. Dr. Novella also has produced two courses with The Great Courses, and published a book on critical thinking - also called The Skeptics Guide to the Universe.