Articles

Cognitive Dissonance at the New York Times

Humans have the very odd ability to hold contradictory, even mutually exclusive, ideas in their brains at the same time. There are two basic processes at work to make this possible. The first is compartmentalization – the ideas are simply kept separate. They are trains on different tracks that never cross. We can switch from one to the other, but they never crash into each other.

When contradictory ideas do come into conflict, this causes what psychologists call “cognitive dissonance.” We typically relieve cognitive dissonance, which is an unpleasant state, through the second process – rationalization. We happily make up reasons why the two conflicting ideas actually don’t conflict at all. People are generally good at rationalization. It is a supreme intellectual irony that greater intelligence often brings a greater ability to rationalize with complexity and subtlety, and therefore a greater capacity to maintain contradictory beliefs.

In fact the demarcation between science and pseudoscience is often determined by the difference between sound scientific reasoning and sophisticated rationalization.

While cognitive dissonance refers to a process that takes place within a single mind, it is a good metaphor for the contradictory impulses of groups of people, like cultures or institutions. I could not help but invoke this metaphor when reading two editorials published on the same day in the New York Times.

The first article is by Gina Kolata (I wonder if her parents really liked tropical drinks), and is titled: Searching for Clarity: A Primer on Medical Studies. This is an excellent article – in fact, I would have been happy to see such an article published on SBM.

Kolata begins with the story of beta carotene and cancer; of how the basic science, the animal studies, and the population-based data all looked as if beta carotene would significantly decrease cancer risk. Then came the large, randomized, placebo-controlled clinical trials showing that not only does beta carotene not protect against cancer, it may slightly increase risk. She uses this as a jumping-off point to talk about the different types of clinical evidence, and why it is important to control for variables.

The focus of her article gets it just right – all the fuss is about trying to figure out what actually works, as reliably as possible. Basic science tells us about possible mechanisms of action. Animal studies give us confidence that a treatment is safe and promising enough to invest in human research. Population-based studies can be helpful, but they are preliminary because it is impossible to foresee all possible confounding factors. For example, perhaps people who take beta carotene supplements, or eat more fruits and vegetables, take care of themselves better in general.

RCTs (randomized controlled trials) are the gold standard because they keep all variables between compared groups the same except for the one variable – the treatment – that is being studied.

I was pleasantly surprised when Kolata then went a step beyond just laying out the advantages of RCTs. For the first time that I have personally seen in a mainstream outlet, she relates the importance of prior probability. She even talks about Bayes’ theorem – analyzing a claim based upon prior probability and the new data. This is specifically what we advocate as science-based medicine and distinguishes SBM from evidence-based medicine (EBM), which does not consider prior probability.

In this section she primarily quotes Dr. Steven Goodman from Johns Hopkins:

But if one clinical trial tests something that is plausible, with a lot of supporting evidence to back it up, and another tests something implausible, the trial testing a plausible hypothesis is more credible even if the two studies are similar in size, design and results. The guiding principle, Dr. Goodman says, is that “things that have a good reason to be true and that have good supporting evidence are likely to be true.”

It gets better. Dr. Goodman often uses studies purporting to show the efficacy of prayer to demonstrate the need to consider prior probability. She relates:

The reason for the skepticism, Dr. Goodman says, is not that the students are enemies of religion. It is that there is no plausible scientific explanation of why prayer should have that effect. When no such explanation or evidence exists, the bar is higher. It takes more clinical trial evidence to make a result credible.

That is exactly correct – prior probability should be used to know where to set the bar for evidence.  To ignore that is to pretend, at the beginning of each study, that we know absolutely nothing (which is very different from understanding that we don’t know everything).
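The bar-setting point can be put in rough numbers. Below is a minimal sketch of Bayes’ theorem in odds form (posterior odds = prior odds × Bayes factor). The priors and the Bayes factor are invented purely for illustration – they are not taken from either article or from any actual trial.

```python
def posterior_prob(prior, bayes_factor):
    """Bayes' theorem in odds form: posterior odds = prior odds * Bayes factor.
    The Bayes factor is the likelihood ratio of the trial data under the
    hypothesis versus under the null."""
    prior_odds = prior / (1.0 - prior)
    post_odds = prior_odds * bayes_factor
    return post_odds / (1.0 + post_odds)

# The identical positive trial (a hypothetical Bayes factor of 20)
# applied to two hypotheses with very different prior plausibility:
plausible = posterior_prob(0.30, 20)     # good mechanism and supporting science
implausible = posterior_prob(0.001, 20)  # no plausible mechanism (e.g., prayer)

print(f"plausible:   {plausible:.2f}")
print(f"implausible: {implausible:.2f}")
```

With these made-up numbers, the same trial lifts the plausible hypothesis to roughly a 90% posterior probability while leaving the implausible one near 2% – which is exactly why an implausible claim needs far more clinical trial evidence to clear the bar.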

Quoting the former director of the National Cancer Institute, she concludes:

“The major message,” Dr. Klausner said, “is that no matter how compelling and exciting a hypothesis is, we don’t know whether it works without clinical trials.”

As far as this article went, I have nothing but praise. But while reading the article I could not help feeling that there was an enormous elephant in the room – and Kolata completely ignored the elephant. I don’t know if she was aware of the elephant or not. In other words, was the elephant in a different room or compartment of her mind, or was she rationalizing away the elephant? I suspect it was compartmentalized away.

That elephant is complementary and alternative medicine (CAM). Imagine applying Dr. Goodman’s and Dr. Klausner’s philosophy to anything considered CAM. I find it very intriguing that Kolata did not consider it pertinent to even bring it up.

What this drives home is a point I have tried to make for years – CAM is about creating a separate world with its own rules, rules designed to be easy; to give CAM modalities a free pass, because they cannot pass the rigorous rules of science. Kolata wrote her article in the world of mainstream scientific medicine. She described, without apology or even the need for excessive justification, what constitutes good standards of science in medicine. And yet it is completely at odds with the philosophy espoused by CAM proponents – who want to function in a separate specially-created compartment where there is, in effect, no standard.

Almost as if to make my point for me, the very same day William Broad published a separate article in the New York Times titled: Applying Science to Alternative Medicine. Despite the title, the article is actually an apology for CAM’s lack of science, and a repetition of the tired promises of better science to come. He writes:

But while sweeping claims are made for these treatments, the scientific evidence for them often lags far behind: studies and clinical trials, when they exist at all, can be shoddy in design and too small to yield reliable insights.

This is not an unreasonable statement, but I noted that the “lag behind” comment suggests that the problem is with the evidence, not the treatments. The evidence is “lagging” but hopefully (with enough support) will catch up to the treatments. It subtly presupposes that CAM treatments work. Perhaps the evidence is not lagging, but correctly showing us that many of these treatments are worthless, despite the anecdotal evidence, which is misleading.

After correctly describing that the majority of studies used to promote CAM are small or poorly designed, Broad writes:

Critics of alternative medicine have seized on that weakness. R. Barker Bausell, a senior research methodologist at the University of Maryland and the author of “Snake Oil Science” (Oxford, 2007), says small studies often have a built-in conflict of interest: they need to show positive results to win grants for larger investigations.

That’s a nice quote from Bausell, but notice that Broad characterizes CAM critics as “seizing on” the weakness of CAM clinical trials. Rather, scientists are critical of CAM because of these weaknesses.

But the primary focus of the article was to claim that the National Center for Complementary and Alternative Medicine (NCCAM) is raising the standard of scientific evidence within CAM. To make this point Broad quotes a few proponents and cherry picks a few examples, without really examining the bigger issues of quality or the net effect of the NCCAM on the quality of science within CAM. Forget any notion of prior probability.

He notes, for example, that there is an ongoing study of ginkgo biloba in Alzheimer’s disease. I agree that the best, and perhaps only worthwhile, studies to come out of the NCCAM relate to herbs. This is because herbs are plausible treatments – they are pharmacological agents. In my opinion, this just highlights the confusion generated by the very notion of CAM – what is it, exactly?

He then relates a 2004 study of acupuncture for knee osteoarthritis, claiming it as a success for a CAM modality. We can quibble about the study itself – the effect sizes were actually quite small, and the 25% dropout rate in the acupuncture and sham acupuncture groups could wipe out the statistical significance.

More telling, however, is that even though he quotes Dr. Berman, who conducted the study, neither he nor Dr. Berman mentions the later meta-analysis in which Dr. Berman and the other study authors concluded:

Sham-controlled trials show clinically irrelevant short-term benefits of acupuncture for treating knee osteoarthritis. Waiting list-controlled trials suggest clinically relevant benefits, some of which may be due to placebo or expectation effects.

So maybe acupuncture does not have any specific effects for knee osteoarthritis after all. Perhaps it is just another beta carotene episode, where promising early research does not pan out. But the reader never hears about this from Broad, nor any discussion of the huge plausibility problem with acupuncture and most of CAM.

Broad ends with this:

“In tight funding times, that’s going to get worse,” said Dr. Khalsa of Harvard, who is doing a clinical trial on whether yoga can fight insomnia. “It’s a big problem. These grants are still very hard to get and the emphasis is still on conventional medicine, on the magic pill or procedure that’s going to take away all these diseases.”

In the end he claims the problem is an unjustified focus on “the magic pill or procedure.” That’s quite a straw man. Perhaps the grants are hard to come by because there are researchers like Dr. Goodman who understand the proper role of plausibility and prior probability in deciding where limited research dollars should be spent.

I have to wonder if either Kolata or Broad knew of the other’s article prior to publication, or what they would think of the other’s article. But more importantly, what about the health editors who published both articles? Were they compartmentalizing or rationalizing?

Or is it just accepted these days that we live with two worlds simultaneously – the world of mainstream medicine where the rules of rigorous science and logic apply, and the alternative world of CAM where the rules are whatever you need them to be?

Posted in: Clinical Trials, Science and Medicine


10 thoughts on “Cognitive Dissonance at the New York Times”

  1. jonny_eh says:

    Why did Gina Kolata not mention CAM? Maybe she didn’t even think of it, since she’s never encountered it in her field (unlikely). Or more likely, she didn’t need to mention it, it should be obvious to anyone reading the article that it applies. Maybe she thought that by mentioning CAM she’d be dismissed outright by the people she was trying to reach with the article. Maybe she has a followup article about CAM.

    BTW, Dr. Novella, when are you going to get an article published in the NYT?

  2. revmatty says:

    Or maybe she didn’t want to mention it because CAM proponents would then try to claim that her article validated their modalities.

  3. clgood says:

    She didn’t mention unbalanced humors nor bloodletting, either. My own opinion is that she kept the article tightly focused. But it could also be said that she’s done an excellent job at laying the groundwork in the mind of the reader to apply the same to CAM when he encounters it, but without doing anything to provoke the CAMmies.

    That the NYT suffers from cognitive dissonance has been clear for years. Maybe it’s just now invading the science section.

    Excellent post. And thanks for linking to Kolata’s article. I, too, wonder if her folks like getting caught in the rain.

  4. To clarify – I did not actually criticize Kolata for the focus of her article, but rather was pointing out how CAM has created this double standard. There are now two worlds – regular science, and alternative.

    I do think, however, that those of us who promote scientific medicine cannot simply ignore this other world of CAM.

    My goal was to bring these two worlds smashing together, so at least to break down this compartmentalization. We can then deal with the cognitive dissonance – hopefully by doing the one thing people seem to avoid at all costs, changing minds.

  5. daijiyobu says:

    Well, on the positive side, the Kitzmiller case was a great example of such a collision, leading to accountability and clarity — in the legal arena.

    What has been sickening me for years is that in the academic arena — and I’ve experienced this firsthand (see http://theeducationrobbers.blogspot.com/ ) — CAM has not been held accountable for its blatant misrepresentations.

    -r.c.

  6. One of the several reasons that we decided to investigate the Trial to Assess Chelation Therapy (TACT) is that it is, by far, the largest (2000 subjects) and most expensive ($30 million) “CAM” study yet funded by the NIH. The trial is unacceptable in every way: politically motivated, pushed through without competent scientific review, unscientific, unethical, dangerous, and a waste of taxpayers’ money. By itself it is a compelling refutation of well-meaning but naive support for the NCCAM (by Broad and many others), and against disingenuous support from “CAM” insiders who either do or ought to know better.

    The latter group is represented by two opinion pieces in Science Magazine, 2006, the first by the late NCCAM director Stephen Straus and NCCAM functionary Margaret Chesney, [1] the second by a group of “CAM” proponents at conspicuous medical schools.[2]

    Those two pieces were written in response to an entirely reasonable (too polite, if anything) critique of the NCCAM by Don Marcus and Art Grollman, who criticized the scientific review process at the NCCAM, citing the TACT (among other misadventures) as an example. [3] The responses of Straus et al didn’t bother to address the TACT. Rather, they claimed, erroneously or misleadingly (depending on the topic), that

    “NCCAM’s peer-review process is the same as other NIH institutes, i.e., content experts review applications in their area of expertise….NCCAM’s investigator-initiated R01 grant applications are reviewed by study sections convened by the NIH Center for Scientific Review; thus, they compete on an even playing field with all other applications to NIH. All members of NIH peer-review panels and advisory councils, including those at NCCAM, adhere to NIH policies concerning conflict of interest. The NCCAM Advisory Council acts as a second level of review.” (from Straus and Chesney)

    And:

    “The processes through which proposals are submitted, reviewed, funded, and managed are all consistent with standard NIH practice.” (from Folkman et al)

    Our investigation of the TACT refutes each of those assertions. [4] Ironically, after the article was published in May, Stephen Barrett of Quackwatch wrote Gina Kolata to urge that the Times report it. She declined, as did Science magazine.

    1. http://www.ncbi.nlm.nih.gov/pubmed/16857924?ordinalpos=1&itool=EntrezSystem2.PEntrez.Pubmed.Pubmed_ResultsPanel.Pubmed_DiscoveryPanel.Pubmed_RVAbstractPlus

    2. http://www.ncbi.nlm.nih.gov/pubmed/17110556?ordinalpos=1&itool=EntrezSystem2.PEntrez.Pubmed.Pubmed_ResultsPanel.Pubmed_DiscoveryPanel.Pubmed_Discovery_RA&linkpos=5&log$=relatedarticles&logdbfrom=pubmed

    3. http://www.ncbi.nlm.nih.gov/pubmed/16857923?ordinalpos=1&itool=EntrezSystem2.PEntrez.Pubmed.Pubmed_ResultsPanel.Pubmed_DiscoveryPanel.Pubmed_RVAbstractPlus

    4. http://www.ncbi.nlm.nih.gov/entrez/utils/fref.fcgi?PrId=3494&itool=AbstractPlus-nondef&uid=18596934&db=pubmed&url=http://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pubmed&pubmedid=18596934

  7. Wallace Sampson says:

    Just info:

    Gina Kolata and William Broad are paid writers for the NYT, both employed full time (although Ms. Kolata may be freelancing lately.) Outsiders are rarely allowed op ed voice. Before the recent politicization of NYT, several associates got published – Bob Park and Paul Gross – both on science fraud.

    Kolata knows us medical skeptics well and has interviewed several of us. Mr. Broad to my knowledge has not (he might have – I don’t know.) But that might account for some difference in orientation. Broad is obviously naive to the problems of NCCAM, and probably has not seen most of our writings from SRAM to Academic Medicine to NEJMed.
    BTW, that prayer paper of Dr. Goodman’s seems like the Cha/Wirth one some of us analyzed (Bruce Flamm, et al) in Scientific Review, Skeptical Inquirer, etc. We no doubt had an accurate handle on that paper’s prior probability, having been on D. Wirth’s trail for years.
    Both Kolata and Broad were reporters for years at Science. Broad has written a popular book on The Oracle at Delphi and Kolata several – including on the great flu epidemic, and on Dolly the clone. I thought Broad had written on research fraud, but could not find same.

  8. Karl Withakay says:

    >>>In the end he claims the problem is an unjustified focus on “the magic pill or procedure.”

    He must have been looking in a mirror rather than through a window when he made that observation; isn’t that what CAM is all about?
