
“There must be a reason,” or how we support our own false beliefs

For a change of pace, I want to step back from medicine for this post, although, as you will see (I hope), the study I’m going to discuss has a great deal of relevance to the topics covered regularly on this blog. One of the most frustrating aspects of being a skeptic and championing science-based medicine is just how unyielding belief in pseudoscience is. Whatever realm of science I wander into in which there is pseudoscience, I find beliefs that simply will not yield to science or reason. Whether it be creationism, quackery such as homeopathy, the anti-vaccine movement, the “9/11 Truth” movement, moon hoaxers, or any of a number of other pseudoscientific movements and conspiracy theories, any skeptic who ventures into discussions of such a topic with believers will become very frustrated very fast. It takes a lot of tenacity to keep going back to the well to argue the same points over and over again and refute the same nonsense you’ve refuted over and over again. Many do not have sufficient stick-to-it-iveness, leading them to throw up their hands and withdraw from the fight.

Although some of us here have blamed this phenomenon on “cultishness” and, make no mistake, I do think that there is an element of that in many of these movements, particularly the anti-vaccine movement, cultishness alone can’t explain why people hold on so hard to beliefs that are clearly not supported by science or evidence, such as the belief that vaccines are responsible for an “autism epidemic.” Then last week, what should pop up in the newsfeeds that I regularly monitor but a rather interesting article in Science Daily entitled How We Support Our False Beliefs. It was a press release about a study1 that appeared a few months ago in Sociological Inquiry, and the study was described thusly:

In a study published in the most recent issue of the journal Sociological Inquiry, sociologists from four major research institutions focus on one of the most curious aspects of the 2004 presidential election: the strength and resilience of the belief among many Americans that Saddam Hussein was linked to the terrorist attacks of 9/11.

Although this belief influenced the 2004 election, they claim it did not result from pro-Bush propaganda, but from an urgent need by many Americans to seek justification for a war already in progress.

The findings may illuminate reasons why some people form false beliefs about the pros and cons of health-care reform or regarding President Obama’s citizenship, for example.

Let me make one thing plain right here and right now, before I delve into the study itself. This analysis has very little to do with whether invading Iraq preemptively was a good idea or not, whether it was right or not. I am not even going to address that question, as it is not an appropriate topic for SBM. However, the study does use as a model the widespread belief, from 2001 to 2004, that Iraq’s ruler Saddam Hussein had aided and abetted the terrorists who on 9/11/2001 crashed jet liners into the World Trade Towers, the Pentagon, and into a field in Pennsylvania when the passengers fought back. When I read this study, I immediately realized that its results might be more generalizable and in particular might apply to the movements promoting unscientific medical practices that we routinely encounter at SBM. The widespread belief, persisting even to this day, that Saddam Hussein somehow had a hand in 9/11 is, in its resistance to evidence, very much like the beliefs of the anti-vaccine movement, or belief in quackery and other pseudoscience. As one of the investigators, Steven Hoffman, put it, “This misperception that Hussein was responsible for the Twin Tower terrorist attacks was very persistent, despite all the evidence suggesting that no link existed.”

One problem with this study is that the authors sidestep the question of whether this mistaken belief was due to propaganda or not. Although it has been argued by many that the Bush administration promoted such a connection by innuendo and misinformation, this study suggests that there was a lot more to it than that. However, in a way, that’s not a huge problem, as this study is not so much about how false beliefs come to be widespread in the first place but rather why human beings keep holding on to them after evidence showing them not to be true is presented. As in the case of quackery, how belief in pseudoscience comes to be and why humans continue to cling to that belief even in the face of overwhelming contradicting evidence are two separate, but related, problems. Right in the introduction, the authors of this study lay the conflict on the line in a way that one could see as relevant to SBM as well:

Explanations for this have generally suggested that the misperception of a link resulted from a campaign of innuendo carried out by the Bush administration that explicitly and implicitly linked Saddam with Al Qaeda. For example, Gershkoff and Kushner (2005:525) argue that “the Bush administration successfully convinced [a majority of the public] that a link existed between Saddam Hussein and terrorism generally, and between Saddam Hussein and Al Qaeda specifically.” We characterize this explanation as being about the information environment: it implies that if voters had possessed the correct information, they would not have believed in the link. Underlying this explanation is a psychological model of information processing that scholars have labeled “Bayesian updating,” which envisions decision makers incrementally and rationally changing their opinions in accordance with new information (Gerber and Green 1999).

In this article we present data that contest this explanation, and we develop a social psychological explanation for the belief in the link between Saddam and Al Qaeda. We argue that the primary causal agent for misperception is not the presence or absence of correct information but a respondent’s willingness to believe particular kinds of information. Our explanation draws on a psychological model of information processing that scholars have labeled motivated reasoning. This model envisions respondents as processing and responding to information defensively, accepting and seeking out confirming information, while ignoring, discrediting the source of, or arguing against the substance of contrary information (DiMaggio 1997; Kunda 1990; Lodge and Tabor 2000).
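For readers unfamiliar with the term, the “Bayesian updating” model that the authors contrast with motivated reasoning has a precise mathematical form: a rational believer revises the probability assigned to a belief according to how likely the new evidence would be if the belief were true versus false. A minimal sketch, with all of the probabilities invented purely for illustration:

```python
def bayesian_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return the posterior probability of a belief after seeing evidence.

    Bayes' theorem:
        P(belief | evidence) = P(evidence | belief) * P(belief) / P(evidence)
    """
    # Total probability of seeing this evidence under either hypothesis
    p_evidence = p_evidence_if_true * prior + p_evidence_if_false * (1 - prior)
    return p_evidence_if_true * prior / p_evidence

# Illustrative numbers only: a believer starts 90% sure of a Saddam/9/11 link.
belief = 0.9
# The 9/11 Commission's finding is far more likely if no link existed:
# say 5% likely under the belief, 95% likely otherwise.
belief = bayesian_update(belief, 0.05, 0.95)
print(round(belief, 3))  # → 0.321
```

A Bayesian updater's confidence collapses from 90% to about 32% after one strong piece of contrary evidence; the study's point is that almost nobody in the sample behaved this way.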

It was with some trepidation that I decided to delve into this study because (1) I’m not a sociologist and (2) I realize that this particular comparison may ruffle some feathers, particularly among readers with conservative political leanings, given that the other examples mentioned in the article come from the current debate about health care (“death panels,” anyone?) or from the “birther” movement, namely that group of people who believe that Barack Obama is not a natural born citizen of the United States and therefore is ineligible to be President, a belief that defies all evidence and reason. However, again, one can see how this sort of model could apply very well to belief in all manner of pseudoscience, the most prominent at the moment being the anti-vaccine movement. In fact, this model applies to all of us, and the choice of subjects was more for convenience than out of bias, as the authors take pains to point out that these sorts of flaws in reasoning are widespread across all political ideologies:

Our choice of subjects should not be taken to imply that the processes we are examining here are particular to conservatives: we expect that, had we conducted this study in the late 1990s, we would have found a high degree of motivated reasoning regarding the behavior of President Clinton during the Lewinsky scandal. Previous research on motivated reasoning has found it among respondents of all classes, ages, races, genders, and affiliations (see Lodge and Tabor 2000).

But what is this “motivated reasoning” about which the authors write and how did they look at it? What exactly did the authors do to test their hypothesis? The authors base their concept on that of cognitive dissonance. Delving back into my medical school psychology courses, I remember that cognitive dissonance derives from the observation that people do not like to be aware when they hold contradictory beliefs. Indeed, it is the name given to the feeling we have when we are made aware that we are holding two contradictory thoughts at the same time, and the strength of the dissonance depends upon the importance of the subject to an individual, how sharply the dissonant thoughts conflict, and how much the conflict can be rationalized away. Cognitive dissonance theory thus posits that, when faced with evidence or occurrences that challenge their beliefs, people will tend to minimize the dissonance any way they can without giving up those beliefs.

One classic example of cognitive dissonance often cited is that of smokers who try to rationalize their unhealthy habit. An even more classic example, described by Leon Festinger in the book When Prophecy Fails, is the existence of “end of the world” cults, which believe that the world will end on a certain date. Festinger infiltrated a cult that believed the world was going to be destroyed and its leader and followers would all be taken away by an extraterrestrial visitor before dawn on December 21, 1954. Dawn came and went with no visitor and no world-destroying cataclysm. Did the group disintegrate? That’s what you might think it would do, but instead it concluded that the cataclysm had been called off and that God had stayed his hand because the cult had “spread so much light.” Thus, the strategy for eliminating the cognitive dissonance between the cult’s beliefs and the undeniable fact that the world had not ended, nor had an alien visitor come at the predicted time, was not to conclude that the cult’s beliefs had been wrong, but rather to conclude that the cult had somehow stopped the catastrophe through its righteousness.

Now, on to the study. The authors chose a study population from precincts in counties that had voted most heavily for Bush in 2000, identifying voters through voter registration records. Surveys were mailed to 1,062 voters, of which 12 were returned to sender. Of the remaining 1,050, 267 responded, for an overall adjusted response rate of 25.4 percent. Of these surveys, 21 were unusable, so the analysis is based on 246 respondents. Subjects who agreed to be interviewed (84, of whom 49 met the study criteria of having voted for Bush and believing that Saddam Hussein was somehow involved in the 9/11 attacks) were then subjected to what the authors termed a “challenge interview” to determine whether they were exhibiting Bayesian updating (the willingness to change one’s mind in the face of contradictory information from a trusted source) or motivated reasoning (resisting contradictory information). The exact wording of the challenge was: “…let’s talk about Iraq. As you see in these quotes, the 9/11 Commission found that Saddam Hussein was not behind the September 11 attacks. President Bush himself said, ‘This administration never said that the 9/11 attacks were orchestrated between Saddam and Al Qaeda.’ What do you think about that? [show newspaper clips]”
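As a side note, the survey arithmetic quoted above checks out; a quick sketch, using the numbers taken directly from the study:

```python
mailed = 1062              # surveys mailed
returned_to_sender = 12    # undeliverable
responded = 267            # completed surveys returned
unusable = 21              # surveys excluded from this analysis

delivered = mailed - returned_to_sender   # 1,050 surveys presumed delivered
response_rate = responded / delivered     # the "adjusted" response rate
usable = responded - unusable             # respondents included in the analysis

print(f"{response_rate:.1%}", usable)  # → 25.4% 246
```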

Data were analyzed as follows:

First, we examined whether our respondents deflected the information, and we categorized the strategies that they used to do so. Second, to conduct a more stringent test of the motivated reasoning hypothesis, we examined whether respondents attended to the contradictory data at all. Lupia (2002) argues that Bayesian updating happens in three stages: to successfully change opinion, a piece of information must be attended to, remembered, and used in decision making. The first stage, attention, is a prerequisite for the second and third stages. By coding whether our respondents attended to the information we produced a minimum estimate for motivated reasoning, which can also happen at the second or third stages.

What did the authors find? Basically, only 2% exhibited Bayesian updating, changing their minds in response to the information provided that challenged their belief. Another 14% denied that they had ever believed in a link at all, even though they had responded in writing in the survey that they believed there had been a strong link between Saddam Hussein and 9/11. Most of the others ignored the evidence against their belief and launched straight into arguments for why the war against Iraq was justified, thereby reducing their cognitive dissonance by asserting, in essence, that the war was a good idea even if Saddam Hussein had not been involved in 9/11. While this study had a fair number of the weaknesses inherent in studies of this sort, overall it’s not bad. For example, it didn’t look at whether the survey and challenge interviews changed any minds later, after the respondents left. Nor is it clear, given that the interviews were carried out in 2004, whether this study accurately showed the origin of the erroneous beliefs about Saddam Hussein having helped al Qaeda. Rather, it probably showed more of the reasons for the persistence of such beliefs, regardless of how they formed. However, as I mentioned before, this is probably not a huge flaw, in that the study was designed more to look at how people support their false beliefs than at how they formed those beliefs in the first place.

From their interviews, the authors postulated a mechanism they refer to as inferred justification, which they describe thusly:

Finally, our interviews revealed an interesting and creative reasoning style that we call inferred justification: recursively inventing the causal links necessary to justify a favored politician’s action. Inferred justification operates as a backward chain of reasoning that justifies the favored opinion by assuming the causal evidence that would support it. As with the situational heuristics described above, respondents begin with the situation and then ask themselves what must be true about the world for the situation to hold.

None of this should come as any surprise to skeptics and supporters of science-based medicine. It’s nothing more than a fancy way of describing the flaws in reasoning that virtually all boosters of unscientific medicine use: post hoc ergo propter hoc reasoning and cherry picking the small amount of evidence that supports their belief and ignoring the vast majority of the evidence that does not. Forget politics. Forget “liberal” versus “conservative.” All this study is saying is that people who have deep emotional ties to a belief will try very hard not to have to give up that belief. I realize that the example used for the study may irritate some of our readers, but, even so, the results and conclusions shouldn’t come as any surprise and I hope that the example chosen will not lead some to reject the conclusions out of hand. In fact, we see the same sorts of “inferred justification” all the time in those who should know better regarding all sorts of topics. If Tom Harkin, for example, were to take a similar survey followed by a challenge interview chock full of posts from SBM that showed the “alternative medicine” that he believed in doesn’t work, I have no doubt that it would have no effect on his advocacy for NCCAM (in fact, it didn’t; when confronted with a string of negative studies from NCCAM, Harkin simply complained that NCCAM’s studies are all negative) or his trying to slip provisions into the health care reform bill that would require the government to pay for “alternative” medicine.

Be that as it may, let me give you an excellent example in a realm we deal with regularly: the anti-vaccine movement. Let’s get a bit more specific than that. In the U.S., from the late 1990s through the middle part of this decade, the anti-vaccine movement latched on to the mercury-containing preservative (thimerosal) used in many childhood vaccines as a cause of autism. They based this belief on the correlation between a rise in the number of autism diagnoses beginning in the early to mid-1990s and the expansion of the vaccine schedule to include more vaccines, even going so far as to call the rise in autism diagnoses an “autism epidemic.” In 1999, despite little evidence that thimerosal-containing vaccines (TCVs) were in any way associated with autism or other harm, authorities ordered the phase-out of TCVs in favor of thimerosal-free alternatives. By the end of 2001, the only childhood vaccines still containing thimerosal were flu vaccines, and few children received them. The total dose of mercury children received from vaccines plummeted to levels not seen since the 1980s. And what happened to autism diagnoses?

They continued to rise, which is why, very early in my tenure here at SBM, I wrote a post entitled Mercury in vaccines as a cause of autism and autism spectrum disorders (ASDs): A failed hypothesis. It’s unequivocal that autism rates have continued to rise since 2001, and alternative explanations for the rise are quite compelling, specifically broadening of the diagnostic criteria, diagnostic substitution, and increased awareness, along with evidence that, correcting for these changes, the “true” prevalence of autism has probably not changed much over decades; i.e., there is no “epidemic” of autism. In response, some who cling to the mercury hypothesis claim that even a trace of mercury would still cause this autism “epidemic,” failing to explain why we didn’t see such an epidemic decades ago, when thimerosal was first used in a relatively few vaccines. Other members of the anti-vaccine movement have moved on to more difficult-to-falsify hypotheses, such as other “toxins” in vaccines (the latest of which is squalene) or the concept that when it comes to vaccines we are giving “too many too soon.” The reason, of course, is that it is the belief that vaccines cause autism and all sorts of other harm that is driving the anti-vaccine movement. When more and more evidence fails to support this belief, rather than giving up the belief, the anti-vaccine movement either ignores the data and moves on to another hypothesis, expecting science to play Whac-A-Mole with each new outlandish claim; dismisses or minimizes the data as being due to a conspiracy by big pharma to hide The Truth About Vaccines; or finds a way to twist the science to be either neutral or even supportive of its ideas. In the process, the anti-vaccine movement does exactly what this study describes: It infers justification by recursively inventing links between vaccines and autism.

Another good example is the reaction to revelations about Andrew Wakefield. Andrew Wakefield, as you may recall, is the British gastroenterologist who in 1998 published a study in The Lancet that claimed to find a link between the MMR vaccine and “autistic enterocolitis.” This study, aided and abetted by truly irresponsible journalism, launched a panic in the U.K. that is only now starting to abate. In the interim, measles, once thought conquered, has become endemic again in the British Isles. In any case, it matters not to the anti-vaccine movement that (1) Wakefield’s Lancet study was poorly designed and utterly refuted by later studies; (2) investigative journalist Brian Deer discovered and published that Wakefield received £435,643 in fees, plus £3,910 in expenses, from lawyers trying to show that the MMR was unsafe; (3) the PCR laboratory that Wakefield used for his work was so poorly run that it apparently had no knowledge of the concept of a negative control; and (4) in 2009 Brian Deer unearthed and published evidence strongly suggesting that Wakefield had falsified data in his 1998 Lancet paper. Instead of abandoning the hypothesis that the MMR vaccine somehow causes autism, adherents cling all the more tightly to it, claim that all the contradicting data are a plot by the government and pharmaceutical companies to discredit Wakefield and suppress The Truth About Vaccines, and even go so far as to circulate “We support Dr. Andrew Wakefield” petitions around the Internet. Meanwhile, concerned that a TV special report by NBC’s Dateline would show Wakefield in an unfavorable light, the anti-vaccine propagandists at Age of Autism preemptively launched a strike in the form of more of the fawning posts about Wakefield and attacks on Brian Deer that they’ve published over the last couple of years. Such is the cognitive dissonance that must occur as each new revelation about Wakefield’s incompetence, conflicts of interest, and scientific fraud leaks out.
In fact, this is the usual M.O. when it comes to science looking at the claims of any pseudoscientist, be it Hulda Clark, Mark and David Geier, Tullio Simoncini, or whoever the woo-meister du jour is.

Finally, I think it’s worth looking at what the authors concluded in this study:

The main theoretical implication of our research is that “knowledge” as measured on surveys is partly a by-product of the attempt to resolve the social psychological problem of cognitive dissonance. The practical implication of this is that, although scholars have shown a correlation between the perception of links between Iraq and Al Qaeda and support for the war in Iraq, we cannot conclude from this correlation that misinformation led to support for the war. Rather, for at least some respondents, the sequence was the other way around: support for the war led to a search for a justification for it, which led to the misperception of ties between Iraq and 9/11. This suggests a mechanism through which motivated reasoning may be strongest when the stakes are highest. It is precisely because the stakes of going to war are so high that some of our respondents were willing to believe that “there must be a reason.”

In other words, the stronger the emotion behind the belief, the more likely a person is to fall into the trap of using cognitive errors to justify that belief. The key phrase is in the title of the article and in the conclusion, and that phrase is “there must be a reason.” Think about it and how often we hear that sort of a statement in the context of topics relevant to SBM. For example, “there must be a reason” that:

  • my child has autism (it’s the vaccines).
  • there are so many children with autism (it’s the vaccines).
  • there is not yet a cure for cancer (big pharma’s holding out on us to protect its profits).
  • my back pain got better (it must be the acupuncture, not placebo).
  • I rejected chemotherapy for my breast cancer and I’m still alive (chemotherapy is useless and “natural healing” is better).

The list goes on, and all of these are very emotionally charged topics. After all, in the case of the vaccine-autism belief, what could be more emotional than the bond of a parent to her child, for example? This study doesn’t really break any major new ground, but it does remind me of what I’ve known, namely that, as Richard Feynman once famously said, “The first principle is that you must not fool yourself – and you are the easiest person to fool.” Besides all the ways we can fool ourselves listed by Harriet Hall, trying to minimize cognitive dissonance and using inferred justification are but two more.

It also makes me wonder about two things. First, is there a way of taking advantage of these psychological mechanisms to persuade people who hold pseudoscientific views to accept science? In other words, can cognitive dissonance be reduced in a way that doesn’t require a person to reject science, cling ever more tenaciously to pseudoscience, and invent conspiracy theories? Second, what beliefs do I hold that are more akin to inferred justification or strategies to reduce cognitive dissonance than beliefs based on strong science and firm evidence? I’m just as human as any of the participants in this study. Indeed, any skeptic who thinks he or she is not just as prone to such errors in thinking is not a skeptic but is suffering from self-delusion. The only difference between skeptics and non-skeptics, scientists and nonscientists, in this regard is that skeptics try to make themselves aware of how human thinking can go wrong and then act preemptively to try to keep those normal human cognitive quirks from leading them astray. Indeed, guarding against these normal human failings when drawing conclusions about the natural world is the very reason we need science and why we need to base our medicine on science.

REFERENCE:

1. Prasad, M., Perrin, A., Bezila, K., Hoffman, S., Kindleberger, K., Manturuk, K., & Powers, A. (2009). “There Must Be a Reason”: Osama, Saddam, and Inferred Justification. Sociological Inquiry, 79(2), 142-162. DOI: 10.1111/j.1475-682X.2009.00280.x

Posted in: Clinical Trials, Politics and Regulation, Science and Medicine, Vaccines


10 thoughts on "“There must be a reason,” or how we support our own false beliefs"

  1. kausikdatta says:

    You have made a very pertinent post. As I mentioned to you a few weeks back, my parents in India, as well as various other members of my family, are hooked on homeopathy as the panacea for all the medical troubles of their old age (dotage is more like it). They persist in making the exact same kind of cognitive errors (of the ‘there must be a reason’ kind) that you mentioned.

    To find an answer to the first question you raised in the last paragraph, essentially asking whether there can be any ‘redemption’ – so to speak – for those people that place a high premium on pseudoscience, I thought long and hard. My answer – I find – is a mixture of ‘yes’ and ‘no’.

    No, because – at least from my personal experience – most of these people are so far entrenched in their beliefs that no amount of reasoning or evidence would persuade them to think to the contrary. Oftentimes, these beliefs become central to their existence as individuals – which makes it nearly impossible to get rid of them.

    The Yes part OTOH comes from being optimistic. The only proviso it requires is the institution of science education, and education about methodology of science, so that rational thinking and the ability to look at, and understand, empirical evidence are ingrained in the next generation. Only when they are armed with these tools, will they be able to make informed choices. This is, of course, a privilege that was denied to many of my parents’ generation.

  2. Sam Homola says:

    Great post for a science-based blog! It helps us understand the mindset of those who do not yield to science and reason in defending such delusional beliefs as homeopathy and creationism. It also gives us pause to step back and take a look at our own beliefs and the reasons for our actions.

  3. Harriet Hall says:

    Coincidentally, I have just finished reading “The Science of Fear: Why We Fear the Things We Shouldn’t – and Put Ourselves in Greater Danger” by Daniel Gardner. It explains the conflict between “head” and “gut” and how “gut” almost always wins. It covers all kinds of fascinating psychological research similar to the study David describes, studies that elucidate our biases and errors, and it explains how the media and politicians exploit them. When the smallpox scare erupted after 9/11, a study found that most Americans did not know that smallpox had been eradicated: 63% thought there had been infections somewhere in the world in the previous five years, and 30% believed there had been cases in the US.

  4. Dave Voelker says:

    Good topic for a science-based medicine blog, David. Or indeed for any site concerned with the vagaries of human reasoning, to which, as you correctly point out, all of us who have human brains are prone.

    I have some familiarity with this topic (Ph.D., communication). Cognitive dissonance per se is no longer used as a theoretical framework. Its general premise is still considered valid; the field has just evolved past its original, simple formulation into the study of cognitive biases and errors, which can take a variety of forms. There is a considerable body of research illustrating our (humans’) bias toward receiving information that supports existing beliefs or can be interpreted in a manner consistent with them, and screening out or discounting information that doesn’t. (The Sociological Inquiry study, at least as described in the Science Daily recap, seems unfamiliar with that corpus.)

    Unfortunately, there is no general prescription for talking others out of their fallacious beliefs (that’s actually another research area: persuasion). One general remedy is more emphasis in our educational curricula on teaching critical thinking skills. The way I like to put this is that humans are naturally predisposed to Type I error – forming beliefs about relationships in the external world that are not in fact true. In our evolutionary past, that bias had adaptive value – false positives were an OK price to pay for making sure we didn’t miss critical causal relationships (i.e., rustling grass signaling an approaching predator) which, had we failed to grasp them, could have been fatal.

    This human bias is the reason we lean in the opposite direction when we conduct research, drawing conclusions so conservatively that we deliberately risk failing to confirm relationships that do in fact exist (Type II error). The bias of our methodology counterbalances our natural bias. My point is that it would behoove society if critical thinking were part of a general education, and not something thought only necessary for professional researchers to learn.

    I’m personally fascinated in this topic and have just launched a blog called Two Realities (http://tworealities.org) in which I explore the idea that our beliefs, whether true or not, actually constitute a second reality for us every bit as important, if not more so, than the objective world.

  5. David Gorski says:

    Thanks for the info. As you may have figured out, I’m not a psychologist or sociologist; so my understanding of cognitive psychology is rather basic.

  6. tmac57 says:

    I find it useful when debating a point to keep calm, even when attacked, and point out the fallacy (ad hominem, post hoc, straw man, etc.). In this way you not only model proper reasoning, but also inform your opponent about what is faulty with their argument. This may eventually work its way into their consciousness and cause them to use a different approach. A little humor helps as well, especially to disarm a strident opponent.

  7. Zetetic says:

    I heard a disturbing bit of news on NPR this morning… Tom Harkin is in line to assume leadership of the Healthcare Reform Committee! Senator Chris Dodd is taking over the committee immediately but will hand it off to Senator Harkin at a future date.

Comments are closed.