Articles

Certainty versus knowledge in medicine

I don’t want knowledge. I want certainty!

— David Bowie, from Law (Earthlings on Fire)

If there’s one trait among humans that seems universal, it is an unquenchable thirst for certainty. That thirst is likely a major force driving people into the arms of religion, even radical religions with clearly irrational views, such as the idea that flying planes into large buildings and killing thousands of people is a one-way ticket to heaven. However, this craving for certainty isn’t expressed only through religiosity. As anyone who accepts science as the basis of medical therapy knows, much the same psychology is at work in medicine as well. This should come as no surprise to those committed to science-based medicine, because there is a profound conflict between our human desire for certainty and the uncertainty that is always inherent in so much of our medical knowledge. The reason is that the conclusions of science are always provisional, and those of science-based medicine arguably even more so than those of many other branches of science.

In fact, one of the hardest things for many people to accept about science-based medicine is that its conclusions are always subject to change based on new evidence, sometimes so much so that even those of us “in the biz” can become a bit disconcerted at the rate at which knowledge we had thought to be secure changes. For example, think of how duodenal peptic ulcer disease (PUD) was treated 25 years ago and then think about how it is treated now. Between 1984 and 1994, a revolution occurred on the basis of the discovery of H. pylori as the cause of most of the gastric and duodenal ulcer disease we see. Where in 1985 we treated PUD with H2-blockers and other drugs designed to block gastric acid secretion, now antibiotics are the mainstay of treatment and are curative at a much higher rate than any treatment other than surgery, and without the complications of surgery. I’m sure any other physician here could come up with multiple other examples. In my own field of breast cancer surgery, I look back at how we treated breast cancer 22 years ago, when I first started residency, and how we treat it now, and I marvel at the changes. If such changes can be disconcerting even to physicians dedicated to science-based medicine, imagine how much more disconcerting they are to lay people, particularly when they hear news reports of one study that produces one result, followed just months later by a report of a different study that gives a completely different result.

We see this phenomenon of craving certainty writ large and in bold letters in huge swaths of so-called “alternative” medicine. Indeed, a lot of quackery, if not most of it, involves substituting the certainty of belief for the provisional nature of science in science-based medicine, as well as the uncertainty in our ability to predict treatment outcomes, particularly in serious diseases with variable biology, like several types of cancer.

Examples abound. Perhaps my two favorite examples are Hulda Clark, who attributed all cancer and serious disease to a common liver fluke, and Robert O. Young, who believes that virtually all disease is due to “excess acid.” So prevalent is this tendency that Harriet Hall once skewered it in a delightful post entitled The One True Cause of All Disease, in which she listed a rather large sampling of things that various quacks have implicated as “the one true cause” of various diseases — or all diseases.

Time and time again, if you look carefully at “alt-med” concepts and the therapies that derive from them, you find utter simplicity (or, more appropriately in many cases, simple-mindedness) tarted up with complicated-sounding jargon. Homeopathy, for instance, is at its heart nothing more than sympathetic magic: the concept of “like cures like” combined with the principle of contagion, the notion that water somehow has a “memory” of the therapeutic substances with which it’s come in contact but, as Tim Minchin so hilariously put it, “it somehow forgets all the poo it’s had in it.” Reiki and other “energy healing” modalities can be summed up as “wishing makes it so,” with “intent” having the power to manipulate some fantastical life energy to heal people. It’s faith healing, pure and simple.

The simplicity of these concepts at their core makes them stubbornly resistant to evidence. Indeed, when scientific evidence meets a strong belief, the evidence usually loses. In some cases, it does more than just lose; the scientific evidence only hardens the position of believers. We see this very commonly in the anti-vaccine movement, where the more evidence is presented against a vaccine-autism link, seemingly the more deeply anti-vaccine activists dig their heels in to resist, cherry picking and twisting evidence, launching ad hominem attacks on their foes, and moving the goalposts faster than science can kick the evidence ball through the uprights. The same is true for any number of pseudoscientific beliefs. We see it all the time in quackery, where even failure of the tumor to shrink in response can lead patients to conclude that the tumor, although still there, still can’t hurt them. 9/11 Truthers, creationists, Holocaust deniers, moon hoaxers — they all engage in the same sort of desperate resistance to science.

Even those who in general accept science-based medicine can be prone to the same tendency to dismiss evidence that conflicts with their beliefs. A while back, I saw an article by Christie Aschwanden discussing just this problem. The article was entitled Convincing the Public to Accept New Medical Guidelines, and I feel it could almost have been written by Mark Crislip or myself, only minus Mark’s inimitable self-deprecating yet cutting sarcasm, or my own alleged talent for nastiness and ad hominem. (I guess I need to become more cuddly.) To set up its point that persuading people to accept the results of new medical science is exceedingly difficult, the article starts with the example of long distance runners who believe that taking ibuprofen (or “vitamin I”) before a long run reduces their pain and inflammation resulting from the run:

They call it “vitamin I.” Among runners of ultra-long-distance races, ibuprofen use is so common that when scientist David Nieman tried to study the drug’s use at the Western States Endurance Run in California’s Sierra Nevada mountains he could hardly find participants willing to run the grueling 100-mile race without it.

Nieman, director of the Human Performance Lab at Appalachian State University, eventually did recruit the subjects he needed for the study, comparing pain and inflammation in runners who took ibuprofen during the race with those who didn’t, and the results were unequivocal. Ibuprofen failed to reduce muscle pain or soreness, and blood tests revealed that ibuprofen takers actually experienced greater levels of inflammation than those who eschewed the drug. “There is absolutely no reason for runners to be using ibuprofen,” Nieman says.

The following year, Nieman returned to the Western States race and presented his findings to runners. Afterward, he asked whether his study results would change their habits. The answer was a resounding no. “They really, really think it’s helping,” Nieman says. “Even in the face of data showing that it doesn’t help, they still use it.”

As Aschwanden points out, this is no anomaly. She uses as another example a topic that’s become a favorite of mine over the last six months or so, since the USPSTF released revised guidelines for mammographic screening. Take a look at what she says about the reaction:

This recommendation, along with the call for mammograms in women age 50 and older to be done every two years, rather than annually, seemed like a radical change to many observers. Oncologist Marisa C. Weiss, founder of Breastcancer.org, called the guidelines “a huge step backwards.” If the new guidelines are adopted, “Countless American women may die needlessly from breast cancer,” the American College of Radiology said.

“We got letters saying we have blood on our hands,” says Barbara Brenner, a breast cancer survivor and executive director of the San Francisco advocacy group Breast Cancer Action, which joined several other advocacy groups in backing the new recommendations. Brenner says the new guidelines strike a reasonable balance between mammography’s risks and benefits.

I discussed the guidelines in considerable detail twice. Let’s put it this way: I’m in the business, so to speak, and even I was shocked at the vehement reactions, not just from patients and patient advocacy groups, whose reaction I could completely understand (after many years of hearing that beginning mammography at age 40 was critical to save lives), but even from some of my very own colleagues. I was particularly disgusted by the reaction of the American College of Radiology, which was nothing more than blatant fear mongering that intentionally frightened women into thinking that the new guidelines would lead to their deaths from breast cancer. As much as we’d like to pretend otherwise, even science-based medical practitioners can fall prey to craving the certainty of known and accepted guidelines over the uncertainty of the new. And if it’s so hard to get physicians to accept new guidelines and new science, imagine how hard it is to get patients to accept them.

There is abundant evidence of how humans defend their views against evidence that would contradict them, and it’s not just the observational evidence that you or I see every day. Scientists often fall prey to what University of California, Berkeley, social psychologist Robert J. MacCoun calls the “truth wins” assumption. This assumption, stated simply, is that when the truth is correctly stated it will be universally recognized. Those of us who make it one of our major activities to combat pseudoscience know, of course, that the truth doesn’t always win. Quite the contrary, actually; I’m not even sure the “truth” wins a majority of the time — or even close to a majority of the time. Moreover, most recommendations of science-based medicine are not “truth” per se; they are simply the best recommendations physicians can currently make based on current scientific evidence. Be that as it may, the problem with the “truth wins” viewpoint is that the “truth” often runs into a buzz saw: a phenomenon philosophers call naive realism. This phenomenon, boiled down to its essence, is the belief that whatever one believes, one believes it simply because it’s true. In the service of naive realism, we all construct mental models that help us make sense of the world. When the “truth wins” assumption meets naive realism, guess what usually wins? It ain’t the truth.

At the risk of misusing the word, I’ll just point out something about our “truth”: we all filter everything we learn through the structure of our own beliefs and the mental models we construct to support those beliefs. I like to think of science as a powerful means of penetrating the structure of those mental models, but that’s probably not a good analogy. That’s because, for science to work at changing our preconceptions, we have to have the validity of science already strongly incorporated into the structure of our own mental models. If it’s not, then science is more likely to bounce harmlessly off the force field our beliefs create to repel it. (Sorry, I couldn’t help it; I’m a hopeless geek.) As a result, all other things being equal, when people see studies that confirm their beliefs they tend to view them as unbiased and well designed, while if a study’s conclusions contradict a person’s beliefs, that person is likely to see the study as biased and/or poorly done. As MacCoun puts it, “If a researcher produces a finding that confirms what I already believe, then of course it’s correct. Conversely, when we encounter a finding we don’t like, we have a need to explain it away.”

There’s also another strategy that people use to dismiss science that doesn’t conform to their beliefs. I hadn’t thought of this one before, but it seems obvious in retrospect after I encountered a recent study that suggested it. That mechanism is to start losing faith in science itself as a means of making sense of nature and the world. The study was by Geoffrey D. Munro of Towson University in Maryland and appeared in the Journal of Applied Social Psychology under the title The Scientific Impotence Excuse: Discounting Belief-Threatening Scientific Abstracts.

There were two main hypotheses and two studies included within this overall paper. Basically, the hypothesis was that encountering evidence that conflicts with one’s belief system tends to move the subject toward a belief that science cannot study the question under consideration, a move known as the “scientific impotence” excuse, or scientific impotence discounting. In essence, science is dismissed as “impotent” to study the issue where belief conflicts with evidence, thus allowing a person to dismiss the science that would tend to refute a strongly held belief. The problem, of course, is that the major side effect of scientific impotence discounting is that it leads a person to distrust all science in general, or at least far more science than the science opposing that person’s belief.

Munro makes the implication of scientific impotence discounting plain:

The scientific impotence method of discounting scientific research that disconfirms a belief is certainly worrisome to scientists who tout the importance of objectivity. Even more worrisome, however, is the possibility that scientific impotence discounting might generalize beyond a specific topic to which a person has strong beliefs. In other words, once a person engages in the scientific impotence discounting process, does this erode the belief that scientific methods can answer any question? From the standpoint of the theory of cognitive dissonance (Festinger, 1957), the answer to this question could very well be “Yes.”

Not surprisingly, the scientific impotence discounting strategy of denying science permits one to dodge the charge of hypocrisy:

Using the scientific impotence excuse for one and only one topic as a result of exposure to belief-disconfirming information about that topic might put the individual at risk for having to acknowledge that the system of beliefs is somewhat biased and possibly hypocritical. Thus, to avoid this negative self-view, the person might arrive at the more consistent — and seemingly less biased — argument that science is impotent to address a variety of topics, one of which happens to be the topic in question.

To test these hypotheses, Munro had a group of students recruited for his study read various abstracts (created by the investigators) that either confirmed or challenged their beliefs about whether homosexuality predisposes to mental illness. It turned out that those who read belief-challenging abstracts were more prone to use scientific impotence discounting as an excuse to reject the science, while those who read belief-confirming abstracts were less likely to subscribe to the scientific impotence excuse. Controls that substituted other terms for “homosexual” demonstrated that it was the belief-disconfirming nature of the abstracts that was associated with use of scientific impotence discounting as a reason to reject the conclusions of the abstract. A second study followed up with more subjects. The methodology was the same as in the first study, except that additional measures were taken to see whether exposure to belief-disconfirming abstracts was associated with generalization of the belief in scientific impotence.

In essence, Munro found that, relative to those reading belief-confirming abstracts, participants reading belief-disconfirming abstracts indicated more belief that the topic they were reading about could not be studied scientifically and more belief that a series of other unrelated topics also could not be studied scientifically. In other words, scientific impotence discounting appears to generalize, from discounting just the science that challenges a person’s beliefs to discounting other areas of science, if not all of science. Munro concluded that being presented with belief-disconfirming scientific evidence may lead to an erosion of belief in the efficacy of scientific methods, also noting:

A number of scientific issues (e.g., global warming, evolution, stem-cell research) have extended beyond the scientific laboratories and academic journals and into the cultural consciousness. Because of their divisive and politicized nature, scientific conclusions that might inform these issues are often met with resistance by partisans on one side or the other. That is, when one has strong beliefs about such topics, scientific conclusions that are inconsistent with the beliefs may have no impact in altering those beliefs. In fact, scientific conclusions that are inconsistent with strong beliefs may even reduce one’s confidence in the scientific process more generally. Thus, in addition to the ongoing focus on creating and improving techniques that would improve understanding of the scientific process among schoolchildren, college students, and the general population, some attention should also be given to understanding how misconceptions about science are the result of belief-resistance processes and developing techniques that might short-circuit these processes.

On a strictly anecdotal level, I’ve seen this time and time again in the alt-med movement. A particularly good example is homeopathy. How many times have we seen homeopaths, when confronted with scientific evidence finding that their magic water is no more effective at anything than a placebo, claiming that their magic cannot be evaluated by randomized, double-blind clinical trials (RCTs)? The excuses are legion: RCTs are too regimented; they don’t take into account the “individualization” of homeopathic treatment; unblinded “pragmatic” trials are better; or the homeopaths’ anecdotal evidence trumps RCT evidence. Believers in alt-med then often generalize this scientific impotence discounting to many other areas of woo, claiming, for example, that science can’t adequately measure that magical mystical life energy field known as qi or even, most incredibly, that subjecting their woo to science will guarantee it to fail because belief is required and skepticism results in “negative energy.” Another common strategy I’ve seen for scientific impotence discounting is to dismiss science as “just another religion,” just as valid as whatever woo science is refuting, or to label science as “just another belief system,” as valid as any other. In other words, postmodernism!

Sadly, though, even physicians ostensibly dedicated to science-based medicine all too easily fall prey to this fallacy, although they usually don’t dismiss science as inadequate or unable to study the question at hand. Rather, they wield their preexisting belief systems and mental frameworks like a talisman to protect them from having to let disconfirming data force them to change their beliefs. Alternatively, they dismiss science itself as “just another belief.” Perhaps the most egregious example I’ve seen of this in a long time occurred, not surprisingly, over the mammogram debate from six months ago, when Dr. John Lewin, a breast imaging specialist from Diversified Radiology of Colorado and medical director of the Rose Breast Center in Denver, so infamously said, “Just the way there are Democrats and Republicans, there are people who are against mammography. They aren’t evil people. They really believe that mammography is not important.”

Wow! A straw man argument (that those who support the USPSTF guidelines are “against mammography”) combined with likening science to just another political viewpoint, and a condescending disclaimer that those who disagree with him “aren’t evil”! Mike Adams couldn’t have said it better. I wonder if Dr. Lewin thinks that Dr. Susan Love is “against mammography,” given that in the very same article it was pointed out that Dr. Love supports the USPSTF guidelines.

I get it. I really do. I get how hard it is to change one’s views. I even understand the tendency to dismiss disconfirming evidence. What I like to think distinguishes me from pseudoscientists is that I do change my mind on scientific issues as the evidence merits. Perhaps the best example of this is the aforementioned USPSTF mammography screening kerfuffle. For the longest time, I agreed enthusiastically with the prevailing medical opinion that screening for breast cancer with mammography beginning at age 40 was an almost universal good. Then, over the last two or three years, I’ve become increasingly aware of the problems of lead-time bias, length bias, the Will Rogers effect, and overdiagnosis. This has led me to adjust my views about screening mammography. I haven’t adjusted them all the way to the USPSTF recommendations, but I am much more open to the changes in the guidelines published late last year, even to the point that the resistance of my colleagues left me feeling like an anomaly.

Skepticism and science are hard in that they tend to go against some of the most deeply ingrained human traits there are, in particular the need for certainty and an intolerance of ambiguity. Also in play is our tendency to cling to our beliefs, no matter what, as though having to change our beliefs somehow devalues or dishonors us. Skepticism, critical thinking, and science can help us overcome these tendencies, but it’s difficult. Perhaps that’s the most important contribution of the scientific method. It creates a structure that allows us to change our beliefs about the world based on evidence and experimentation without the absolute necessity of taking being proven wrong personally.

When the scientific method is really embedded in the culture of scientists, it leads to the sort of behavior described by Richard Dawkins, in which a scientist gave a talk presenting very solid evidence supporting his conclusions. That evidence, as it happened, completely disconfirmed a long-held hypothesis championed by a very senior and respected member of the department. When the talk was over, everyone waited to see what this senior scientist would say. Instead of challenging the speaker, the senior scientist got up and thanked him for showing that he had been wrong all those years, because a phenomenon he was interested in was now better understood. True, the story may be apocryphal or exaggerated, but that is the ideal of science. It is an ideal that is very hard, even for scientists, to live up to, and it is harder still for non-scientists even to understand.

In the end, though, we need to strive to live up to the immortal words of Tim Minchin when describing how he’d change his mind about even homeopathy if presented with adequate evidence. Actually, as much as I love Tim Minchin’s humorous take on dealing with a woman espousing a panoply of woo that would rival Whale.to’s collection, I’m forced to admit that Minchin is a bit too flippant about the difficulty of changing one’s mind. I know, I know, he’s a comic musician (or a musical comedian); flippancy is part of his job. Even so, show me, for example, strong evidence that vaccines are associated with autistic regression, and I might not “spin on a dime” and change my beliefs, as Minchin put it, but eventually, if the evidence is of a quality and quantity sufficient to cast serious doubt on the existing scientific evidence that does not support a vaccine-autism link, I will adjust my views to fit the evidence and the science. I’m also under no illusion about how difficult that would be to do, or that I might even be prone to using some of the defense mechanisms described by psychologists to avoid doing it, at least at first. But I’ve changed my mind before, on more than one occasion, when science disconfirmed medical dogma that I had long believed. I can (and no doubt will) do it again.

In the end, I want knowledge, both to provide the best care possible for my patients and just for knowledge’s sake. Science is the best way to get that knowledge about the natural world, and no other methodology has improved the treatment of human disease as fast as science-based medicine. The price is one that I’m willing to pay: Uncertainty and the expectation that much of what I “know” now will one day have to change. What they told me my first day in medical school really is true. Twenty years after I graduated, at least half of what I was taught in medical school has changed or is no longer applicable to patient care, and that’s a good thing.

Certainty is nice, but I’ve learned to live without it.

REFERENCE:

Munro, G. (2010). The scientific impotence excuse: Discounting belief-threatening scientific abstracts. Journal of Applied Social Psychology, 40(3), 579-600. DOI: 10.1111/j.1559-1816.2010.00588.x

13 thoughts on “Certainty versus knowledge in medicine”

  1. Harriet Hall says:

    I’d rather be uncertain than be certain and wrong.

    I reviewed a pertinent book at http://www.sciencebasedmedicine.org/?p=103
    On Being Certain: Believing You Are Right Even When You’re Not, by Robert Burton.

    I am absolutely certain that its thesis is correct. :-)

  2. weing says:

    Excellent post. It reminds me of an interchange regarding this epistemological problem between the great philosophers, Curly and Moe.

    Moe: “I’m positive.”
    Curly: “Only fools are positive.”
    Moe: “Are you sure?”
    Curly: “I’m positive.”

  3. BillyJoe says:

    “We see it all the time in quackery, where even failure of the tumor to shrink in response can lead patients to conclude that the tumor, although still there, still can’t hurt them.”

    I know a man who is convinced of the power of visualisation. He has a brain tumour and visualises his immune system destroying the tumour. And he is convinced it is now gone. At one point he asked his GP to do a scan as corroborative evidence. And apparently it was completely gone – only the shadow of his tumour still remained!

  4. tyro says:

    re homosexuality & mental illness – when people are presented with a study which shows results which are contrary to their belief, is it really so irrational to question, doubt or disregard it? Many studies really are overturned, some really are shoddy and ideologically driven, so if results do differ significantly from what we’ve learned to date, perhaps the right reaction is to withhold acceptance.

    Take the mice acupuncture study that was reviewed recently. When I saw the headlines and some write-ups, I thought “balderdash” and discarded the results even though I hadn’t read the study. I did look for others’ evaluations and for further follow-up, and I think my initial views of acupuncture were informed by several good lines of evidence, so I felt fairly secure that one result wasn’t going to overturn it overnight.

    Was I wrong?

    I understand that there are differences between initially doubting a study which violently conflicts with earlier scientific results and rejecting a body of evidence which conflicts with magical beliefs but initially doubting results which are at odds with our beliefs isn’t necessarily a bad thing.

  5. Thought-provoking article. Thanks! Regarding the homosexuality sociology experiment, my reaction was similar to Tyro’s. To an extent, questioning something that is counter-intuitive can be good. On the other hand, I have been fooled by misinformation that I didn’t think to check because “Yeah, that sounds right.”

    Also, this gives me the opportunity to share one of my favorite quotes. “A foolish consistency is the hobgoblin of little minds.” Ralph Waldo Emerson.

    A good reminder for someone as stubborn as me.

  6. Sastra says:

    Ironically, one of the major charges against so-called “scientism” is that it’s supposed to be arrogant, and its advocates smug. Scientists are supposed to be too sure of themselves, thinking that their methods are somehow better than the more humble methods of personal experience, ancient wisdom, mystical revelation, etc.

    Those other ways of knowing both grant certainty — and somehow manage to absolve the Absolutely Certain of the charge of arrogance.

  7. tyro asked, “[W]hen people are presented with a study which shows results which are contrary to their belief, is it really so irrational to question, doubt or disregard it?”

    To question it, yes. Absolutely. Especially when you read something in the newspaper. Go back, read the original study, see what they were studying and what it showed. But the issue was not that people were questioning how the conclusions were reached; they decided that the scientific method, generally, was incapable of answering questions, generally. To discount the entire scientific method because you don’t like one conclusion of one study is not at all rational. What would be more logical – but less in harmony with most humans’ nature – would be to say, “Isn’t science cool! It can discover things I’d never have thought of.”

    “It turned out that those who read belief-challenging abstracts were more prone to use scientific impotence discounting as an excuse to reject the science, while those who read belief-confirming abstracts were less likely to subscribe to the scientific impotence excuse. … In [a second study], Munro found that, relative to those reading belief-confirming abstracts, participants reading belief-disconfirming abstracts indicated more belief that the topic they were reading about could not be studied scientifically and more belief that a series of other unrelated topics also could not be studied scientifically.”

    No, not rational.

    Yesterday my dentist told me that 85% of tooth grinders (of which I am one) are side-sleepers. If I were a back-sleeper, it would make no sense to conclude, “Science can’t study tooth-grinding and sleeping positions.” It would make sense to ask the question I asked, which is, “What percentage of non-tooth-grinders are side-sleepers?” It would also have made sense to conclude that I was in the 15%. But no, not to conclude that “Science doesn’t work for dentistry.”
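
    To make that concrete, here is a toy calculation in Python; only the 85% figure is the dentist’s, while the prevalence of grinding and the side-sleeping rate among non-grinders are made up purely for illustration. It shows why the 85% number by itself says very little until you know the comparison rate:

        # Toy illustration: only the 85% figure comes from the dentist;
        # the other two numbers are hypothetical, chosen just to make the point.
        p_side_given_grinder = 0.85      # dentist's claim: 85% of tooth grinders sleep on their side
        p_side_given_nongrinder = 0.70   # hypothetical: side-sleeping is common among everyone else too
        p_grinder = 0.10                 # hypothetical prevalence of tooth grinding

        # Law of total probability: how common is side-sleeping overall?
        p_side = p_side_given_grinder * p_grinder + p_side_given_nongrinder * (1 - p_grinder)

        # Bayes' theorem: how likely is a side-sleeper to be a grinder?
        p_grinder_given_side = p_side_given_grinder * p_grinder / p_side

        print(round(p_grinder_given_side, 2))  # ~0.12, barely above the 10% base rate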

    We studied Ionesco’s Rhinoceros in English class in my Christian high school. It contained the faux-syllogism, “Cats have four legs. Fido has four legs, therefore Fido is a cat.” One of my classmates concluded that this proved that logic doesn’t work and therefore that we might as well have faith in the Bible rather than trying to rely on unreliable thought. This was not a rational conclusion, though it was consistent with her world-view.

  8. Looks like I can’t get full access to the “Scientific Impotence” article without paying, so I’m just going to wing it.

    From the abstract: “participants read a series of brief abstracts that either confirmed or disconfirmed their existing beliefs about a stereotype associated with homosexuality. Relative to those reading belief-confirming evidence, participants reading belief-disconfirming evidence indicated more belief that the topic could not be studied scientifically and more belief that a series of other unrelated topics could not be studied scientifically.”

    Without knowing the actual abstracts that confirmed or disconfirmed their beliefs AND without knowing the other unrelated topics, it’s very hard to know if the participants concluded erroneously that the topic could not be studied scientifically.

    I think we have talked about in SBM comments how the subjective nature of some topics can make it very difficult to gather objective data.

    So what were the stereotypes associated with homosexuality? Was it something objective, say professions or kinda subjective, like gender roles or oppositional behavior?

    Likewise, were the unrelated topics something like the study of global warming or stem cells, or were they the study of IQ and race? Well, not that, I’m guessing, but maybe ‘the healing value of art.’ You get my drift?

    My point being science IS better at some things. It is reasonable in some cases, to conclude that science contributes less to a certain topic (impotent, I think, is an overstatement.) If you primed a subject with an example of one of those topics, that subject may easily think of other similar topics.

    This is not to dispute the actual study…which I could not read.

    And who knows, maybe I just proved Dr. Gorski’s point by thinking of science as impotent (is some regards). I’ll try to get over it.

  9. This makes me wonder, though. I have often heard that history has shown a pendulum swing between widespread public support of science and widespread public distrust of science. If the human psyche consistently and automatically starts to distrust science when it conflicts with our beliefs, how do we end up with the cyclic nature of trust/distrust in public opinion?

    Just a thought. Not sure what to do with it, so I put it here.

  10. swienke says:

    @micheleinmichigan

    I think that the cyclic nature of public belief in science is probably just a good old case of cognitive dissonance. Granted, I’m not terribly well versed in sociology or psychology (so I may not be using the right terminology) and this is just my musings on the subject, but I find that most people have a hard time trusting that which they don’t understand. For many laypeople, science is an imposing and incomprehensible black box and so it cannot be trusted. At the same time, however, most people can’t deny the innumerable benefits that science has brought: automobiles, telephones, sanitation, modern medicine, computers, etc. The resulting cognitive dissonance means that people can often be easily swayed to one extreme or another due to personal experiences or peer pressure, and once they get to those extremes mental inertia keeps them from quickly changing their minds.

    Of course, this is just me theorizing: I don’t really have anything to back it up with.

  11. mikee says:

    “when people are presented with a study which shows results which are contrary to their belief, is it really so irrational to question, doubt or disregard it?”

    @tyro

    To question and doubt is important because, as you have pointed out, there are some shoddy studies that make it through peer review processes. However, I become quite frustrated when research is disregarded out of hand because it doesn’t mesh with one’s previous position; that is hardly the action of a sceptical thinker.
    If you look at many of the debates occurring around climate change, for example, one of the problems I see is that people are too quick to disregard some of the valid arguments of their opponents.

  12. BillyJoe says:

    “If you look at many of the debates occurring around climate change, for example, one of the problems I see is that people are too quick to disregard some of the valid arguments of their opponents.”

    That’s probably because there hardly ever is a valid argument.

    And, on the other hand, see how quickly the opponents disregard the numerous valid arguments of climate change.

  13. JMB says:

    From citations listed on the USPSTF website,

    http://www.ahrq.gov/clinic/uspstf/uspsbrca.htm

    Evidence Update Article.

    http://www.ahrq.gov/clinic/uspstf09/breastcancer/brcanup.htm

    In the abstract of the article,

    “Conclusion: Mammography screening reduces breast cancer mortality for women aged 39 to 69 years; data are insufficient for older women. False-positive mammography results and additional imaging are common. No benefit has been shown for clinical breast examination or breast self-examination.”

    No paraphrasing needed.

    The last paragraph of the evidence update article,

    “Our meta-analysis of mammography screening trials indicates breast cancer mortality benefit for all age groups from 39 to 69 years, with insufficient data for older women. False-positive results are common in all age groups and lead to additional imaging and biopsies. Women aged 40 to 49 years experience the highest rate of additional imaging, whereas their biopsy rate is lower than that for older women. Mammography screening at any age is a tradeoff of a continuum of benefits and harms. The ages at which this tradeoff becomes acceptable to individuals and society are not clearly resolved by the available evidence.”

    No paraphrasing needed.

    Supporting Article (computer model simulations); please go to the PDF:

    http://www.ahrq.gov/clinic/uspstf09/breastcancer/brcanart.pdf

    From the section on validations of computer models,

    “Each model has a different structure and assumptions and some varying input variables, so no single method can be used to validate results against an external gold standard. For instance, because some models used results from screening trials (or SEER [Surveillance, Epidemiology and End Results] data) for calibration or as input variables, we cannot use comparisons of projected mortality reductions to trial results to validate all of the models. In addition, we cannot directly compare the results of this analysis, which uses 100% actual screening for all women at specified intervals, with screening trial results in which invitation to screening and participation varied. In our previous work (7,9-11,13-15), results of each model accurately projected independently estimated trends in the absence of intervention and closely approximated modern stage distributions and observed mortality trends. Overall, using 6 models to project a range of plausible screening outcomes provides implicit cross-validation, with the range of results from the models as a measure of uncertainty. ”

    To paraphrase, they have only weak scientific validation (based on agreement among the models). Furthermore, the variation in results between the six models cannot be used to give any parameter of error of estimation; it can only be stated as variation between models.

    From Table 4 of the same article (“Benefits and Harms Comparison of Different Starting and Stopping Ages Using the Exemplar Model”), using only one of the six models, the Stanford model:

    Cancer deaths averted per 1,000 women: 8.3 for annual screening from age 40-69, and 5.4 for biennial screening from age 50-69.

    Since there are about 59 million women in the USA aged 40 to 70, breast cancer deaths averted by annual screening from age 40-69 come to 489,700. Deaths averted by biennial screening from age 50-69: 318,600. The excess number of breast cancer deaths over the lifetimes of women screened by the USPSTF strategy: 171,100.
    These are deaths occurring over 40 years (the assumed average remaining life expectancy of the age group), or about 4,200 deaths per year. Of course, this assumes 100% compliance; typically we see only 60% compliance (and less now that the USPSTF has released its recommendations). There may be a ten-year delay for the increase to reach that level. If treatment improves, or if the reported vaccine is effective, then those effects might cover up the increase. If treatment or the vaccine does not result in a significant improvement, we would probably ‘only’ see about 1,000 to 2,000 extra breast cancer deaths every year if the USPSTF recommendations are adopted. The weakness in predicting an increase in mortality partly comes from weak estimates of compliance with the currently recommended annual screening schedule.
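
    For anyone who wants to check the arithmetic above, here it is as a short Python sketch. The 8.3 and 5.4 deaths averted per 1,000 women come from Table 4 (the Stanford exemplar model); the 59 million population figure and the 40-year horizon are my own rough assumptions, as noted.

        # Back-of-the-envelope check of the figures above (assumes 100% compliance).
        women_40_to_70 = 59_000_000  # rough estimate of US women aged 40 to 70

        averted_annual_40_69 = women_40_to_70 / 1000 * 8.3    # ~489,700 deaths averted
        averted_biennial_50_69 = women_40_to_70 / 1000 * 5.4  # ~318,600 deaths averted

        excess_lifetime = averted_annual_40_69 - averted_biennial_50_69  # ~171,100 excess deaths
        excess_per_year = excess_lifetime / 40  # spread over ~40 years: roughly 4,200-4,300 per year

        print(f"{excess_lifetime:,.0f} excess deaths over a lifetime, ~{excess_per_year:,.0f} per year")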

    The term “efficient frontier” originates from Markowitz’s Nobel Prize-winning work in economics on portfolio theory and efficient frontier analysis. In the mathematical formulation, risk is a metric derived from the volatility of the stock value and has no relation to what we measure as risk in medicine. The classic efficient frontier graph has a parabolic shape. We won’t see a Markowitz bullet shape in this medical data. The USPSTF makes use of concepts of the efficient frontier in its analysis of the results of the computer simulations. The USPSTF selected the most efficient strategy, not the most effective strategy.

    Based on the articles listed on the USPSTF website, the scientific evidence states that screening mammography can reduce breast cancer deaths in the age group 40 to 69. The computer models predict that we can reduce the number of screening mammograms by 50% while maintaining “70% to 99% of the benefit of annual screening” with biennial screening. The exemplar Stanford model (the one cited in the article with more specific information) predicts that we would see about 4,000 more breast cancer deaths per year if women followed the recommendation of biennial screening from age 50-69, as opposed to annual screening from age 40-69 (based on the assumption of 100% compliance).

    This calculation comes from the evidence and computer simulations cited on the USPSTF website (and the best census data I could find). What number would you calculate as the difference in mortality expected with the new guidelines? Do you still consider the USPSTF recommendations to be the best science based medicine? To do so, you must accept the predictions of the computer models (which have weak scientific validation), and the value judgments of the authors.
