Articles

Why False Positive Results Are So Common In Medicine

Have you ever been surprised and confused by what seem to be conflicting results from scientific research? Have you ever secretly wondered if the medical profession is composed of neurotic individuals who change their minds more frequently than you change your clothes? Well, I can understand why you’d feel that way because the public is constantly barraged with mixed health messages. But why is this happening?

The answer is complex, and I’d like to take a closer look at a few of the reasons in a series of blog posts. First, the human body is so incredibly complicated that we are constantly learning new things about it – how medicines, foods, and the environment impact it from the chemical to cellular to organ system level. There will always be new information, some of which may contradict previous thinking, and some that furthers it or adds a new facet to what we have already learned. Because human behavior is also so intricate, it’s far more difficult to prove a clear cause-and-effect relationship with certain treatments and interventions, due to the power of the human mind to perceive benefit where there is none (the placebo effect).

Second, the media, by its very nature, seeks to present data with less ambiguity than is warranted. R. Barker Bausell, PhD, explains this tendency:

1. Superficiality is easier to present than depth.

2. The media cannot deal with ambiguity, subtlety, and diversity (which always characterize scientific endeavors involving new areas of investigation or human behavior in general).

3. The bizarre always gets more attention than the usual.

The media is under intense pressure to find interesting sound bites to keep people’s attention. It’s not their job to present a careful and detailed analysis of the health news that they report. So it’s no wonder that a research paper suggesting that a certain herb may influence cancer cell protein expression in a Petri dish becomes: herb is new cure for cancer! Of course, many media outlets are more responsible in their reporting than that, but you get the picture.

And thirdly, the scientific method (if not carefully followed in rigorous, randomized, placebo-controlled trials) is a setup for false positive results. What does that mean? It means that the default for your average research study (before it even begins) is that there will be a positive association between intervention and outcome. So I could do a trial on, say, the potential therapeutic use of candy bars for the treatment of eczema, and it’s likely (if I’m not a careful scientist) that the outcome will show a positive correlation between the two.
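You can see this "default positive result" in action with a toy simulation (mine, not from any real trial): model a condition like eczema whose severity naturally drifts back toward a personal baseline, then run an uncontrolled before-and-after "candy bar" trial on it. The numbers below (drift rate, baseline, noise) are arbitrary assumptions chosen only to illustrate the point.

```python
import random
import statistics

random.seed(42)

# Toy model: a patient's severity score drifts toward a long-run baseline
# of 5 and fluctuates week to week -- there is no treatment effect at all.
def severity_after_weeks(start, weeks):
    s = start
    for _ in range(weeks):
        s += 0.14 * (5.0 - s) + random.gauss(0, 0.5)  # natural drift + noise
    return max(s, 0.0)

# Uncontrolled "trial": patients enroll while symptomatic (severity 7-10),
# eat candy bars, and are re-measured 8 weeks later.
before = [random.uniform(7, 10) for _ in range(200)]
after = [severity_after_weeks(b, 8) for b in before]

print(f"mean severity before: {statistics.mean(before):.2f}")
print(f"mean severity after:  {statistics.mean(after):.2f}")
# Severity falls even though the "treatment" did nothing.  Without a
# randomized placebo arm, this looks like a positive result.
```

The fix, of course, is the control group: a placebo arm would show the same improvement, revealing that the candy bars added nothing.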

There are many reasons for false positive results (e.g. wrongly ascribing effectiveness to a given therapy) in scientific research. “Experimental artifacts,” as they’re called, are very common and must be accounted for in a study’s design. For fun, let’s think about how the following factors stack the deck in favor of positive research findings (regardless of the treatment being analyzed):

1. Natural History: most medical conditions have fluctuating symptoms and many improve on their own over time. Therefore, for many conditions, one would expect improvement during the course of study, regardless of treatment.

2. Regression to the Mean: people are more likely to join a research study when their illness or problem is at its worst point in its natural history. Their symptoms are therefore more likely to improve during the study than if they had joined when symptoms were less troublesome. In any given study, then, there is a built-in tendency for participants to improve after joining.

3. The Hawthorne Effect: people behave differently and experience treatment differently when they know they’re being studied. For example, if people know their work productivity is being observed, they’re likely to work harder during the research study. The enhanced results, therefore, do not reflect typical behavior.

4. Limitations of Memory: studies have shown that people ascribe greater improvement to their symptoms in retrospect. Research that relies on patient recall therefore risks inflated false positive rates.

5. Experimenter Bias: it is difficult for researchers to treat all study subjects in an identical manner if they know which patient is receiving an experimental treatment versus a placebo. Their gestures and the way that they question the subjects may set up expectations of benefit. Also, scientists are eager to demonstrate positive results for publication purposes.

6. Experimental Attrition: people generally join research studies because they expect that they may benefit from the treatment they receive. If they suspect that they are in the placebo group, they are more likely to drop out of the study. This can skew the results: the sicker patients who find no benefit from the placebo drop out, leaving only the milder cases from which to tease out a response to the intervention.

7. The Placebo Effect: I saved the most important artifact for last. The natural tendency for study subjects is to perceive that a treatment is effective. Previous research has shown that about 33% of study subjects will report that the placebo has a positive therapeutic effect of some sort.
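Regression to the mean (artifact #2 above) is the easiest of these to demonstrate numerically. Here is a toy simulation (my own illustration, with made-up numbers): each person has a stable long-run symptom level, daily scores fluctuate around it, and people enroll only on a day when symptoms flare past a threshold. No treatment is ever given, yet the group improves by follow-up.

```python
import random
import statistics

random.seed(0)

# Each person has a stable long-run symptom level; any single day's score
# fluctuates around it.  No intervention is given at any point.
def daily_score(true_level):
    return true_level + random.gauss(0, 2.0)

enrolled_at, followed_up = [], []
for _ in range(10_000):
    level = random.uniform(3, 7)          # stable underlying severity
    score = daily_score(level)
    if score > 8:                         # people join when symptoms flare
        enrolled_at.append(score)
        followed_up.append(daily_score(level))  # later, untreated re-measure

print(f"mean score at enrollment: {statistics.mean(enrolled_at):.2f}")
print(f"mean score at follow-up:  {statistics.mean(followed_up):.2f}")
# The follow-up mean is markedly lower purely because enrollment selected
# unusually bad days -- regression to the mean, with zero treatment effect.
```

This is exactly why an uncontrolled study of people enrolled at their worst will almost always "show improvement," and why a concurrent control group is non-negotiable.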

So my dear readers – if the media wants to get your attention with exaggerated representations of research findings, and the research findings themselves are stacked in favor of reporting an effect that isn’t real… then how on earth are we to know what to make of health news? Luckily, R. Barker Bausell has explained all of this really well in his book and I will attempt to summarize the following principles in the next few posts:

1. The importance of credible scientific evidence

2. The importance of plausible scientific evidence

3. The importance of reproducible scientific evidence

Posted in: Science and the Media


10 thoughts on “Why False Positive Results Are So Common In Medicine”

  1. superdave says:

    from the Onion
    “NEW YORK—Researchers at the Mount Sinai School of Medicine were hardly able to stifle their laughter Tuesday while administering a placebo to 25 patients participating in a single-blind trial of an experimental new emphysema drug. “Did you see Participant No. 425? He was like, ‘I think it’s really working, Doc,’” Dr. Lewis Rodriguez said to a team of snickering pulmonary specialists. “How gullible can you get? I can’t believe those guys think they’re actually getting CDDO-Im.” Although the trial is expected to run for two more months, Rodriguez told reporters that he almost could not wait to analyze the data, compile the results, publish the findings, and see the looks on their stupid faces.”

  2. durvit says:

    Dr Ben Goldacre has a very good discussion of the above points in his Bad Science book.

  3. isles says:

    The preference of journals for positive studies may also play a part in the public getting the impression that science is continually discovering new connections. It’s not that interesting to read about a finding that two things you wouldn’t think would be related are, in fact, not related, and a negative study is also probably less likely to get media coverage that would let the public know about it.

  4. Val Jones says:

    Excellent point, isles. I totally agree. :) I once suggested creating a database/journal of negative studies that would be indexed in PubMed (to make sure that there was a pipeline for their publication). I was laughed out of the board room unfortunately.

  5. “So it’s no wonder that a research paper suggesting that a certain herb may influence cancer cell protein expression in a Petri dish becomes: herb is new cure for cancer!”

    Actually, it’s often worse than that. It’s more like: A preliminary research paper is published suggesting that a certain chemical substance (Chem A) may influence cancer cell protein expression in a Petri dish. Herb/Food/Alcoholic Beverage B is known to contain Chem A.

    Headline becomes: “Herb/Food/Alcoholic Beverage B is new cure for cancer”, but no data exists to show that Herb/Food/Alcoholic Beverage B containing Chem A has effects similar to Chem A alone, no data exists that Chem A has any anti-cancer effects in humans, and perhaps Herb/Food/Alcoholic Beverage B has already been shown to increase cancer risk in humans due to a different chemical it contains.

    People then rush out and begin consuming Herb/Food/Alcoholic Beverage B like it was water and they lived in the desert. Herb/Food/Alcoholic Beverage B sales surge until it is shown to be more toxic than nuclear waste or until the next silver bullet comes along.

  6. durvit says:

    Ben Goldacre has a list of journals that accept negative studies (foot of post).

  7. KarlS says:

    Patient selection bias comes to mind as a contributing factor.

    Investigators choose patients with lower-risk disease, and the younger, fitter patients are the ones most likely to inquire about studies and able to travel to the major centers. The result is a study population that’s not equivalent to the general patient population – one of the reasons that historical controls are unreliable.

Comments are closed.