
Research, Minus Science, Equals Gossip

“A person is smart. People are stupid.”

- Agent K (Tommy Lee Jones), Men in Black

Regular readers of my blog know how passionate I am about protecting the public from misleading health information. I have witnessed first-hand many well-meaning attempts to “empower consumers” with Web 2.0 tools. Unfortunately, they were designed without a clear understanding of the scientific method, basic statistics, or in some cases, common sense.

Let me first say that I desperately want my patients to be knowledgeable about their disease or condition. The quality of their self-care depends on it, and I regularly point each of them to trusted sources of health information so that they can be fully informed about all aspects of their health. Informed decisions are founded upon good information. But when the foundation is corrupt, consumer empowerment collapses like a house of cards.

There is growing support in the consumer-driven healthcare movement for a phenomenon known as “the wisdom of crowds.” The idea is that the collective input of a large number of consumers can be a driving force for change – and is a powerful avenue for the advancement of science. It was further suggested (in a recent lecture on Health 2.0) that websites that enable patients to “conduct their own clinical trials” are the bold new frontier of research. This assertion betrays a lack of understanding of basic scientific principles. In healthcare we often say, “the plural of anecdote is not data,” and I would translate that to “research minus science equals gossip.” Let me give you some examples of Health 2.0 gone wild:

1. A rating tool was created to “empower” patients to score their medications (and user-generated treatment options) based on their perceived efficacy for their disease/condition. The treatments with the highest average scores would surely reflect the best option for a given disease/condition, right? Wrong. For every single pain syndrome (from headache to low back pain), a narcotic emerged as the most popular (and therefore “best”) treatment. If patients followed this system for determining their treatment options, we’d be swatting flies with cannon balls, not to mention being at risk for drug dependency and even abuse. Treatments must be carefully customized to the individual: genetic differences, allergy profiles, comorbid conditions, and psychosocial and financial considerations all play an important role in choosing the best treatment. Removing those subtleties from the decision-making process is a backwards step for healthcare.

2. An online tracker tool was created without the input of a clinician. The tool purported to “empower women” to manage menopause more effectively online. What on earth would a woman want to do to manage her menopause online, you might ask? Well, apparently these young software developers strongly believed that a “hot flash tracker” would be just what women were looking for. The tool provided a graphical representation of the frequency and duration of hot flashes, so that the user could present it to her doctor. One small problem: hot flash management is a binary decision. Either hot flashes are so personally bothersome that a woman decides to receive hormone therapy to reduce their effects, or they are not bothersome enough to warrant treatment. It doesn’t matter how frequently they occur or how long they last. Another ill-conceived Health 2.0 tool.

When it comes to interpreting data, Barker Bausell does an admirable job of reviewing the most common reasons why people are misled into believing that there is a cause-and-effect relationship between a given intervention and outcome. In fact, the deck is stacked in favor of a perceived effect in any trial, so it’s important to be aware of these potential biases when interpreting results. Health 2.0 enthusiasts would do well to consider the following factors that create the potential for “false positives” in any clinical trial:

1. Natural History: most medical conditions have fluctuating symptoms and many improve on their own over time. Therefore, for many conditions, one would expect improvement during the course of study, regardless of treatment.

2. Regression to the Mean: people are more likely to join a research study when their illness/problem is at its worst point in its natural history. Symptoms are therefore more likely to improve during the study than they would have been had the person joined when symptoms were less troublesome. As a result, in any given study there is a built-in tendency for participants to improve after joining (a simulation sketch follows this list).

3. The Hawthorne Effect: people behave differently and experience treatment differently when they’re being studied. For example, if people know they’re being observed regarding their work productivity, they’re likely to work harder during the research study. The enhanced results, therefore, do not reflect typical behavior.

4. Limitations of Memory: studies have shown that people report greater improvement in symptoms when asked in retrospect. Research that relies on patient recall therefore runs a higher risk of false positives.

5. Experimenter Bias: it is difficult for researchers to treat all study subjects in an identical manner if they know which patient is receiving an experimental treatment versus a placebo. Their gestures and the way that they question the subjects may set up expectations of benefit. Also, scientists are eager to demonstrate positive results for publication purposes.

6. Experimental Attrition: people generally join research studies because they expect that they may benefit from the treatment they receive. If they suspect that they are in the placebo group, they are more likely to drop out of the study. This can skew the results: the sicker patients who are not finding benefit with the placebo drop out, leaving only the milder cases from which to tease out a response to the intervention.

7. The Placebo Effect: I saved the most important artifact for last. The natural tendency for study subjects is to perceive that a treatment is effective. Previous research has shown that about 33% of study subjects will report that the placebo has a positive therapeutic effect of some sort.
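
To make the first two artifacts concrete, here is a minimal, hypothetical Python sketch (the symptom scale, the noise level, and the enrollment rule are invented purely for illustration): simulated patients with a fluctuating chronic symptom enroll only on an unusually bad day and are re-measured later, with no treatment given at all, yet the group appears to improve.

```python
import random

random.seed(42)

def symptom_score(baseline):
    """One day's symptom severity: a stable personal baseline plus random fluctuation."""
    return baseline + random.gauss(0, 2)

# Hypothetical population: each person has a personal baseline severity between 3 and 7.
population = [random.uniform(3, 7) for _ in range(10_000)]

enrolled_at, followed_up = [], []
for baseline in population:
    today = symptom_score(baseline)
    if today > baseline + 2:  # people tend to join a study on an unusually bad day
        enrolled_at.append(today)
        followed_up.append(symptom_score(baseline))  # later measurement, no treatment given

print(f"Mean severity at enrollment: {sum(enrolled_at) / len(enrolled_at):.2f}")
print(f"Mean severity at follow-up:  {sum(followed_up) / len(followed_up):.2f}")
# The follow-up mean falls back toward the personal baselines, so an uncontrolled
# "trial" would report improvement produced by nothing but the selection effect.
```

Nothing in the simulation treats anyone; the apparent improvement comes entirely from enrolling people at their worst, which is exactly why a concurrent control group is needed.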

In my opinion, the often-missing ingredient in Health 2.0 is the medical expert. Without his/her critical review and educated guidance, there is a greater risk of making irrelevant tools or perhaps even doing more harm than good. Let’s all work closely together to harness the power of the Internet for our common good. While research minus science = gossip, science minus patients = inaction.

Posted in: Public Health, Science and Medicine, Science and the Media


11 thoughts on “Research, Minus Science, Equals Gossip”

  1. LovleAnjel says:

    The hot flash tracker sounds suspiciously like one of those ‘take this quiz to find out if you need help with depression’ boxes on pharmaceutical websites.

  2. tmac57 says:

    Dr. Jones,
    Can you think of a positive way that medicine can utilize the new tools?

  3. Recovering Cam User says:

    I agree with much of what you are saying. But let me offer an example as to why this issue isn’t going away anytime soon.

    I have an endocrine dysfunction whose cause is unknown. In the last year, I’ve seen five different MDs, none of whom agree with each other about what the optimal course of treatment is. There is some overlap, but because the drugs generally used to treat my condition are cheap generics, there is no financial motivation for anyone to do the kinds of studies that would resolve these differences of opinion within the conventional medical community.

    Because of this, I have turned to patient groups and websites for additional information. I’ve learned enough from reading books like Bausell’s and blogs like this one to recognize the limits of the information I get there. But I have also gained valuable information from other patients that in several cases turned out to be more accurate than what I got from my doctors.

    Seeing the differences in experience among actual patients on various drugs also helped me understand that people respond very differently to different drugs, and that I would have to go through my own (doctor-guided) process of trial and error to discover which course of treatment was optimal for me. The patient websites were the best resource I found for details on how to know when a drug was not working or was causing more harm than good.

    I don’t think patient web sites should replace conventional medical wisdom. But, in some cases, they can be a useful adjunct for an educated patient.

  4. northerndoctor says:

    I don’t disagree with your overall sentiments, but could I please raise a polite objection to the statement:

    “Previous research has shown that about 33% of study subjects will report that the placebo has a positive therapeutic effect of some sort.”

    While there have undoubtedly been studies that have shown this magnitude of effect, and many that have shown even more, I think it is erroneous to suggest that this is the average effect. If you will excuse the link, I blogged very recently about this: the aggregated effect is usually much smaller on average. I know SBM has covered this in the past too. I regard this oft-quoted 30-33% effect as something of a placebo myth.

    I just feel that the placebo effect is often liable to be over-stated and the science does not necessarily support the case. I do whole-heartedly agree that it is a significant problem and has to be considered in any Health 2.0 model.

  5. shanek says:

    That isn’t what the Wisdom of Crowds is.

    Let’s say there’s a contest to guess the number of jelly beans in a jar, as a fundraiser. You pay however much and make your guess. The thing is, if they get enough guesses, then they can average everyone’s guesses together and get a result MUCH closer to the real value than the typical individual guess. The more guesses you get, the closer the average converges on the real value.

    This works because the information needed to estimate the number of jelly beans (how big a jelly bean is and the size of the jar) is available to every person who makes a guess. For everyone who guesses low, someone else will guess high, so the errors tend to cancel out.
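
    Here is a minimal Python sketch of that averaging effect (the jar size, the amount of individual error, and the number of guessers are made-up numbers, just for illustration):

    ```python
    import random

    random.seed(1)

    TRUE_COUNT = 1_000  # actual number of jelly beans in the jar (made-up value)

    # Each guess is the true count distorted by that person's own error, up to +/-50%.
    guesses = [TRUE_COUNT * random.uniform(0.5, 1.5) for _ in range(500)]

    crowd_estimate = sum(guesses) / len(guesses)
    typical_error = sum(abs(g - TRUE_COUNT) for g in guesses) / len(guesses)

    print(f"Crowd average:              {crowd_estimate:.0f}")
    print(f"Crowd average's error:      {abs(crowd_estimate - TRUE_COUNT):.0f}")
    print(f"Typical individual's error: {typical_error:.0f}")
    # Low guesses and high guesses largely cancel, so the crowd's error is a small
    # fraction of the typical individual's error and shrinks as more guesses come in.
    ```

    The crowd’s error shrinks roughly with the square root of the number of guesses, which is why large crowds do so well at this kind of estimation.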

    The first person to realize this was Charles Darwin’s cousin Francis Galton, when he examined the guesses at a fair contest to guess the weight of an ox, which was then butchered and weighed. A LOT of research has been done on the subject since then.

    The Wisdom of Crowds is excellent at setting prices, driving supply and demand, adjusting interest rates, and setting exchange rates: that sort of thing. It is NOT good at evaluating scientific results.

    I have a video on the subject here: http://www.youtube.com/watch?v=gKd_yG8DrkE

  6. tmac57 says:

    northerndoctor,
    I read your blog on the placebo, and I think I get your point, but if you could clarify: are you saying that the whole circumstance of how a clinical trial is presented to the patient has an additive effect beyond just handing a person a placebo and seeing if it helps?
    And if that is true, then might the various ways in which a patient experiences treatments have a greater or lesser effect?

  7. The Blind Watchmaker says:

    If the most popular treatments were the best, then we should all take antibiotics for every runny nose.

    And now for our featured attraction….MRSA!!!

  8. mandydax says:

    I’m reminded of the experiment where a group of people were told that they would each be given a mild sedative, a mild stimulant, or a placebo. They were allowed to interact with each other while waiting for the drugs to kick in, and then they were asked which of the three they thought they were given. Everyone was actually given a placebo, but many of the group started becoming drowsy or hyper, and reported that they thought they were given an actual drug. I can’t seem to find it at the moment, but I think it might have been one of James Randi’s experiments/demonstrations.

  9. pmoran says:

    “I just feel that the placebo effect is often liable to be over-stated and the science does not necessarily support the case. I do whole-heartedly agree that it is a significant problem and has to be considered in any Health 2.0 model.”

    The placebo impacts upon many matters of interest to the medical skeptic, yet it remains a very difficult thing to pin down.

    It is clear, as Val indicates, that the apparent benefits seen in the placebo arm of controlled trials involving subjective end points, and also from placebo medicines within similar clinical settings, are due to a composite of many categories of phenomena. Everyone has their own list, but mine would include —

    1. spontaneous changes in the level of symptoms (including reversion to the mean),
    2. biased patient reporting (answers of politeness and experimental subordination),
    3. memory problems (baseline pain can be remembered as being much more intense than it really was),
    4. coincidental behavioral changes (such as walking less when that hip is painful),
    5. other beneficial non-specific influences of medical interactions (explanation, reassurance, being given a sense of having control, etc.), and
    6. almost certainly, *true* placebo responses, meaning clinically desirable changes in symptom levels and illness behavior from simply being the recipient of a credible treatment ritual. (The last two could be merged, if you like.)

    The complexities don’t end there. The truth is that we have no idea what no. 6 is capable of. The clinical studies are of little help, and not merely because of all these confounders. We expect “true” placebo responses to be extremely variable in frequency and strength, depending upon many factors to do with the therapist, the kind of placebo treatment used, and another series of factors to do with the world view and circumstances of the patient. Yet in placebo-controlled clinical trials the subjects are purposely kept in the dark even as to whether they are supposed to get better or not.

    Under such confusing circumstances we perhaps need to go along with what the evidence permits, rather than be too committed as to what it shows. We need to be especially wary of bias as it is very convenient for the medical skeptic to dismiss the placebo as of no significance.

  10. northerndoctor says:

    I agree the circumstances are confusing, and I take your point about what the evidence permits. However, I would also highlight that I probably wouldn’t be willing to accept that argument when applied to CAM, so I am disinclined to accept it for placebo.

    I should be clear though – I don’t dismiss placebo as being of no significance; I am just not convinced the size of the effect is as big as is often quoted.

    tmac57 – yes, I did speculate that clinical trials might present a unique manifestation of the placebo effect because of their effects on expectation. And yes, I think that would be a fair extrapolation about patient experience. However, I do not think that necessarily can lead to a justification of CAM, for example.

    I must apologise to Val – I enjoyed this post and I think it raises some excellent points. I feel a discussion of placebo could be getting a bit off-topic so I will stop.

  11. pmoran says:

    Val did describe the placebo effect as “the most important artifact” in medical interactions, so I don’t think we were hijacking anything.

    Anyway, here is something relevant to the “people are stupid” quotation above, a theme that seems to pervade this blog:

    One of the relevancies of the placebo effect is that if we accept even the possibility that it has significant benefits for some patients under some circumstances, we may be a tad less inclined to regard the use of the placebo medicines of CAM as a sign of public folly. Access to safe placebo influences (wherever they may be found) might even prove to be making up for some of the deficiencies of science-based treatments at this point in medicine’s evolution.

    Remember also that the relevant science involves highly sophisticated, high-level analyses that most people never have to consider in their whole lives and will likely never grasp in a million years (mainly because, unlike us, they don’t care enough — it is not worth the effort for them — it is far simpler to just try out any proffered treatment and “see if it works”).

    So I wonder whether it is a healthfrauder delusion that what is needed is a better public grasp of science, or even whether the science matters for much of the CAM phenomenon.

    Use of “alternatives” probably has less to do with scientific judgements than with where people are prepared to apportion their trust when faced with unmet medical needs. We won’t win back trust or influence if we seem to be despising people for doing what they are well-programmed to do and what most do judiciously (I refer to the fact that most AM use occurs alongside “proper” medicine or after giving it a trial).

    Might as well be “in for a sheep — “. :-)
