Articles

Applying Rigorous Science to Messy Medicine

The PowerPoint presentation that I gave at the Skeptic’s Toolbox workshop at the University of Oregon on August 7, 2009, is up on their website with the complete text of what I said. The theme of the workshop was scientific method. The title of my talk was “Tooth Fairy Science and Other Pitfalls: Applying Rigorous Science to Messy Medicine.” Click here for the link. It covers a lot of things that are pertinent to the subjects we discuss on this blog, and I thought some of our readers might enjoy it. I put in a lot of information and some good cartoons. Note: this was a talk to the general public, not an academic presentation, and it does not include citations or references.

Posted in: Science and Medicine


21 thoughts on “Applying Rigorous Science to Messy Medicine”

  1. kausikdatta says:

    What a wonderful presentation! I wish I had been there. It is very informative and well-rounded. I am going to save it for future reference.

    On another note, a woman pilot’s purse may not be hanging from the instrument panel, but on Baltimore streets I have seen women drivers – on at least three separate occasions – driving with both hands off the steering wheel, using the rearview mirror (or the mirror behind the sunshade) to apply mascara, lipstick, or other items of make-up. WHILE driving – let me ‘drive’ that point home. :D

    I hurriedly put a mile between me and that car on all such occasions. It so happens that I have been rear-ended a few times (twice while stopped at a red light) by women drivers, never by men. Just sayin’…

    On the flip side, a few of the most perfect aircraft landings (smooth, no jerk, no sudden stumble) that I have experienced have been with women pilots!

  2. Harriet Hall says:

    Women and their makeup; men and their toys. My daughter was stopped at a red light and was rear-ended by a young man who was too busy looking at his handheld GPS to look where he was going. It totalled her car. A cop saw the whole thing.

  3. daedalus2u says:

    Very nice presentation. I think there are individuals who, as a class, have a more “skeptical” mindset by nature – more literal and logical in their thinking – and that class would be people on the autism spectrum. I suggest that the autism spectrum is part of a complex continuum of thinking styles determined by brain structures, mostly laid down in utero and in early childhood as trade-offs in neurodevelopment.

    Because the maternal pelvis is limited in size, the size of the infant brain that can be successfully born through that pelvis is also limited. A large brain is essential for the two characteristic human traits: language with grammar and syntax, and tool-making and tool-using ability. Many organs are epigenetically programmed in utero, including the heart, vasculature, liver, and neuroendocrine system. It would be beyond surprising if the most important organ, the brain, were not epigenetically programmed in utero.

    I think the fundamental trade-off is between a “theory of mind” (which two individuals must share to be able to communicate) and a “theory of reality” (which is used for thinking about actual reality). These two fundamental ways of thinking are not completely compatible. A too-strong “theory of mind” tends to make type 1 errors (false positives) and see communication and human-type motivations where they aren’t, as in anthropomorphic projection of human-like attributes onto non-human entities and even objects. A too-strong “theory of reality” may make type 2 errors and be unable to understand human communication.

    I have a whole blog post about this.

    http://daedalus2u.blogspot.com/2008/10/theory-of-mind-vs-theory-of-reality.html

    I think the “theory of mind” is the mechanism behind the power of a charismatic authority figure to impose belief in the absence of facts and logic. The individual has made a type 1 error, a false positive. That is an inherent problem with pattern detection systems, and it is what illusions like the impossible box invoke: a false positive. There is some research by Laurent Mottron showing that people with ASDs are more resistant to impossible figures, in that the fact that the figures are impossible doesn’t slow down their copying speed as much as it does for people without ASDs.

  4. daedalus2u says:

    I just found a very nice paper

    http://www.perceptionweb.com/perception/editorials/p6449.pdf

    It discusses a recent paper, which I have not seen, that relates to the drawing of impossible figures by people with and without autism.

    One of the controversies in autism research is the question of whether there is a “deficit” and, if so, what that “deficit” comprises. There are some who subscribe to the “lack of central coherence” hypothesis, which postulates (more or less) that people with autism are focused on the trees and so can’t see the forest, and that the only reason they can focus on the trees is that they can’t focus on the forest.

    It is pretty well established that there is a higher degree of sensory discrimination, higher resolution, and better hidden-object finding in autism. The question is: does that higher discrimination come at the expense of an ability to cohere the details into a big picture? Mottron’s research seems to say no, it doesn’t, and this research seems to confirm that. The “lack of central coherence” hypothesis predicts that people with autism will use a more local drawing strategy than people without autism. That appears not to be the case. The last paragraph is worth reading (I have copied only the last few sentences here).

    “The results here suggest that, when these two aspects of autistic perception are in conflict, the tendency to be less affected by concepts may take precedence. Ironically, this may lead individuals with autism to use a strategy that is more ‘global’ than that characteristically adopted by those without the condition.”

    My interpretation is that there is a reduced tendency to make a type 1 error; that is, people with autism are less susceptible to the “believing is seeing” meme. They can still see the forest, but they can distinguish between different kinds of forests, even forests that don’t look like any kind of forest that has been seen before. The subtlety of whether the forest is composed of oaks or pines or maples might be missed by someone without autism, but would be less likely to be missed by someone with autism. It isn’t that people with autism can’t cohere the sensory data into a big picture, but that they have greater flexibility in what that big picture is: in this example, a 3-D abstraction or a 2-D collection of lines.

    I think this research pretty much refutes the “lack of central coherence” hypothesis. Not that it will cause its proponents to drop it. The Max Planck quote is appropriate and reflects the difficulty that people have in changing how they think about things.

    “A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.”

    It is the “big picture” of the old scientific paradigm that the opponents of the new truth can’t let go of, and until they let go of it they can’t see the new truth.

  5. Tsuken says:

    Very nice, Harriet. In fact, I wonder if you’d mind terribly my nicking a few examples from your presentation for something a little similar I’m doing this Friday?

  6. Harriet Hall says:

    Tsuken,

    Feel free to use anything you want from my presentation. That’s why I posted it – to share.

  7. Dr Benway says:

    Very nicely organized talk. I think the audience will remember key points, such as the difference between relative risk and absolute risk.

    One minor error, I think: 2 is 100% greater than 1, not 200% greater.
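    (A quick illustrative sketch of the arithmetic behind this point and of the relative-versus-absolute-risk distinction mentioned above; the numbers are invented for illustration and are not from the presentation.)

        # Hypothetical numbers only: "percent greater" and relative vs. absolute risk.

        def percent_greater(new, old):
            """How much greater `new` is than `old`, as a percentage of `old`."""
            return 100.0 * (new - old) / old

        print(percent_greater(2, 1))   # 100.0 -> 2 is 100% greater than 1
        print(percent_greater(3, 1))   # 200.0 -> 3 is 200% greater than 1

        # Relative vs. absolute risk with made-up event rates:
        control_risk = 0.02   # 2% of untreated patients have the event
        treated_risk = 0.01   # 1% of treated patients have the event
        print((control_risk - treated_risk) / control_risk)  # 0.5  -> "risk cut in half" (relative)
        print(control_risk - treated_risk)                    # 0.01 -> 1 patient per 100 benefits (absolute)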

  8. nitpicking says:

    I’ve been rear-ended twice, once by a man, once by a woman. Neither was using any gadgets or putting on makeup. One was elderly, one a teenager.

    Anecdotes are worth the glowing phosphors that depict them.

  9. Ed Whitney says:

    Very nice presentation for general audiences. There are three issues that I would wonder about, though.

    1. EBM and SBM are not appropriately conjoined by “vs.” as I see it. Did David Sackett or any other authority on EBM ever say that it should simply accept published clinical findings? Sackett described EBM as the integration of individual clinical expertise with the best available external evidence (which means all of the “SBM” stuff plus the wisdom that comes from having seen a jillion patients). The idea that EBM means uncritically accepting published findings is a novel concept of EBM, and I would not accept it without challenge.

    2. Bradford Hill never proposed “criteria” for causation in his classic paper. Rather, he called them “viewpoints” to be considered when decisions need to be made regarding environmental or occupational hazards: “Here then are nine different viewpoints from all of which we should study association before we cry causation. What I do not believe – and this has been suggested – is that we can usefully lay down some hard-and-fast rules of evidence that must be obeyed before we can accept cause and effect. None of my nine viewpoints can bring indisputable evidence for or against the cause-and-effect hypothesis and none can be required as a sine qua non. What they can do, with greater or less strength, is to help us to make up our minds on the fundamental question – is there any other way of explaining the set of facts before us, is there any other answer equally, or more, likely than cause and effect?” The use of the term “Bradford Hill criteria” has become habitual, but it is not quite what Hill had in mind.

    3. The slide on Simpson’s paradox has three rows, representing different levels of placebo responsiveness for symptoms A, B, and C. For A, the condition with the highest placebo response rate, 30 patients receive placebo and 50 patients receive homeopathy. For B, the condition with the lowest placebo response rate, 30 patients receive placebo and 20 receive homeopathy. Randomization is unlikely to produce imbalances this great in the direction needed to make Simpson’s paradox a sufficient explanation of an imbalance of the magnitude shown at the bottom (a small numerical sketch of the pooling effect follows below). If a systematic review of randomized trials of homeopathy found an advantage over placebo, the reasons must be sought elsewhere.
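    A minimal numerical sketch of the pooling effect under discussion, using invented counts (these are not the slide’s actual figures):

        # Hypothetical counts (NOT the slide's data): two conditions with different
        # placebo response rates and unequal allocation. Within each condition the
        # placebo and homeopathy response rates are identical, yet the pooled rates
        # appear to favor homeopathy.

        strata = {
            # condition: (placebo_responders, placebo_n, homeo_responders, homeo_n)
            "A (high placebo response)": (18, 30, 30, 50),  # 60% vs 60%
            "B (low placebo response)":  (6, 30, 4, 20),    # 20% vs 20%
        }

        totals = [0, 0, 0, 0]
        for name, counts in strata.items():
            pr, pn, hr, hn = counts
            print(f"{name}: placebo {pr/pn:.0%}, homeopathy {hr/hn:.0%}")
            totals = [t + c for t, c in zip(totals, counts)]

        pr, pn, hr, hn = totals
        print(f"Pooled: placebo {pr/pn:.0%}, homeopathy {hr/hn:.0%}")  # 40% vs ~49%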

    Also, I was most interested in the slide on the Cochrane reviews and the potential for bias. This would warrant a full discussion (or multiple discussions) on the SBM website. I have experienced certain vexations with Cochrane myself. Tricco AC et al (J Clin Epidemiol 2009;62:380-386) reported that non-Cochrane reviews with a meta-analysis of the primary outcome were twice as likely to have positive conclusions as Cochrane reviews with such an analysis. Cochrane reviews had a lower rate of agreement between results and conclusions than non-Cochrane reviews. Any insights about the Cochrane review process would be most welcome to me.

  10. gill1109 says:

    @Ed Whitney: Simpson’s paradox is indeed impossible in a randomized trial (more precisely: you would have to be extremely unlucky; but anyway, that bit of bad luck is covered by your 5% risk of incorrectly rejecting a true null hypothesis; i.e., you are insured against it). That is a mathematical fact.

    @Harriet Hall: wonderful presentation. I just discovered it in a roundabout way.

  11. Harriet Hall says:

    (1) EBM as originally envisioned should have been the same as SBM. Unfortunately, it has been corrupted in common use. Too many people think all it means is that you have to find a clinical study to show as evidence. That’s why we established this blog. My post for tomorrow will show an example of something that was promoted as EBM in a major journal but that was not SBM.

    (2) I think my example of all the ways science looked at smoking was a good example of how Hill’s list can be employed to good effect, whether you want to call them criteria or “things to think about.” They are called criteria in secondary sources. They are criteria for thinking, not a formal checklist that results in yea or nay. I don’t think that Hill would have disputed that the entire body of evidence for smoking and lung cancer adds up to indisputable evidence.

    (3) I don’t know why the trials of homeopathy were reportedly positive overall but not positive for any disease. Simpson’s paradox is one way that can happen, and I put it in because I thought it was a neat example of how statistics can be used to lie. If you can think of another explanation, I’d love to hear it.

  12. Ed Whitney says:

    @ gill1109: I got a risk of 0.0165 for having 30 of 80 patients assigned to placebo, assuming a binomial distribution. Rather than Simpson’s paradox, I think that various forms of assessment bias (the usual suspects) are more likely to account for the reported effects. If response rates are reported (successes in the numerator and number of patients in the denominator), then Simpson’s paradox would not present a problem.
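    For readers who want to reproduce that figure, a quick sketch of the calculation (my check, not the commenter’s code; it assumes SciPy is available):

        # Probability of an allocation at least this lopsided (30 or fewer of 80
        # patients assigned to placebo) under fair 50/50 randomization.
        from scipy.stats import binom

        p = binom.cdf(30, 80, 0.5)
        print(round(p, 4))   # roughly 0.016, consistent with the 0.0165 quoted above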

    One other question about Cochrane: Does anyone know why in the world they present page after page after page of “pooled results” when they have one and only one study for the particular result in question? I suppose they have their reasons, but it makes the reviews inconvenient and unwieldy. It looks pretty silly to have the effect size for one study, e.g., “Smith 2006” in one line, with the “Total” effect size on the bottom, identical to the result for Smith 2006!

  13. Joe says:

    Thanks for this post.

  14. Tsuken says:

    One more point, about NNT and NNH. You’ve said “When you use Tylenol for post-op pain, you have to give it to 3.6 patients for one to benefit.” Clinically at least, I think it’s important to add “additional,” as in “… 3.6 patients for one additional person to benefit.”

    An NNT of, say, 5 for an antidepressant doesn’t mean that only 1 out of every 5 patients who take it will have improvement of their depression; rather, it means that only 1 will improve because of the direct effect of the medication. A patient could easily read it as the former – as indicating an 80% failure rate – rather than the 60-odd% success rate typical of psychiatric RCTs (even if only 1 in 5 of those successes is due to the direct effects of the pill).
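    A small worked sketch of that distinction, with assumed (purely illustrative) response rates:

        # Hypothetical rates: an NNT of 5 does not mean only 1 patient in 5 improves.
        placebo_response = 0.40          # assumed: 40% improve without the drug's direct effect
        nnt = 5
        absolute_benefit = 1 / nnt       # 0.20 -> 20 percentage points attributable to the drug
        drug_response = placebo_response + absolute_benefit

        print(f"Improve on the drug:         {drug_response:.0%}")     # 60%
        print(f"Improve because of the drug: {absolute_benefit:.0%}")  # 20%, i.e. 1 in 5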

  15. Harriet Hall says:

    Tsuken,

    NNT is the number of patients that need to be treated for one to benefit from the treatment (as compared with a control in a clinical trial). That doesn’t mean patients can’t improve for other reasons. I thought that was clear but I suppose misunderstandings could arise.

  16. Ed Whitney says:

    Harriet:
    Ludtke and Rutten (J Clin Epidemiol 2008;61:1197-1204) reported that meta-analysis results of homeopathy depended on the set of analyzed trials; since there is considerable heterogeneity in trial results, the summary effect size reported in a meta-analysis can depend on the selection of trials that are entered into the pooling of results. Interesting paper that applies to other problems in meta-analysis.

  17. Diane Jacobs says:

    Harriet Hall -> “Feel free to use anything you want from my presentation. That’s why I posted it – to share.”

    Oh, it’s going to get shared, better believe it. Good good stuff. Thank you for letting it be shared. It’s at a very appropriate level for a lot of those who need the content. :)

    Diane

  18. Tsuken says:

    Harriet,

    I certainly had trouble through my training grasping exactly what NNTs did – and did not – mean. As is often the case in medicine, we need to be able to hold two quite different characteristics of the same thing in our minds: scientifically it does mean that the treatment only got one person better, but clinically it doesn’t mean that only one person will get better out of those taking the treatment. They aren’t in fact contradictory statements, but superficially they might appear so.

    Oh, and I did figure that you posted your presentation for others to use, but just thought I’d check. 8) I grabbed some stuff last night and it’s working in well – thanks very much ^_^

    Cheers,

    Raphael
