In part I of this series I discussed clinical pathways – how clinicians approach problems and the role of diagnosis in this approach. In part II I discussed the thought processes involved in deciding which diagnostic tests are worth ordering.
In this post I will discuss some of the logical fallacies and heuristics that tend to bias and distort clinical reasoning. Many of these cognitive pitfalls apply to patients as well as clinicians.
Pattern recognition and data mining
Science, including the particular manifestation we like to call science-based medicine, is about using objective methods to determine which patterns in the world are real and which only seem to be. The dire need for scientific methodology stems partly from the fact that humans have an overwhelming tendency to automatically sift through large amounts of data looking for patterns. We are very good at detecting patterns, including ones that are nothing but random fluctuations in the data.
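How easily random noise produces convincing "patterns" can be demonstrated with a quick simulation. This sketch is purely illustrative (the parameters — 100 coin flips, a streak of six — are my own choices, not from the post): it counts how often a sequence of fair coin flips contains a run of six or more identical outcomes, the sort of streak that feels meaningful when we stumble across it in real data.

```python
import random

def longest_run(seq):
    """Length of the longest run of identical consecutive outcomes."""
    best = cur = 1
    for a, b in zip(seq, seq[1:]):
        cur = cur + 1 if a == b else 1
        best = max(best, cur)
    return best

random.seed(0)
trials = 10_000
flips_per_trial = 100

# Fraction of purely random 100-flip sequences containing a "streak"
# of 6 or more heads (or tails) in a row.
hits = sum(
    longest_run([random.random() < 0.5 for _ in range((flips_per_trial))]) >= 6
    for _ in range(trials)
)
print(f"{hits / trials:.0%} of random sequences contain a streak of 6+")
```

Run it and the answer comes out around 80% — most purely random sequences contain a streak that our pattern-hungry brains would happily interpret as meaningful.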
This is the second in a brief series of posts about how clinicians think. My purpose is to elucidate how skeptical principles apply to clinical decision-making, and also to provide background for many of the articles we publish here. In this installment I will review the factors that clinicians consider when deciding which tests to order for screening and when conducting a diagnostic workup.
The shotgun approach
Last week I discussed the “Dr. House” approach to medicine, using that particular TV character as an example of how medicine is often portrayed in fiction. Another very misleading aspect of the Dr. House image is his approach to diagnosis, which tends to be very linear. He decides what the most likely diagnosis is, then proceeds either to treat that entity or to order a confirmatory diagnostic test. When that diagnosis fails, he moves on to diagnosis B. A string of such failures then culminates in a flash of brilliance that allows him to make the actual obscure diagnosis and cure the patient. This approach is optimized for storytelling and drama, but it is not how actual clinicians operate.
At the other end of the spectrum is what doctors often refer to as “the shotgun approach” – test for everything in the hope that you hit something. Another derogatory term that doctors throw around is “a fishing expedition,” referring to a diagnostic approach that amounts to hunting around for any possible diagnosis without having a real justification.
I practice in a university clinic that functions partly as a tertiary referral center, which means we get referrals from other specialists. I also get many referrals for second opinions. Sometimes the entire reason for the patient’s desire for a second opinion, it seems to me, is simply that they did not understand the reasoning of the previous specialist. They were given a diagnosis and a course of treatment, but not an explanation of how their doctor arrived at those conclusions.
I am not being judgmental – different practices are under different pressures and time constraints, and it can be very difficult to gauge a patient’s understanding. Often the physician and the patient are proceeding based upon differing assumptions and narratives that are not expressly stated. The doctor may think they have explained the situation entirely, but simply did not confront misleading assumptions they were not aware their patient had.
This is part of the advantage of engaging the public about health issues and confronting pseudoscience, myths, and misconceptions – you develop a deep awareness of how the general public thinks about medicine.
“I intend to live forever. So far, so good.”
- Steven Wright
The humor in many of comedian Steven Wright’s famous one-liners is that they are simultaneously familiar and absurd. At some level we all know that we are going to die, but as long as we are still alive (or a loved one is alive) we can cling to the irrational hope, the impossible denial, that death remains a distant abstract concept, not a near inevitability.
We all need to come to terms with death in our own private way, but often those terms are not private, because they drive our use (for ourselves or others) of increasingly expensive health care. Two essays by doctors over the last year have explored this issue, noting that when doctors face their own mortality they often make different health care decisions for themselves than the general public does.
In February of 2012, Dr. Ken Murray wrote an essay in The Wall Street Journal – Why Doctors Die Differently. His primary thesis was that doctors choose less end-of-life care for themselves than the average patient. They do so largely because they are intimately familiar with the futility of much of what we do for patients who are likely going to die anyway. As one example, CPR has a success rate of about 8%, with only 3% of people receiving it going on to have a near-normal quality of life. Those numbers are pretty grim. Meanwhile, TV depictions of CPR are successful 75% of the time with 67% returning to normal life. Sometimes the person wakes up during the CPR, is fine, and then goes on to thwart a terrorist attack without missing a beat.
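The gap between those quoted figures is easier to feel when scaled to a cohort. A trivial back-of-the-envelope calculation, using only the percentages above (per 100 people receiving CPR):

```python
def outcomes_per_100(survival_rate, good_outcome_rate):
    """Expected survivors and near-normal recoveries per 100 CPR recipients."""
    return round(100 * survival_rate), round(100 * good_outcome_rate)

# Rates quoted above: ~8% survival with ~3% near-normal recovery in
# real life, versus 75% and 67% in TV depictions.
real_survive, real_recover = outcomes_per_100(0.08, 0.03)
tv_survive, tv_recover = outcomes_per_100(0.75, 0.67)

print(f"Real life: {real_survive} survive, {real_recover} near-normal, per 100")
print(f"On TV:     {tv_survive} survive, {tv_recover} near-normal, per 100")
```

Per hundred patients, television shows about 67 people walking away intact; reality delivers about 3.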
I am happy to announce that Science-Based Medicine has published three e-Books.
A recent study published in the Proceedings of the National Academy of Sciences calls into question the standard mouse model of sepsis, trauma, and infection. The research is an excellent example of how proper science investigates its own methods.
Mouse and other animal models are essential to biomedical research. The goal is to find a specific animal model of a human disease and then conduct preliminary research on the animal model in order to determine which research is promising enough to study in humans. There are also non-animal assays and “test tube” type research that are used to screen potential treatments, but scientists still prefer a good animal model.
It is also understood that animal models are imperfect – mice are not humans, after all. Animal research is therefore not a substitute for human research. I and other SBM authors have regularly criticized proponents of dubious treatments who make clinical claims based upon preliminary animal research. Until something is studied in humans, we cannot make any reliable claims about its safety and efficacy in people.
Snake oil often resides on the apparent cutting edge of medical advance. This is a marketing strategy – exploiting the media hype that often precedes actual scientific advances (even ones that don’t eventually pan out). The slogan of this approach could be, “Turning tomorrow’s possible cures into today’s pseudoscientific snake oil.”
The strategy works because, to the average person, the claims will sound plausible and scientific and will contain familiar scientific buzzwords. There is therefore a proliferation of stem cell clinics, antioxidant supplements, and personalized genetic medicine.
We can add neural plasticity and brain training to the list of cutting-edge pseudoscience. Neuroscientists are discovering that even the adult brain has a greater capacity for plasticity than was previously thought. Plasticity is the capacity of the brain to rewire itself, to acquire new abilities or compensate for damage. Mostly this is simply a technical description of a very common phenomenon – learning. Shoot a basketball 1000 times and (surprising to no one) you (meaning your brain) will get better at shooting baskets. Some of this is physical, such as developing the necessary strength in the involved muscles, but mostly it is the brain learning how to shoot baskets through plasticity.
“Medicine is a very religious experience. I have my religion and you have yours. It becomes difficult for us to agree on what we think works, since so much of it is in the eye of the beholder. Data is rarely clean. You find the arguments that support your data, and it’s my fact versus your fact.”
- Mehmet Oz
The above quote is from a recent New Yorker article by Michael Specter about Dr. Oz, currently the most popular TV doctor. Specter described this sentiment as “chilling.” To me it sounds like a manifesto – a postmodernist attack on the scientific basis of modern medicine.
In my experience, this sentiment is often at the core of belief in so-called complementary and alternative medicine (CAM). In order to seem respectable and infiltrate the institutions of medical academia, proponents of CAM will say that their treatments are evidence-based and that they are scientific. They have a serious problem, however – their treatments are not evidence-based and are often grossly unscientific. Whenever someone bothers to look at their evidence and examine their science, therefore, they start to backtrack, eventually arriving at their true position, a postmodernist dismissal of science resembling Oz’s statement above. I have heard a hundred versions of the Oz manifesto from CAM supporters.
In 2010, following the H1N1 pandemic and the vaccination campaign to reduce its impact, researchers noted a significant increase in a rare neurological disorder, narcolepsy, in Sweden and Finland. Since then researchers have been studying a possible association between a specific H1N1 flu vaccine, Pandemrix by GlaxoSmithKline (GSK), and sudden-onset narcolepsy. In those two countries the association seems strong, but the full story is still complicated, with many unknowns.
Narcolepsy is a neurological disorder marked by excessive sleepiness, cataplexy (sudden loss of muscle tone, usually triggered by emotions), and disordered sleep. Almost all cases are associated with low levels of hypocretin, a neuropeptide involved in sleep regulation, in the hypothalamus. Further, there is a strong HLA (human leukocyte antigen) association, specifically with DQB1*0602. HLA is a group of proteins involved in regulating immune activity. An HLA association strongly suggests that narcolepsy may be an autoimmune disease.
The current synthesis of this information is that narcolepsy occurs in genetically susceptible individuals after some environmental trigger, such as an infection, causes the immune system to attack and destroy hypocretin cells in the brain.
Science journalist Sharon Begley wrote a recent piece in The Saturday Evening Post about Placebo Power. The piece, while generally better than the typical popular writing on placebos, still falls into the standard placebo narrative that is ubiquitous in the mainstream media. The article is virtually identical to a dozen other articles I have read on placebo effects in the popular press, and most significantly fails to even question that narrative.
Begley is generally one of the better science journalists, although I have had my disagreements with her – specifically over her attitude toward the relationship between skeptics and the media. She seems to have a distorted and negative view of skeptics and does not think that the media can or should help us in our “debunking crusade.” (The term itself speaks of a fundamental misunderstanding of the modern skeptical movement.)
I have also parted ways with Begley over her view of the relationship between science and medicine. She seems to have a fairly negative view of doctors, fueled in part by her imperfect grasp of medical science. This is the risk with even the best lay science journalists – science is often complex, and it is difficult to master the nuances if you are not an expert steeped in the evidence and the community. Further, there is a tendency for people in general (including journalists) to go along with an appealing and available narrative. (For journalists, the appealing narratives are the ones that make good headlines.) These shortcomings are present throughout her recent article on placebos.