Clinical Decision-Making Part III

In part I of this series I discussed clinical pathways – how clinicians approach problems and the role of diagnosis in this approach. In part II I discussed the thought processes involved in deciding which diagnostic tests are worth ordering.

In this post I will discuss some of the logical fallacies and heuristics that tend to bias and distort clinical reasoning. Many of these cognitive pitfalls apply to patients as well as clinicians.

Pattern recognition and data mining

Science, including the particular manifestation we like to call science-based medicine, is about using objective methods to determine which patterns in the world are really real, vs. those that just seem to be real. The dire need for scientific methodology partly results from the fact that humans have an overwhelming tendency to automatically sift through large amounts of data looking for patterns, and we are very good at detecting patterns, even those that are just random fluctuations in that data.

Many cognitive errors that plague any attempt to investigate the world (including clinical investigations) are some variation on the basic concept of mining large amounts of data in order to find apparent patterns, and then assuming the patterns are real or using confirmation bias to reinforce the perception that the pattern is real.

We have written many times about this concept as it applies to research – making multiple comparisons, doing sub-group analyses, publication bias, trying various statistical methods until one yields a signal, poor meta-analysis, and blatantly cherry picking studies are all ways to pick signals out of the noise. But this happens in the clinical setting as well. In fact it is more of a problem in the clinical setting, which is by necessity anecdotal and often cannot benefit from blinding or repetition.

One way to mine data is to order a large number of tests. The normal range for a test is usually defined as two standard deviations around the mean of the bell curve of results from a healthy population. This means that 95% of results from healthy individuals will fall within this range, and 5% will fall outside it. Order a battery of 20 tests (like a chem-20 blood panel) and on average one will be abnormal by chance alone.
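
To make the arithmetic concrete, here is a minimal sketch in Python (assuming, for illustration, that the tests are statistically independent, which real panels only approximate) of how quickly the chance of at least one spurious “abnormal” result grows with the size of the battery:

```python
# Chance that at least one of n independent tests on a healthy patient
# falls outside the two-standard-deviation "normal" range (5% each).
def p_false_positive(n_tests, p_abnormal=0.05):
    # Complement rule: 1 minus the chance that every test comes back normal.
    return 1 - (1 - p_abnormal) ** n_tests

for n in (1, 5, 20):
    print(f"{n:2d} tests: {p_false_positive(n):.0%} chance of at least one abnormal result")
# -> 5%, 23%, and 64% for 1, 5, and 20 tests respectively
```

With a chem-20 the odds are roughly two to one that a perfectly healthy patient will have at least one “abnormal” value, and the expected number of abnormal results (20 × 0.05) is exactly the “on average one” noted above.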

This example is fairly obvious and most clinicians learn about it as medical students. There are more subtle forms, however. It is common for a patient with a difficult diagnosis to go through multiple diagnostic procedures in series until something abnormal is found. It is then tempting to conclude that the abnormal test is causally related to whatever symptoms were being investigated. It is also possible, however, that the clinician simply kept ordering tests until they got a false positive.

The same process applies to treatment. A clinician (or a patient seeking treatment from multiple clinicians) may try one treatment after another until they finally find one that works – or until their symptoms spontaneously improve in which case they will attribute that improvement to the last treatment they happened to try.

A partial solution to the above pitfalls is to order confirmatory tests to follow up the positive, or in the case of treatment to stop and then restart treatment to see if the beneficial effect goes away and then returns.

A great deal of data mining also occurs in the process of taking a history, or in the patient’s history itself. Patients, for example, will often search their memory for anything that may have caused their symptoms. They generally employ open-ended criteria (trying to think of anything interesting, rather than a specific cause), and underestimate the noise in their everyday lives.

The result is that it is almost always possible to think of something that happened, some exposure, some minor trauma, a stressor, a life change, a new environment – and then assume a causal relationship. Confirmation bias then kicks in to reinforce this belief, and that leads us to the next section.

Mechanisms of confirmation bias

Confirmation bias is a general term that refers to cognitive processes that tend to reinforce beliefs we already hold. This is mostly thought of as remembering hits (confirming information) and forgetting or dismissing misses (contradictory information).

The fallibility of human memory also aids tremendously in confirmation bias. Patients will not only differentially remember information that reinforces their narrative, their memory will become progressively contaminated and distorted to further reinforce it.

One example of this is anchoring. We have a poor memory for how long ago an event happened. We tend to “telescope”, meaning we underestimate how long ago an event occurred. When a patient says they have had their symptom for 1 year, it is likely that it has really been 2-3 years.

Patients, however, may also anchor one event in time to another event. This may be accurate and therefore helpful if they are anchoring to a public event that fixes their memory in time. Just as often, however, the anchoring is false, an artifact of their evolving narrative.

For example a patient may have mentally anchored the onset of their headaches to a minor car accident they had, because they have come to believe that the car accident is the ultimate cause of their current symptoms. This is not an unreasonable hypothesis, but the details of the patient’s memory over time will change to fit this story. After a year of telling their story to different doctors they may report that they were perfectly fine and then immediately after the accident all their symptoms began. Meanwhile their medical records may indicate that they were complaining of headaches a year prior to the accident.

That, of course, is the reason that clinicians need to keep obsessive records. They are a remedy to the vagaries of memory. This is also why anecdotal case reports are so unreliable.

Placebo effects also play into confirmation bias. A treatment that addresses a patient’s narrative is more likely to evoke a positive placebo response than one that does not, and then will be taken as confirmation that the narrative is correct.

The representativeness heuristic

There is a general cognitive tendency to estimate probabilities based more upon how well something fits the typical characteristics of a category than upon the base rate of that category.

The classic experiment presented subjects with a character profile of a college student – nerdy, good at math, likes computers, likes to spend time working alone – and then asked the subjects to estimate the probability that the student was an engineering major. Most people ranked the probability very high, because the profile was representative of a typical engineering student.

However, only 1% of students at the college are engineering majors (the base rate), and when this is taken into consideration it is much more likely that the student is not in engineering.
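
A quick Bayes’ theorem sketch shows why the base rate dominates. The conditional probabilities below are invented for illustration (the experiment did not specify them), but the conclusion survives any plausible choice:

```python
# Bayes' theorem with the 1% base rate and assumed (illustrative) numbers
# for how often the "nerdy" profile fits each group of students.
base_rate = 0.01           # 1% of students are engineering majors
p_profile_if_eng = 0.70    # assumed: most engineering majors fit the profile
p_profile_if_other = 0.10  # assumed: a minority of everyone else fits it too

p_profile = p_profile_if_eng * base_rate + p_profile_if_other * (1 - base_rate)
p_eng_given_profile = p_profile_if_eng * base_rate / p_profile

print(f"P(engineering major | fits profile) = {p_eng_given_profile:.1%}")
# P(engineering major | fits profile) = 6.6%
```

Even with a profile assumed to be seven times more common among engineering majors, the student is still roughly 14 times more likely to be in some other major.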

This reasoning also applies to diagnosis. Clinicians tend to be highly confident in a diagnosis when the patient and their symptoms are highly representative of the typical presentation. This is reasonable as far as it goes, but is incomplete and therefore flawed reasoning. You also have to consider the base rate of the disease in question.

This results in what is called the zebra diagnosis. Medical students are famous for this fallacy – coming to the conclusion that a patient has a rare disease because they have some typical features. They have not yet learned through experience that rare things are rare, and common things are common. When you consider the base rate, even a typical presentation of a rare disease may not be the most likely diagnosis. An atypical presentation of a very common disease may be far more likely.

Rather than thinking in terms of representativeness, clinicians are better off thinking in terms of base rates and predictive value. A symptom may be very typical, and therefore representative, of a particular illness (fatigue, for example, is a typical presentation of chronic fatigue syndrome). But that symptom may also be present in the healthy population or with many other diseases. The presence of that symptom may therefore not be very predictive of the particular diagnosis.
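
As a numerical sketch of that last point (the prevalence, sensitivity, and specificity here are invented purely for illustration): suppose a symptom occurs in 95% of patients with a given illness but also in 20% of everyone else, and the illness has a prevalence of 1%:

```python
def positive_predictive_value(prevalence, sensitivity, specificity):
    # Fraction of symptom-positive patients who actually have the disease.
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

ppv = positive_predictive_value(prevalence=0.01, sensitivity=0.95, specificity=0.80)
print(f"P(illness | symptom present) = {ppv:.1%}")
# P(illness | symptom present) = 4.6%
```

A symptom present in nearly every case of the disease still points to that disease less than 5% of the time, because false positives from the much larger unaffected population swamp the true positives.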

Another way to look at this is that some symptoms are more specific to certain diagnoses, while others are non-specific. Every week I have patients come into my office with a list of symptoms they pulled off the internet, convinced that they have a rare or uncommon deadly disease. They are almost always wrong, because they have fallen for this cognitive error. They don’t have the background knowledge to know which symptoms are predictive and which are not, and they are also not familiar with the base rates of various diseases.

In this example another effect also comes into play – the Forer effect. This refers to the tendency to take a general description and apply it to oneself, finding examples that confirm the description. This applies not only to horoscopes and psychic readings, but also to lists of symptoms on the internet.

The representativeness heuristic manifests in many important yet subtle ways in clinical practice. For example, if you are a woman in the emergency room having a heart attack you are less likely to be properly diagnosed and treated than if you are a man with the same presentation. This is because we are biased to think of a middle-aged man as the typical heart attack patient – they are more representative.

The toupee fallacy

You may think you can always tell when someone is wearing a toupee, but the problem with this observation is that you have no idea how often you fail to recognize a toupee. This fallacy applies to diagnosis as well. When you look for a disease you may well find it, but you have no idea how often the disease is present when you don’t look for it.

This is not completely true, as the patient may seek out a second opinion or you may refer to a specialist who does make the diagnosis. (This is why it is critical to give feedback to clinicians who initially missed a diagnosis.) Or the disease may progress and the diagnosis may declare itself.

For many benign, self-limiting, or chronic stable conditions, however, a proper diagnosis may either not be possible with current technology, or may never be made. A clinician can convince themselves, however, of uncanny diagnostic acumen if they only look at the positive outcomes and never look at the data systematically.

Related to this is the congruence bias – the tendency to test only your own hypothesis, and not competing hypotheses. In medicine this manifests when taking a history and ordering tests, which are ways of testing your clinical hypotheses.

If, for example, you suspect that poor sleep is a major contributor to a syndrome, such as headaches, you may ask all your patients with headaches about their sleep quality and find that most have poor sleep. This will seem to confirm your hypothesis. However, if you also asked your patients without headaches you might find a similar rate of poor sleep.
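
The remedy is the comparison that the congruence bias skips. A minimal sketch, with hypothetical survey counts, of what asking both groups might reveal:

```python
# Hypothetical counts from asking the same sleep question of both groups.
poor_sleep_with_headache = 60     # out of 100 patients with headaches
poor_sleep_without_headache = 55  # out of 100 patients without headaches

print(f"Poor sleep with headaches:    {poor_sleep_with_headache / 100:.0%}")
print(f"Poor sleep without headaches: {poor_sleep_without_headache / 100:.0%}")
# 60% looks confirmatory on its own; the 55% control rate shows that
# poor sleep is nearly as common in patients without headaches.
```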

Conclusion

Both patients and clinicians are subject to a long list of cognitive biases and errors in thinking that conspire to confirm beliefs about the cause and effective treatment of symptoms and illness. This is why, without the anchoring of adequate scientific methods, and a healthy dose of applied skepticism, practitioners will tend to slowly drift into a world of quackery.

Any treatment can seem to work, and any made-up disease may seem to be real, when we rely only upon our naïve reasoning. The pages of SBM are full of dramatic examples.

Other entries in this series

Part I
Part II

Posted in: Science and Medicine

15 thoughts on “Clinical Decision-Making Part III”

  1. cervantes says:

    All true enough, but of course you’re focusing on difficult, puzzling or problematic situations. Our popular picture of medicine is that it’s all about making difficult diagnoses, thanks to TV; and of course NEJM features one every week, as if this is the most important challenge that physicians routinely face. Most clinical interactions are about dealing with well-established conditions, or fairly straightforward presentations. The challenge, usually, is not so much to figure out the mystery of what’s biomedically wrong with the person, as it is to figure out the best course of action given the inevitable risk, cost and benefit tradeoffs and the difficulty many people have in following medical advice.

  2. Steven Novella says:

    cervantes – I disagree. The factors that I discuss above are relevant to just about every history I take from a patient. It really is only a matter of degree.

    Every patient history I take is distorted to some degree, and is also biased by the questions I choose to ask and how I ask them. Also, even when treating straightforward issues we have the problem of knowing what tests to order, what the results mean, and whether or not our treatments work.

    I admit my own experience is biased since I am a specialist working in a university clinic. But I would not characterize these factors as only relevant to diagnostic dilemmas or particularly difficult cases. They are relevant to every case.

    Also – I think good and experienced clinicians know a lot of this, even if they are not aware of the more general skeptical principle, what it’s called, and how it applies outside of medicine. They just learned the “clinical pearl” and apply it to their practice. So, many of these factors are being accounted for even in a routine evaluation. But even excellent clinicians will commit subtle forms of the above cognitive errors, although they can be minimized by being explicitly aware of them and vigilant in monitoring for them.

  3. cervantes says:

    What I’m saying is that most of the time, your problem is not to diagnose the cause of the complaint — it’s pretty clear, for example, that the person has osteoarthritis, or an uncomplicated RTI, or is obese, or your labs show an elevated HbA1c or whatever. What you need to diagnose is more about what the person wants, needs, understands, can do — it’s not about the Dr. House kinds of narratives. I’m talking about primary care; neurology is probably another matter.

  4. Janet says:

    I always dread the question: “So, how long has your shoulder (or whatever) been hurting?”

    I confess I usually pull something out of thin air, and then immediately start revising it. Does this information actually help diagnose whether I have arthritis, tendonitis, or rotator cuff? Maybe she is just making conversation?

    I suppose the length of time question has more meaning if the complaint is headaches or stomach or abdominal pain.

    Thanks for furthering my logical thinking education, but since coming to this blog I have found that I follow almost everything I say to my doctors with, “but I could be wrong…it might just be an anecdote/placebo/a bad study, etc.”

  5. mousethatroared says:

    Janet: “I have found that I follow almost everything I say to my doctors with, ‘but I could be wrong…it might just be an anecdote/placebo/a bad study, etc.’”

    LOL – Me too! And you know what? I have found in the last year or so, since I’ve been visiting doctors a lot more due to this connective tissue disease (or whatever) that I have, it’s generally NOT helpful…meaning my “I could be wrong, anecdote, etc.” qualifications just seem to confuse the conversation. I’ve vowed not to do it anymore.

    It does seem helpful to jot down obvious symptoms on a calendar. That way when a doctor asks how long something’s been bothering me I can say “two months” instead of umm, well I’m not sure, maybe it started last month? Since using this method, I’ve found I tend to underestimate these things.

  6. Steven Novella says:

    That is the most helpful thing – write down key bits of information: symptoms, treatments, and side effects. Keep a diary. But don’t overwhelm it with lots of information, just the important stuff. Don’t rely on your memory.

    And (while I’m at it) – bring all your medication to your doctor visits, including supplements. And keep a copy of critical test results and bring them along too. If you have any chronic illness of any complexity, keeping a little folder or ring binder with symptom diary, medication list, and important study results is very helpful.

    Don’t bring a novel to your visit, however, because that will dilute out the important info with extraneous info.

  7. mousethatroared says:

    @Steven Novella – That is very helpful information for a patient. I try to keep my test results together in a folder, and keep a journal of symptoms (although deciding what to include or not is an evolving skill), but I have always thought that I would come across as a complete hypochondriac if I brought these things to a doctor’s appointment.

    Having to visit doctors for multiple complaints (fatigue, skin complaints, flank pain, shortness of breath, joint/muscle pain) over a relatively short period of time, I already feel very aware of how very much I must look like a hypochondriac.

  8. Aidan says:

    Janet, I just went through that exact shoulder diagnosis process a few weeks ago and had the same issue. I know it’s been bothering me for a while, but I have no idea how long. Could be 6 months, could be years. The problem was that the more I thought about it the more I risked modifying my memory, so now I have absolutely no idea when it started.

    My big issue is that I am fairly active so I constantly have injuries, and it’s hard for me to remember which ones had a slow gradual onset from overuse vs. which ones I specifically injured. I tend to ignore injuries because most are self limiting, so I just work through them. But then the occasional injury persists and I can never remember how it started. With my current shoulder injury I know I had an acute injury a few years ago, but it’s hard to remember if that fully cleared up or if this is a remnant of that initial problem. Thus I end up making the same qualifications at the doctor’s office as Janet & mousethatroared described. I think I’m going to start using your idea, mousethatroared, and keep a journal of my injuries.

  9. BillyJoe says:

    Janet,

    “I always dread the question: “So, how long has your shoulder (or whatever) been hurting?”
    I confess I usually pull something out of thin air, and then immediately start revising it. Does this information actually help diagnose whether I have arthritis, tendonitis, or rotator cuff? Maybe she is just making conversation?”

    I suppose if the pain started yesterday or a few days ago, she’s not going to be getting into a long detailed discussion about it or ordering any investigations or blood tests. You might just be after a day off work.

  10. WilliamLawrenceUtridge says:

    I made an explicit point of tracking some symptoms once (the healing time of cold sores using various medications, ointments and unguents). I was startled by two things. First, I had zero idea how long these things took; I had thought on the order of weeks when it was actually far less than that. Second, nothing seemed to make the cold sore go away faster (it had normally withered within 2 days at the most), but applying pretty much anything to the wound let it heal faster and more comfortably (because it spent less time drying out, cracking and re-injuring the underlying tissues). So now I just apply generic lip balm with q-tips and I’m back to normal in about a week.

    As an exercise in self-correction however, it was fascinating. I was genuinely startled at how off my estimates and memory were. Another time I had an injury to my abdominal muscle which never seemed to get better…until I really thought about what it used to be like (at one point I had to bend double to sneeze, and it still hurt).

    It’s an exercise I think everyone should try once. A great way to confirm the fallibility of memory.

  11. mousethatroared says:

    @WLU – Like you experienced with your abdominal muscle, one thing that I find very difficult to track is when pain diminishes.

    Doctors often ask about morning stiffness and how long it lasts: “Does it last for 20 minutes or over 1 hour?” But when pain decreases one doesn’t tend to notice it (particularly in the morning when most folks are busy). It’s usually when the pain increases that one notices it. So I can say “Yes, I am less stiff in the late morning than in the early morning,” but give a timeframe in 20 or 30 minute increments? No way. I wonder who can?

  12. WilliamLawrenceUtridge says:

    I try to mark it to specifics. I remember having to double over to sneeze, so I can anchor a specific degree of pain. I remember asking my mother to look at an injury in person, and I only happen to see her at specific times during the year. I try to test and remember range of motion for joints. I note which pants I can wear, and which ones I can’t, and when I could last wear them. Often all I can summarize is binary absolutes (definitely less than this day; definitely on this day; definitely fatter than the last time I wore these pants). The only time I undertook actual record keeping with any rigor was for the cold sores, but it definitely underscored how unreliable memory is. The absolutes, in addition to letting me note improvements or degradations, also confront me with this fallibility, which in turn emphasizes how valuable it is to note those absolutes.

    I’ve never been asked the “how long does it hurt” question. If I were and it were important, I’d use a stopwatch or my cell phone :) Only way to be sure, the human memory is utter drek.

  13. mousethatroared says:

    nah – I’m not going to use a timer. I’m inclined to think that if it’s not obvious*, my subjective observations would be too unreliable to base any judgment on anyway. Better to move on to the next question.

    *plantar fasciitis is obvious – you get up and the minute you hit your feet you say WTF!? – look at the clock when you stop hobbling. ;)

  14. Lemons says:

    This is why it is critical to give feedback to clinicians who initially missed a diagnosis.

    Related: the doctor who makes a med change should be the one who can follow the patient long enough to appreciate what that intervention achieved.

    Lots of people want to admit complex behavioral patients to residential programs or inpatient units to “have their meds adjusted.” I try to tell them that the doctor who knows them best and who will be with them for the next year or so should be the one to make med changes. But they look at me like I have two heads when I say this, as does the admissions department.

    That is the most helpful thing – write down key bits of information: symptoms, treatments, and side effects. Keep a diary. But don’t overwhelm it with lots of information, just the important stuff.

    Tangential comment for the insurers out there: Please stop asking me to pay attention to stuff that is not relevant to my particular patient. Like making me do a substance abuse evaluation on a 13-year-old nonverbal autistic child who has been living in RTFs for the past 4 years. When you scare me into chasing after info I don’t need for the sake of your computer database, my scare-o-meter goes off calibration. My natural “holy f_ck” reactions to things like a creatinine of 1.2 in a child on lithium don’t last long enough to make sure a repeat lab is done and reported in a timely fashion.

    Also, dearest insurers, I recognize that you will be mining your big fat database to prove to state governments that nothing really works so my patients ought to be left to fend for themselves, wherever that may be. Please don’t be like that though.
