This is the second in a brief series of posts about how clinicians think. My purpose here is to elucidate how skeptical principles apply to clinical decision-making, and also to provide background context for many of the articles we publish here. In this installment I will review the factors that clinicians consider when deciding what tests to order for screening and when conducting a diagnostic workup.
The gunshot approach
Last week I discussed the “Dr. House” approach to medicine, using that particular TV character as an example of how medicine is often portrayed in fiction. Another aspect of the Dr. House image that is very misleading is his approach to diagnosis, which tends to be very linear. He decides what the most likely diagnosis is, then proceeds to either treat that entity or order a confirmatory diagnostic test. When that diagnosis fails, he then proceeds to diagnosis B. A string of such failures then culminates in a flash of brilliance that allows him to make the actual obscure diagnosis and cure the patient. This approach is optimized for storytelling and drama, but it is not how actual clinicians operate.
At the other end of the spectrum is what doctors often refer to as “the gunshot approach” – test for everything in hopes that you hit something. Another derogatory term that doctors throw around is “a fishing expedition,” referring to a diagnostic approach that amounts to hunting around for any possible diagnosis without having a real justification.
A more optimal approach lies somewhere between Dr. House’s serial approach and mindlessly testing for everything. Doctors tend to take a layered approach, using various criteria to decide which tests are worthwhile, which tests have to be done, and which are not justified. Often there is a tiered approach; if round one of diagnostic testing does not yield a positive result, then progressively less likely diagnoses can be pursued. When this approach is persistently negative, then there is the trick of knowing when to stop – when further testing will yield diminishing returns.
Criteria for testing
Here are the various criteria that diagnosticians use to determine which tests should be ordered. Sometimes there are fairly strict algorithms determined by the standard of care; at other times doctors make a judgement about the cumulative impact of all of these factors combined.
Sensitivity and specificity: Sensitivity is the probability that a test will be positive if the patient actually has the diagnosis being tested for. Failure to test positive in someone with the target disease is called a false negative. Specificity is the probability that a test will be negative if the patient does not actually have the diagnosis being tested for. Testing positive in someone without the target disease is called a false positive. The more sensitive and specific an available test, the more useful it is.
How likely is the diagnosis? The probability that a diagnosis is present also dramatically affects how useful testing is. If the probability is very low then even a very specific test will be more likely to generate a false positive than a true positive. And of course, testing is more likely to give you an answer if you are looking for a diagnosis that is likely to be present.
What is the morbidity and mortality of the disease? It is more important to diagnose, or rule out, serious illness. There are, in fact, certain entities that we simply cannot afford to miss. Benign and self-limiting diseases, on the other hand, may not be worth diagnosing since they will get better on their own anyway.
How treatable is the disease you are looking for? The golden rule of diagnostic testing is this – how will the results of your test affect your management of the patient? If you don’t know the answer to that question, don’t order the test. You never want to be in a situation where you have an abnormal result and no idea what to do with it. You should have sorted that out before ordering the test.
How invasive, expensive, inconvenient, risky, or painful is the test? These factors get to the risk vs. benefit calculation of ordering a specific test.
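The interplay among sensitivity, specificity, and the pretest probability of disease can be made concrete with a quick calculation. The sketch below (a hypothetical helper, not from the original article) uses Bayes' theorem to compute the positive predictive value, the chance that a positive result is a true positive, and shows how a low pretest probability degrades it even for an excellent test.

```python
# Hypothetical illustration: combining a test's sensitivity and
# specificity with the pretest probability of disease, via Bayes'
# theorem, to get the positive predictive value (PPV).

def positive_predictive_value(sensitivity, specificity, prevalence):
    """Probability that a positive result is a true positive."""
    true_positive_rate = sensitivity * prevalence
    false_positive_rate = (1 - specificity) * (1 - prevalence)
    return true_positive_rate / (true_positive_rate + false_positive_rate)

# The same 99%-sensitive, 99%-specific test applied in two settings:
rare_disease = positive_predictive_value(0.99, 0.99, 0.001)     # about 9%
likely_diagnosis = positive_predictive_value(0.99, 0.99, 0.30)  # about 98%
```

With a pretest probability of 1 in 1,000, only about 9% of positive results are true positives; when the diagnosis is already clinically likely (say, a 30% pretest probability), a positive result from the same test is about 98% reliable.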
Adding all these things together, it is clear that a doctor probably should not order an expensive, painful, highly invasive, and very nonspecific diagnostic test in order to diagnose a rare and benign entity that isn’t treatable anyway. We should order a simple and highly sensitive and specific test for a common, deadly, and curable disease.
These are two ends of the spectrum, and we encounter every possible permutation in between. Sometimes the standard of care demands a certain approach; at other times we are left to our own judgement. For example, in patients over age 50 who present with a new headache it is the standard of care to order a sedimentation rate as a screening test for temporal arteritis. Even though the diagnosis is unlikely (considering everyone over 50 who presents with a new headache), the test is a simple blood test, and the disease is highly treatable yet very severe when not treated, potentially leading to rapid and irreversible blindness. The test is highly sensitive but not very specific, so when it is positive it is usually followed up by a more invasive biopsy, which is highly specific.
On the other hand, when patients over 60 present with dementia the most likely diagnosis is Alzheimer’s disease (AD). We do not, however, perform any testing for AD, because at this time the only useful testing would be a brain biopsy, and this is not justified because it would not affect our management. Instead we order imaging, EEG, and blood tests looking for treatable causes of dementia, even those that are much less likely than AD. If the standard treatable causes are ruled out, then we make the diagnosis of “Alzheimer’s type dementia” and treat that symptomatically. AD is a pathological diagnosis and we cannot make it without tissue, which is not worth getting at this time. There is some benefit to having a tissue diagnosis for family history purposes, but this can be obtained at autopsy with no risk to the patient.
Another way to combine all these factors is to consider the overall risks vs. benefits of several clinical approaches. Making a specific diagnosis with a laboratory test is just one approach. Sometimes it is easier and better to simply treat a probable entity rather than test for it. If the treatment is fairly benign and effective, and the test is less so, sometimes treating without testing is the better approach. Sometimes the time it would take to get the results of the test is simply too long, and treatment decisions have to be made in the meantime.
There are also different clinical contexts for diagnostic testing. A diagnostic test (the context above) is performed on someone who is symptomatic and in whom there is reason to suspect the specific diagnosis. A screening test is performed on a population before they are symptomatic, in order either to assess the risk of developing a disease or to detect a disease very early in its course, when it is more treatable and morbidity can be prevented.
The topic of screening has been discussed many times on SBM. The counterintuitive point that often needs to be made is that more screening is not always better. It’s possible for the negative consequences of testing to outweigh the benefits. This threshold is usually determined by how likely it is for a positive screening test to be a true positive vs. a false positive. Consider a test that is 99% sensitive and 99% specific (which is better than most diagnostic tests) for a disease with a prevalence of one person in 1,000. If you screen 100,000 people, only 100 of them actually have the disease, so the test will yield 999 false positives and 99 true positives (with one false negative). The false positives outnumber the true positives ten to one. You then have to consider what the response is to a positive screening test, which may be a more invasive follow-up test or a treatment with its own risks. You even have to consider the anxiety and stress produced by the false positive results, especially if the test is for a serious or stigmatized disease.
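The screening arithmetic above is easy to check. The following sketch (hypothetical code, using the numbers from this article) tallies the expected outcomes of screening 100,000 people with a 99%-sensitive, 99%-specific test for a disease with a prevalence of 1 in 1,000.

```python
# Working through the screening arithmetic from the article:
# 99% sensitivity, 99% specificity, prevalence 1 in 1,000, 100,000 screened.

def screening_outcomes(sensitivity, specificity, prevalence, n_screened):
    """Expected counts of each test outcome in the screened population."""
    diseased = prevalence * n_screened      # 100 people have the disease
    healthy = n_screened - diseased         # 99,900 do not
    return {
        "true_positives": sensitivity * diseased,         # 99
        "false_negatives": (1 - sensitivity) * diseased,  # 1
        "false_positives": (1 - specificity) * healthy,   # 999
        "true_negatives": specificity * healthy,          # 98,901
    }

outcomes = screening_outcomes(0.99, 0.99, 1 / 1000, 100_000)
# False positives (999) outnumber true positives (99) ten to one.
```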
Target populations also have to be identified. Screening tests do not necessarily have to involve the general population. They can be targeted at high risk populations, determined by age, sex, family history, or other risk factors.
This discussion of criteria for diagnostic testing relates to Part I of this series, in which I discussed the utility of making a diagnosis at all. A specific diagnosis is not always necessary to optimally manage a patient. I find that patients, meaning the lay public, often assume that more testing is always better, and that making a highly specific diagnosis is a prerequisite to proper management.
The reality is that diagnostic testing is just another part of the risk vs. benefit calculation at the heart of all clinical decision-making. Often the most difficult decision to make is to decide which tests not to order. It is the general experience of doctors, backed up by published data, that specialists tend to order fewer tests. With greater experience and knowledge in their specific area, they are more likely to perform a targeted workup and avoid the “gunshot approach.”
This can often be difficult to explain to an anxious patient who wants a diagnosis, which in turn places a great deal of pressure on the doctor to simply order the test (combined with exposure to lawsuits when an unlikely outcome turns out to be the case). These factors combine to drive up the costs of health care, a consequence that is getting increasing attention as these costs continue to rise.
One solution, which does occur but needs to have a much greater place in the practice of medicine, is published guidelines and standards. Doctors can feel more confident in not ordering a test if published guidelines tell them it is not necessary. A published standard of care also effectively shields them from lawsuits – you cannot sue for a bad outcome, only for failing to practice within the standard of care.
Diagnostic guidelines that are well-established and published need to be communicated more thoroughly to doctors in practice, and we can also do a better job at monitoring compliance with such standards.