The integrity of the scientific basis of medicine is under attack on numerous fronts. It is not only the intrusion of pseudoscience and mysticism into mainstream medical institutions, but also attempts to distort or game the scientific process for ideological and financial reasons.
Ideological groups such as the anti-vaccine movement, or grassroots organizations promoting pseudodiseases such as chronic Lyme, electromagnetic sensitivity, or Morgellons, often misrepresent the scientific evidence while they lobby for special privileges to avoid the science-based standard of care within medicine.
Pharmaceutical companies, with billions on the line, have been very creative in figuring out ways to optimize their chances of getting FDA approval for their drugs, and then promoting their drugs to the medical community. Ghost-writing white papers, hiding negative trials, and designing trials to maximize positive outcomes have all been documented.
Defenders of science-based medicine are often confronted with the question (challenged, really): what would it take to convince you that “my sacred cow treatment” works? The challenge contains a thinly veiled accusation — no amount of evidence would convince you because you are a nasty skeptic.
There is a threshold of evidence that would convince me of just about anything, however. In fact, I have been convinced that many scientific claims are likely to be true — sufficiently convinced to act upon the conclusion that they are true. In medicine this means that I am convinced enough to use them as a basis for medical practice.
There are many functional differences between practitioners of SBM and those who accept claims and practices that we would consider to be pseudoscience or fraud, but I was recently struck by one particular such difference — where we set the threshold of evidence before accepting a claim.
In part I of this series I discussed clinical pathways – how clinicians approach problems and the role of diagnosis in this approach. In part II I discussed the thought processes involved in deciding which diagnostic tests are worth ordering.
In this post I will discuss some of the logical fallacies and heuristics that tend to bias and distort clinical reasoning. Many of these cognitive pitfalls apply to patients as well as clinicians.
Pattern recognition and data mining
Science, including the particular manifestation we like to call science-based medicine, is about using objective methods to determine which patterns in the world are really real, vs. those that just seem to be real. The dire need for scientific methodology partly results from the fact that humans have an overwhelming tendency to automatically sift through large amounts of data looking for patterns, and we are very good at detecting patterns, even those that are just random fluctuations in that data.
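The multiple-comparisons problem described above can be illustrated with a small simulation (a minimal sketch with made-up data; the variable count, sample size, and correlation cutoff are illustrative assumptions, not from the original). Generating twenty streams of pure noise and scanning every pair for a "striking" correlation reliably turns up several patterns that are nothing but chance:

```python
import random
import statistics

random.seed(1)

# Twenty unrelated "variables", each pure random noise (hypothetical data).
n_vars, n_obs = 20, 30
data = [[random.gauss(0, 1) for _ in range(n_obs)] for _ in range(n_vars)]

def corr(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) *
           sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

# Scan every pair of variables for a "significant" correlation.
# |r| > 0.36 is roughly the p < 0.05 cutoff for n = 30 observations.
pairs = [(i, j) for i in range(n_vars) for j in range(i + 1, n_vars)]
hits = [(i, j) for i, j in pairs if abs(corr(data[i], data[j])) > 0.36]

print(f"{len(hits)} 'patterns' found in pure noise "
      f"out of {len(pairs)} comparisons")
```

With 190 pairwise comparisons and a ~5% false-positive rate per comparison, close to ten spurious "patterns" are expected on average even though every variable is random. Human pattern detection does something similar, informally and constantly, which is why objective methods are needed to sort real patterns from apparent ones.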
This is the second in a brief series of posts about how clinicians think. My purpose here is to elucidate how skeptical principles apply to clinical decision-making, but also as background to provide context to many of the articles we publish here. In this installment I will review the factors that clinicians consider when deciding what tests to order for screening and when conducting a diagnostic workup.
The shotgun approach
Last week I discussed the “Dr. House” approach to medicine, using that particular TV character as an example of how medicine is often portrayed in fiction. Another aspect of the Dr. House image that is very misleading is his approach to diagnosis, which tends to be very linear. He decides what the most likely diagnosis is, then proceeds to either treat that entity or order a confirmatory diagnostic test. When that diagnosis fails, he then proceeds on to diagnosis B. A string of such failures then culminates in a flash of brilliance that allows him to make the actual obscure diagnosis and cure the patient. This approach is optimized for storytelling and drama, but is not how actual clinicians operate.
At the other end of the spectrum is what doctors often refer to as “the shotgun approach” – test for everything in the hope that you hit something. Another derogatory term that doctors throw around is “a fishing expedition,” referring to a diagnostic approach that amounts to hunting around for any possible diagnosis without having a real justification.
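One reason the shotgun approach is derided is arithmetic: even very specific tests produce false positives, and ordering many unrelated tests makes at least one false alarm likely. A minimal sketch, assuming a hypothetical 95% specificity per test (an illustrative figure, not a clinical one):

```python
# Chance that a perfectly healthy patient gets at least one
# false-positive result as the number of unrelated tests grows,
# assuming (hypothetically) 95% specificity per independent test.
specificity = 0.95

for n_tests in (1, 5, 10, 20):
    p_any_false_positive = 1 - specificity ** n_tests
    print(f"{n_tests:2d} tests -> {p_any_false_positive:.0%} "
          "chance of at least one false alarm")
```

By ten tests the chance of a spurious abnormal result is about 40%, and by twenty it is nearly two in three. Each false positive then tends to trigger further workup, which is why clinicians prefer targeted testing over the shotgun.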
I practice in a university clinic which functions partly as a tertiary referral center, which means we get referrals from other specialists. I also get many referrals for second opinions. Sometimes the entire reason for the patient’s desire for a second opinion, it seems to me, is simply that they did not understand the reasoning of the previous specialist. They were given a diagnosis and a course of treatment, but not an explanation of how their doctor arrived at those conclusions.
I am not being judgmental – different practices are under different pressures and time constraints, and it can be very difficult to gauge a patient’s understanding. Often the physician and the patient are proceeding based upon differing assumptions and narratives that are not expressly stated. The doctor may think they have explained the situation entirely, but simply did not confront misleading assumptions they were not aware their patient had.
This is part of the advantage of engaging the public about health issues and confronting pseudoscience, myths, and misconceptions – you develop a deep awareness of how the general public thinks about medicine.
“I intend to live forever. So far, so good.”
– Steven Wright
The humor in many of comedian Steven Wright’s famous one-liners is that they are simultaneously familiar and absurd. At some level we all know that we are going to die, but as long as we are still alive (or a loved one is alive) we can cling to the irrational hope, the impossible denial, that death remains a distant abstract concept, not a near inevitability.
We all need to come to terms with death in our own private way, but often those terms are not private because they drive our use (for ourselves or others) of increasingly expensive health care. Two essays over the last year by doctors explored this issue, noting that when doctors face their own mortality they often make different health care decisions for themselves than the general public.
In February of 2012, Dr. Ken Murray wrote an essay in The Wall Street Journal – Why Doctors Die Differently. His primary thesis was that doctors choose less end-of-life care for themselves than the average patient. They do so largely because they are intimately familiar with the futility of much of what we do for patients who are likely going to die anyway. As one example, CPR has a success rate of about 8%, with only 3% of people receiving it going on to have a near-normal quality of life. Those numbers are pretty grim. Meanwhile, TV depictions of CPR are successful 75% of the time with 67% returning to normal life. Sometimes the person wakes up during the CPR, is fine, and then goes on to thwart a terrorist attack without missing a beat.
A recent study published in the Proceedings of the National Academy of Sciences calls into question the standard mouse model of sepsis, trauma, and infection. The research is an excellent example of how proper science investigates its own methods.
Mouse and other animal models are essential to biomedical research. The goal is to find a specific animal model of a human disease and then conduct preliminary research on the animal model in order to determine which research is promising enough to study in humans. There are also non-animal assays and “test tube” type research that are used to screen potential treatments, but scientists still prefer a good animal model.
It is also understood that animal models are imperfect – mice are not humans, after all. Animal research is therefore not a substitute for human research. I and other SBM authors have regularly criticized proponents of dubious treatments who make clinical claims based upon preliminary animal research. Until something is studied in humans, we cannot make any reliable claims about its safety and efficacy in people.
Snake oil often resides on the apparent cutting edge of medical advance. This is a marketing strategy – exploiting the media hype that often precedes actual scientific advances (even ones that don’t eventually pan out). The slogan of this approach could be, “Turning tomorrow’s possible cures into today’s pseudoscientific snake oil.”
The strategy works because, to the average person, the claims will sound plausible and scientific and will contain familiar scientific buzz words. There is therefore a proliferation of stem cell clinics, anti-oxidant supplements, and personalized genetic medicine.
We can add neural plasticity and brain training to the list of cutting-edge pseudoscience. Neuroscientists are discovering that even the adult brain has greater capacity for plasticity than was previously thought. Plasticity is the capacity of the brain to rewire itself, to acquire new abilities or compensate for damage. Mostly this is simply a technical description of a very common phenomenon – learning. Shoot a basketball 1,000 times and (surprising to no one) you (meaning your brain) will get better at shooting baskets. Some of this is physical, such as developing the necessary strength in the involved muscles, but mostly this is the brain learning how to shoot baskets through plasticity.
“Medicine is a very religious experience. I have my religion and you have yours. It becomes difficult for us to agree on what we think works, since so much of it is in the eye of the beholder. Data is rarely clean. You find the arguments that support your data, and it’s my fact versus your fact.”
– Mehmet Oz
The above quote is from a recent article for the New Yorker by Michael Specter about Dr. Oz, currently the most popular TV doctor. Specter described this sentiment as “chilling.” To me it sounds like a manifesto – a postmodernist attack on the scientific basis of modern medicine.
In my experience, this sentiment is often at the core of belief in so-called complementary and alternative medicine (CAM). In order to seem respectable and infiltrate the institutions of medical academia, proponents of CAM will say that their treatments are evidence-based and that they are scientific. They have a serious problem, however – their treatments are not evidence-based and are often grossly unscientific. Whenever someone bothers to look at their evidence and examine their science, therefore, they start to backtrack, eventually arriving at their true position, a postmodernist dismissal of science resembling Oz’s statement above. I have heard a hundred versions of the Oz manifesto from CAM supporters.