A recent article in the LA Times tells of a husband’s quest to find a treatment for his wife’s Alzheimer’s disease. This is a narrative that journalists know and love—the brave patient or loved one who won’t accept the nihilism of the medical establishment, who finds a maverick doctor willing to buck the system.
The article itself, at least, was not gushing; it tended toward a neutral tone. Such articles, however, do tend to instill in the public a very counterproductive attitude toward science and medicine. I would have preferred an exposé of a dubious clinic exploiting desperate patients by peddling false hope. That is a narrative in which journalists rarely engage.
The story revolves around Dr. Edward Tobinick and his practice of perispinal etanercept (Enbrel) for a long and apparently growing list of conditions. Enbrel is an FDA-approved drug for the treatment of severe rheumatoid arthritis. It works by inhibiting tumor necrosis factor (TNF), which is a group of cytokines that are part of the immune system and cause cell death. Enbrel, therefore, can be a powerful anti-inflammatory drug. Tobinick is using Enbrel for many off-label indications, one of which is Alzheimer’s disease (the focus of the LA Times story).
A great deal of science is funded by the US government. The total research funding for 2009 was 54.8 billion dollars (much more if you include all R&D). A breakdown by agency of total R&D shows that NIH (National Institutes of Health) funding is 28.5 billion, while NSF (National Science Foundation) funding is 4.1 billion.
There is general agreement that this expenditure is an investment in critical intellectual infrastructure for our nation and is vital to our competitiveness and standard of living. The government certainly has the right, and in fact the duty, to ensure that this money is well-invested. Government oversight is therefore understandable. Inevitably, however, politics is likely to intrude.
Representative Lamar Smith has been developing legislation that would in effect replace the peer-review process by which grants are currently awarded with a congressional process. Rather than having relevant scientists and experts judge the merits of proposed research, Smith would have politicians decide. It is difficult to imagine a more intrusive and disastrous process for public science funding.
Changing behavior is difficult. It is also an increasing priority for health care. We have entered a period of history when lifestyle choices have a dominant impact on health and longevity. People are no longer dying young of incurable infectious diseases in significant numbers. Rather they are surviving long enough to die from their bad habits.
Further, health behaviors are having a huge impact on the overall cost of health care. So the motivation is greater than ever to impact public health by influencing behavior. Yet, we are not very good at doing this.
It’s not that we’re not trying – it’s simply that having a large influence on people’s day-to-day behavior is remarkably difficult. There is ongoing research looking at how to effectively change behavior at the individual and public level, but it is complex, often conflicting, and new techniques at best yield only marginal gains.
Websites such as Lumosity.com make some bold promises about the effectiveness of computer-based brain-training programs. The site claims:
“Harness your brain’s neuroplasticity and train your way to a brighter life”
“Your brain’s abilities are unique. That’s why your Personalized Training Program adapts to fit your brain and your life goals.”
“Just 10 hours of Lumosity training can create drastic improvements. Track your own amazing progress with our sophisticated tools.”
Wow – in just 10 hours I can become smarter by playing fun video games personalized to my brain. I’m a huge fan of video games, and I would love to justify this hobby by saying that I’m training my brain while I play, but what does the scientific evidence have to say about such claims?
Not surprisingly, the published evidence is complex and mixed.
The integrity of the scientific basis of medicine is under attack from numerous fronts. It is not only the intrusion of pseudoscience and mysticism into mainstream institutions of medicine, but also attempts to distort or game the scientific process for ideological and financial reasons.
Ideological groups such as the anti-vaccine movement, or grassroots organizations promoting pseudodiseases such as chronic Lyme disease, electromagnetic sensitivity, or Morgellons, often misrepresent the scientific evidence while they lobby for special privileges to avoid the science-based standard of care within medicine.
Pharmaceutical companies, with billions on the line, have been very creative in figuring out ways to optimize their chances of getting FDA approval for their drugs, and then promoting their drugs to the medical community. Ghost-writing white papers, hiding negative trials, and designing trials to maximize positive outcomes have all been documented.
Defenders of science-based medicine are often confronted with the question (a challenge, really): what would it take to convince you that “my sacred cow treatment” works? The challenge contains a thinly veiled accusation — that no amount of evidence would convince you, because you are a nasty skeptic.
There is a threshold of evidence that would convince me of just about anything, however. In fact, I have been convinced that many scientific claims are likely to be true — sufficiently convinced to act upon the conclusion that they are true. In medicine this means that I am convinced enough to use them as a basis for medical practice.
There are many functional differences between practitioners of SBM and those who accept claims and practices that we would consider to be pseudoscience or fraud, but I was recently struck by one particular such difference — where we set the threshold of evidence before accepting a claim.
In part I of this series I discussed clinical pathways – how clinicians approach problems and the role of diagnosis in this approach. In part II I discussed the thought processes involved in deciding which diagnostic tests are worth ordering.
In this post I will discuss some of the logical fallacies and heuristics that tend to bias and distort clinical reasoning. Many of these cognitive pitfalls apply to patients as well as clinicians.
Pattern recognition and data mining
Science, including the particular manifestation we like to call science-based medicine, is about using objective methods to determine which patterns in the world are really real, vs. those that just seem to be real. The dire need for scientific methodology partly results from the fact that humans have an overwhelming tendency to automatically sift through large amounts of data looking for patterns, and we are very good at detecting patterns, even those that are just random fluctuations in that data.
This is the second in a brief series of posts about how clinicians think. My purpose here is to elucidate how skeptical principles apply to clinical decision-making, but also as background to provide context to many of the articles we publish here. In this installment I will review the factors that clinicians consider when deciding what tests to order for screening and when conducting a diagnostic workup.
The gunshot approach
Last week I discussed the “Dr. House” approach to medicine, using that particular TV character as an example of how medicine is often portrayed in fiction. Another aspect of the Dr. House image that is very misleading is his approach to diagnosis, which tends to be very linear. He decides what the most likely diagnosis is, then proceeds to either treat that entity or order a confirmatory diagnostic test. When that diagnosis fails, he then proceeds on to diagnosis B. A string of such failures then culminates in a flash of brilliance that allows him to make the actual obscure diagnosis and cure the patient. This approach is optimized for storytelling and drama, but it is not how actual clinicians operate.
At the other end of the spectrum is what doctors often refer to as “the gunshot approach” – test for everything in hopes that you hit something. Another derogatory term that doctors throw around is “a fishing expedition,” referring to a diagnostic approach that amounts to hunting around for any possible diagnosis without having a real justification.
I practice in a university clinic, which functions partly as a tertiary referral center, meaning we get referrals from other specialists. I also get many referrals for second opinions. Sometimes the entire reason for a patient’s desire for a second opinion, it seems to me, is simply that they did not understand the reasoning of the previous specialist. They were given a diagnosis and a course of treatment, but not an explanation of how their doctor arrived at those conclusions.
I am not being judgmental – different practices are under different pressures and time constraints, and it can be very difficult to gauge a patient’s understanding. Often the physician and the patient are proceeding from differing assumptions and narratives that are never expressly stated. The doctor may think they have explained the situation fully, but simply did not address mistaken assumptions they did not know their patient held.
This is part of the advantage of engaging the public about health issues and confronting pseudoscience, myths, and misconceptions – you develop a deep awareness of how the general public thinks about medicine.
“I intend to live forever. So far, so good.”
– Steven Wright
The humor in many of comedian Steven Wright’s famous one-liners is that they are simultaneously familiar and absurd. At some level we all know that we are going to die, but as long as we are still alive (or a loved one is alive) we can cling to the irrational hope, the impossible denial, that death remains a distant abstract concept rather than a near inevitability.
We all need to come to terms with death in our own private way, but often those terms are not private because they drive our use (for ourselves or others) of increasingly expensive health care. Two essays over the last year by doctors explored this issue, noting that when doctors face their own mortality they often make different health care decisions for themselves than the general public.
In February of 2012, Dr. Ken Murray wrote an essay in The Wall Street Journal – Why Doctors Die Differently. His primary thesis was that doctors choose less end-of-life care for themselves than the average patient. They do so largely because they are intimately familiar with the futility of much of what we do for patients who are likely going to die anyway. As one example, CPR has a success rate of about 8%, with only 3% of people receiving it going on to have a near-normal quality of life. Those numbers are pretty grim. Meanwhile, TV depictions of CPR are successful 75% of the time with 67% returning to normal life. Sometimes the person wakes up during the CPR, is fine, and then goes on to thwart a terrorist attack without missing a beat.