“Patient-centered” decision-making is a new buzzword in medicine. It is shorthand for a general approach to care that puts the patient’s experience and needs at the center, rather than the needs of the physician or the system.
While this is an effective marketing term, and a useful principle as far as it goes, it is a bit simplistic as a guide to medical practice. It needs to be viewed in the context of the overall medical infrastructure and the net effect that specific practices have on the cost and effectiveness of medical care.
A 2012 NEJM editorial by Charles Bardes nicely summarizes the issues. He notes that patient-centered care represents the next step in a general trend (a good trend) in the medical profession over the last half-century.
The fifth edition of the Diagnostic and Statistical Manual (DSM-5) was recently released. This is the standard reference of mental disorders and psychiatric illnesses published by the American Psychiatric Association (APA).
As with previous editions there is a great deal of discussion and wringing of hands over the details – which disorders were created or eliminated. For example, hoarding is now considered its own disorder, rather than part of obsessive-compulsive disorder (it has its own reality TV show, why not its own DSM diagnosis?).
This time around, however, the debate over the DSM goes much deeper than the particulars of specific diagnoses. The real debate is about the very existence of the DSM – its validity and utility. While this discussion is nothing new, it has taken on an unprecedented dimension with the rejection of the DSM by the National Institute of Mental Health (NIMH). Director Thomas Insel wrote:
The goal of this new manual, as with all previous editions, is to provide a common language for describing psychopathology. While DSM has been described as a “Bible” for the field, it is, at best, a dictionary, creating a set of labels and defining each. The strength of each of the editions of DSM has been “reliability” – each edition has ensured that clinicians use the same terms in the same ways. The weakness is its lack of validity. Unlike our definitions of ischemic heart disease, lymphoma, or AIDS, the DSM diagnoses are based on a consensus about clusters of clinical symptoms, not any objective laboratory measure. In the rest of medicine, this would be equivalent to creating diagnostic systems based on the nature of chest pain or the quality of fever. Indeed, symptom-based diagnosis, once common in other areas of medicine, has been largely replaced in the past half century as we have understood that symptoms alone rarely indicate the best choice of treatment.
The Star Trek universe is a fairly optimistic vision of the future. It’s what we would like it to be – an adventure fueled by advanced technology. In the world of Star Trek technology makes life better and causes few problems.
One of the most iconic examples of Star Trek technology is the medical tricorder. What doctor has not fantasized about walking up to a sick patient, waving a handheld device over them, and then having access to all the medical information they could possibly want? No needle sticks for blood tests, no invasive tests, no scary MRI machines, and no waiting. The information is available instantly.
It’s clear that we are heading in that direction as technology progresses, but how close are we?
The Smartphone in Medicine
Many people in developed nations today are walking around with supercomputers in their pocket – their smartphone. Technological advances are often strange – the ones we anticipate seem to never come, but then life-changing technology creeps up on us.
A recent article in the LA Times tells of a husband’s quest to find a treatment for his wife’s Alzheimer’s disease. This is a narrative that journalists know and love—the brave patient or loved one who won’t accept the nihilism of the medical establishment, who finds a maverick doctor willing to buck the system.
The article itself, at least, was not gushing; it tended toward a neutral tone. But such articles do tend to instill in the public a very counterproductive attitude toward science and medicine. I would have preferred an exposé of a dubious clinic exploiting desperate patients by peddling false hope. That is a narrative in which journalists rarely engage.
The story revolves around Dr. Edward Tobinick and his practice of perispinal etanercept (Enbrel) for a long and apparently growing list of conditions. Enbrel is an FDA-approved drug for the treatment of severe rheumatoid arthritis. It works by inhibiting tumor necrosis factor (TNF), which is a group of cytokines that are part of the immune system and cause cell death. Enbrel, therefore, can be a powerful anti-inflammatory drug. Tobinick is using Enbrel for many off-label indications, one of which is Alzheimer’s disease (the focus of the LA Times story).
A great deal of science is funded by the US government. The total research funding for 2009 was $54.8 billion (much more if you include all R&D). A breakdown by agency of total R&D shows that NIH (National Institutes of Health) funding is $28.5 billion while NSF (National Science Foundation) funding is $4.1 billion.
There is general agreement that this expenditure is an investment in critical intellectual infrastructure for our nation and is vital to our competitiveness and standard of living. The government certainly has the right, and in fact the duty, to ensure that this money is well-invested. Government oversight is therefore understandable. Inevitably, however, politics is likely to intrude.
Representative Lamar Smith has been developing legislation that would in effect replace the peer-review process by which grants are currently awarded with a congressional process. Rather than having relevant scientists and experts decide on the merits of proposed research, Smith would have politicians decide. It is difficult to imagine a more intrusive and disastrous process for public science funding.
Changing behavior is difficult. It is also an increasing priority for health care. We have entered a period of history when lifestyle choices have a dominant impact on health and longevity. People are no longer dying young of incurable infectious diseases in significant numbers. Rather, they are surviving long enough to die from their bad habits.
Further, health behaviors are having a huge impact on the overall cost of health care. So the motivation is greater than ever to impact public health by influencing behavior. Yet, we are not very good at doing this.
It’s not that we’re not trying – it’s simply that having a large influence on people’s day-to-day behavior is remarkably difficult. There is ongoing research looking at how to effectively change behavior at the individual and public level, but it is complex, often conflicting, and new techniques at best yield only marginal gains.
Websites such as Lumosity.com make some bold promises about the effectiveness of computer-based brain-training programs. The site claims:
“Harness your brain’s neuroplasticity and train your way to a brighter life”
“Your brain’s abilities are unique. That’s why your Personalized Training Program adapts to fit your brain and your life goals.”
“Just 10 hours of Lumosity training can create drastic improvements. Track your own amazing progress with our sophisticated tools.”
Wow – in just 10 hours I can become smarter by playing fun video games personalized to my brain. I’m a huge fan of video games, and I would love to justify this hobby by saying that I’m training my brain while I play, but what does the scientific evidence have to say about such claims?
Not surprisingly, the published evidence is complex and mixed.
The integrity of the scientific basis of medicine is under attack from numerous fronts. It is not only the intrusion of pseudoscience and mysticism into mainstream institutions of medicine, but also attempts to distort or game the scientific process for ideological and financial reasons.
Ideological groups such as the anti-vaccine movement, or grassroots organizations promoting pseudodiseases such as chronic Lyme disease, electromagnetic sensitivity, or Morgellons, often misrepresent the scientific evidence while they lobby for special privileges to avoid the science-based standard of care within medicine.
Pharmaceutical companies, with billions on the line, have been very creative in figuring out ways to optimize their chances of getting FDA approval for their drugs, and then promoting their drugs to the medical community. Ghostwriting white papers, hiding negative trials, and designing trials to maximize positive outcomes have all been documented.
Defenders of science-based medicine are often confronted with the question (challenged, really): what would it take to convince you that “my sacred cow treatment” works? The challenge contains a thinly veiled accusation — no amount of evidence would convince you because you are a nasty skeptic.
There is a threshold of evidence that would convince me of just about anything, however. In fact, I have been convinced that many scientific claims are likely to be true — sufficiently convinced to act upon the conclusion that they are true. In medicine this means that I am convinced enough to use them as a basis for medical practice.
There are many functional differences between practitioners of SBM and those who accept claims and practices that we would consider to be pseudoscience or fraud, but I was recently struck by one particular such difference — where we set the threshold of evidence before accepting a claim.
In part I of this series I discussed clinical pathways – how clinicians approach problems and the role of diagnosis in this approach. In part II I discussed the thought processes involved in deciding which diagnostic tests are worth ordering.
In this post I will discuss some of the logical fallacies and heuristics that tend to bias and distort clinical reasoning. Many of these cognitive pitfalls apply to patients as well as clinicians.
Pattern recognition and data mining
Science, including the particular manifestation we like to call science-based medicine, is about using objective methods to determine which patterns in the world are really real, vs. those that just seem to be real. The dire need for scientific methodology partly results from the fact that humans have an overwhelming tendency to automatically sift through large amounts of data looking for patterns, and we are very good at finding them, even when they are nothing but random fluctuations in the data.
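This tendency has a statistical counterpart that is easy to demonstrate. The following sketch (illustrative only; the numbers and variable names are made up for the example) sifts hundreds of purely random "factors" against a purely random "outcome" and counts how many correlations cross a conventional significance-style threshold by chance alone:

```python
import random

random.seed(1)

N_PATIENTS = 200
N_FACTORS = 500   # hypothetical lifestyle "factors" -- all pure noise
SIG_R = 0.14      # roughly the |r| needed for p < 0.05 when n = 200

# A purely random "outcome" -- by construction, no factor truly influences it.
outcome = [random.gauss(0, 1) for _ in range(N_PATIENTS)]

def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Mine the noise: test every random factor against the random outcome.
hits = 0
for _ in range(N_FACTORS):
    factor = [random.gauss(0, 1) for _ in range(N_PATIENTS)]
    if abs(pearson_r(factor, outcome)) > SIG_R:
        hits += 1

print(f"{hits} of {N_FACTORS} random factors look 'significant'")
```

Roughly 5% of the factors (on the order of 25 of 500) will appear "significant" despite the data being nothing but noise. That is the core of the problem: search enough data and patterns appear for free, which is exactly why objective methods such as pre-specified hypotheses and correction for multiple comparisons are needed to separate real patterns from apparent ones.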