In my last post, I told you a little story about using science- and evidence-based medicine to improve health care. The focus was primarily on preventing an iatrogenic illness, namely intravenous catheter infections. A researcher came up with a plausible idea for an intervention, studied it, and found it to be successful—the intervention was science-based in that it was proposed based on sound scientific principles; and it is now evidence-based, in that we now know that this intervention prevents infections.
But we don’t really have an easily accessible repository of evidence-based interventions. Every field has its own standards and its own literature, and it’s up to each individual practitioner to interpret the data on their own.
There are some databases, such as the USPSTF, which gives data on preventive services, and PIER, a service of the American College of Physicians, which gives information on specific diseases and includes interpretations of evidence. There’s also the Cochrane Collaboration, which helps evaluate evidence. But there is no single “go-to” site for these things, and while we follow evidence-based guidelines in much of our care, there are many times when evidence isn’t just hard to find but is actually unavailable.
Given our “evidence gap”, I was heartened to see this story in the New York Times. The Times reports that the economic stimulus bill will include over a billion dollars to fund research into medical evidence. This is a good thing, but it’s bound to be controversial. I’ve mentioned before that we need to spend money to improve our medical infrastructure, and this could be a step in the right direction.
Much of what we do in medicine is science-based, and much of it has evidence to support it, but some does not. There are plenty of open questions about how we practice medicine, and in order to deliver safe, quality care, we need answers. One example was explored by Dr. Gorski earlier. In another example, a recent study in the New England Journal of Medicine compared surgical and non-surgical therapy for arthritis of the knee. Surgery made logical, scientific sense, but it had never been carefully compared to non-surgical therapy. The study showed that conservative therapy, which is cheaper and less invasive, was just as effective as surgery. This doesn’t mean that surgery will never help, but it is strong evidence that we should treat arthritis of the knee more conservatively. Studies like this aren’t free, but if their results are reliable and repeatable, they may save us a lot of money and spare patients unnecessary surgical complications.
So the idea of investing more money in comparing medical treatments makes sense, both scientifically and economically. There are, however, a lot of predictable objections: people are worried about physician autonomy and government interference.
As Congress translated the idea into legislation, it became a lightning rod for pharmaceutical and medical-device lobbyists, who fear the findings will be used by insurers or the government to deny coverage for more expensive treatments and, thus, to ration care.
In addition, Republican lawmakers and conservative commentators complained that the legislation would allow the federal government to intrude in a person’s health care by enforcing clinical guidelines and treatment protocols.
I’m not sure that the legislation says anything about enforcing clinical guidelines, but to be fair, there is some implication along those lines.
And so what? Right now, my patients’ insurance programs do exactly the same thing—if I prescribe an angiotensin receptor blocker for blood pressure control, I’m going to be asked to justify why I am giving this rather than the cheaper and equally effective ACE inhibitor. The answer is usually that the ACE inhibitor caused side effects, but the question isn’t stupid. Why should an insurer pay more when an equally effective, cheaper alternative is available?
If we have more evidence to work with, we can continue to make even better decisions regarding care. It may seem intrusive, but it’s not very different from what we do already. And honestly, I’d like to know whether I’m more likely to get relief of my lumbar radiculopathy from surgery or from conservative therapy. I wouldn’t be offended in the least if my surgeon got a call from my insurer asking whether surgery was really my best option, as long as the answer was supported by good evidence.
It rings rather hollow when people protest against gaining more knowledge. Libertarian types complain that this will inevitably lead to government interference (and it might, and maybe it should), but to ignore the need for evidence is absurd. We, as physicians and patients, need more knowledge, not less, and we shouldn’t be afraid of where the data lead. Once we have the data, we can sit down for a good, heated discussion about what to do with them. But putting our collective heads in the sand is probably not a useful response.