An experiment in paying through the nose for “unnecessary care”

Rats. Harriet stole what was going to be the title of this post! This is going to be something completely different from what I usually write about. Well, maybe not completely different, but different from the vast majority of my posts. As Dr. Snyder noted on Friday, it’s easy to find new woo-filled claims or dangerous, evidence-lacking trends to write about. Heck, I did it just last week, much to the continued consternation of one of our regular readers and commenters. Examining certain other health-related issues from a science-based perspective is more difficult, but I feel obligated to do it from time to time, not just for a change of pace but to stimulate the synapses and educate myself—and, I hope, you as well—about areas outside of my usual expertise.

We spend a lot of time writing about the scientific basis of medicine, clinical trials, what is and isn’t quackery, and how “complementary and alternative medicine” (CAM) subverts the scientific basis of medicine. However, SBM goes far beyond just that. At least I think of it this way. That’s why I’ve looked at issues that go more to the heart of health policy, which should be based on sound science and evidence. That evidence often takes a different form than clinical trial evidence does; think of questions such as whether Medicaid results in better health outcomes, or by how much health insurance in general improves health. As is so often the case with policy and economics, conclusions are muddled and messy. The error bars are huge, and the number of potential confounders even huger.

One of the most vexing problems confronting the US health care system is cost. As has been documented in many places, the US spends more per capita on health care than pretty much any other country in the world, for outcomes that are, at best, equivalent to those of other industrialized countries. How to fix that problem and “bend the cost curve” downward is the single biggest challenge our health care system faces. The Affordable Care Act (ACA, a.k.a. “Obamacare”) takes steps in that direction by emphasizing comparative effectiveness research and by finding ways to encourage the use of less expensive treatments that are equally effective and to discourage unnecessary procedures. The problem itself, however, long predates the ACA.

It’s also a very difficult, seemingly intractable problem. Part of what contributes to it is a host of practices that are not supported by evidence or science but that continue anyway. A classic example, of course, is the prescribing of antibiotics for viral illnesses. Patients, under the mistaken impression that antibiotics will help them, demand them, and physicians, even though they (usually) know better, are all too often willing to acquiesce because giving in is far easier and takes less time than explaining why antibiotics are not necessary. Nor are doctors entirely blameless in this, as we all too often hate giving the impression of “doing nothing.”

So what’s the answer? According to this story by Sharon Begley entitled “In healthcare experiment, patients pay more for ‘bad’ medicine” that I saw late last week, it might be behavioral modification:

When Tanner Martin, 17, developed excruciating back pain last year, he was sure he needed an X-ray to find out what was wrong. So was his mother, who worried that the pain might indicate a serious injury that could cause permanent disability.

But Konnie Martin was no ordinary parent. As chief executive officer of San Luis Valley Regional Medical Center in Alamosa, Colorado, she is at the center of an experiment, known as value-based insurance, that could transform American healthcare.

One of the central features of a value-based system is a financial “stick.” If patients insist on medical procedures that science shows to be ineffective or unnecessary, they’ll have to pay for all or most of the cost.

In Tanner’s case, when he and his mother went to the medical center, they were invited to watch a short video first. The best approach to back pain like Tanner’s, it explained, is stretching, strength-building and physical therapy; X-rays and MRIs, according to rigorous studies, are unlikely to make a difference. If they insisted on the X-ray, they would have to pay $300 on top of the basic cost.

They passed on the imaging, knowing they could change their minds if Tanner’s condition worsened. After three weeks of therapy, his back was as good as new.

As you can see from this anecdote that introduces the story and the problem, the basic idea is to make patients pay more if they insist on tests or treatments that, according to science and evidence, won’t help them. The concept seems sound on the surface, but is it? And what is the evidence? The clinical encounter described above came about as part of a two-year experiment by San Luis Valley in what is commonly referred to as “value-based health insurance” or “value-based health care.” It’s nothing new, but with the advent of health care reform, health policy wonks have been taking more interest in it.

Not too long ago, Scott Gavura and I wrote about an initiative by the American Board of Internal Medicine (ABIM) Foundation known as Choosing Wisely. It is an initiative in which a challenge, if you will, was issued to professional societies to identify five practices in their specialties that are ineffective and add no value to patient care. As I described, in oncology some of those practices involve imaging for extent of disease in breast cancer (which is very commonly done but has never been shown to improve patient outcomes), along with a variety of other practices that remain common in oncology and oncologic surgery. Value-based health care sounds to me like Choosing Wisely on steroids: the logical next step in the progression. Obviously, if a treatment or diagnostic modality has not been validated by science and clinical trials, it doesn’t make sense to pay for it.

The evidence base for value-based health insurance, however, is rather sparse. Begley notes that the San Luis Valley Health experiment involves only 725 covered members and dependents. That’s not an unreasonable number for a pilot, but it’s far from enough to tell whether value-based health care can deliver on its promises. The experiment wraps up at the end of this year, and the data are to be analyzed in 2014. Even then, at most the analysis will be able to determine whether costs decreased; a study this small and this short can say little about long-term health outcomes.

One of the experts interviewed by Begley for the piece was Dr. Mark Fendrick, director of the University of Michigan’s Center for Value-Based Insurance Design and a professor of internal medicine. Since the University of Michigan is just up the road from where I work (well, if you consider 45 miles or so “just up the road”), I figured I’d peruse its website and see what it says about value-based insurance:

The basic V-BID premise is to align patients’ out-of-pocket costs, such as copays and premiums, with the value of health services. This approach to designing benefit plans recognizes that different health services have different levels of value. By reducing barriers to high-value treatments (through lower costs to patients) and discouraging low-value treatments (through higher costs to patients), these plans can achieve improved health outcomes at any level of health care expenditure. Studies show that when barriers are reduced, significant increases in patient compliance with recommended treatments and potential cost savings result.

Note that the concept is not just to penalize care that is not scientifically supported but to reward care that is, by decreasing barriers to receiving it. For instance, this is one of the studies cited by the University of Michigan. Basically, it looked at the effect of an insurance plan reducing copayments for five chronic medication classes: angiotensin-converting enzyme (ACE) inhibitors and angiotensin receptor blockers (ARBs); beta-blockers for hypertension and heart disease; diabetes medications (such as oral therapies and insulin); HMG-CoA reductase inhibitors (statins); and inhaled corticosteroids (such as Advair for COPD and other chronic pulmonary conditions). Copayments for generic medications were reduced from $5 to zero, and copays for brand-name drugs were lowered 50 percent (from $25 to $12.50 for preferred drugs and from $45 to $22.50 for nonpreferred drugs). All patients in the treatment firm (treatment and control groups were organized by company, not by individual patient) who were already taking any of the intervention medications without a contraindication were eligible for the copay reduction, beginning with their next prescription fill. Copay relief was also available to those who were not taking a medication but were identified by the clinical alert system as patients who would benefit from it.

The study design was temporal, with results examined pre- and post-intervention. For each drug class, people were selected for the sample in a given year if they used the medication within three months of the study year and didn’t have a contraindication to its use, or if they were identified as having a clinical indication for the medication but hadn’t used it within the previous six months. The overall result was that, compared to another health plan, participants in the plan that decreased its copays for these medications showed reductions in nonadherence of as much as 7-14%. Obviously, there are a lot of shortcomings to this study, such as the question of whether the control group is adequate, but it does indicate that there might be merit to this approach. It also suggests that “value-based” health insurance might not actually be the best way to control costs, because lowering the copays actually increases costs, at least in the short term. Whether that short-term increase is later compensated for by decreased costs of caring for complications of these chronic diseases over the long haul remains to be determined.

Indeed, according to this recent review, cost savings from value-based insurance designs have been elusive:

While these copay changes to incentivize the use of certain technologies show improvement in some process indicators, they have not yet achieved cost savings.21 For example, the same Blue Cross Blue Shield study found no cost savings.19 Another recent article by Choudhry et al (2011) found no significant difference in costs between a group with no copay for cardiovascular drugs and a group with regular cost sharing.22 The lack of significant cost savings may be due to the short follow-up times in these studies, usually 1 year. More research is needed in the area of cost savings of incentivizing the use of certain services. This will become especially important as the ACA has also required that insurers provide all the USPSTF recommendations free of charge.

In lieu of long-term follow-up, modeling suggests some long-term savings, but these analyses have often focused on raising copays for “low-value” services. For example, the 3 main treatments for prostate cancer vary greatly in their average costs with no evidence that the more expensive treatments result in better outcomes. A radical prostatectomy costs $7,300, brachytherapy costs $19,000, and radiation therapy costs $46,000 on average. Newer forms of radiation treatment can cost close to $100,000 per case, and have not been shown to have any clinical advantages over any of these less expensive options, including watchful waiting. A simple VBID policy would be to modestly increase the cost sharing for these services to encourage more use of the cheapest and equally effective prostatectomy. The authors of the same prostate treatment study estimate $1.7 to $3 billion could be saved directing patients toward the lower-cost treatments.

This suggests another issue. Surgery might be the cheapest treatment for prostate cancer (yes, contrary to what many believe, surgery is often the least expensive option for treating certain diseases), but it is the most invasive and likely to have the greatest impact on quality of life, with its known complications of urinary incontinence and erectile dysfunction. True, radical prostatectomy costs even more if, as is all the rage these days, the da Vinci robot is used. It’s always better with robots, isn’t it? I’m sure Sheldon Cooper would agree—except that it isn’t always. There’s no compelling evidence that robotic prostatectomy results in better outcomes than conventional laparoscopic prostatectomy. Add to that the problem of overdiagnosis and overtreatment of prostate cancer, which raises the questions of which prostate cancers need to be treated at all and of what the very definition of “cancer” should be. The new focus on cost is going to force medicine very quickly to decide how much it values quality of life, because treatments that are effective at eradicating the disease but less effective at preserving quality of life are often cheaper.

This is just one example, of course. There are thousands of examples, ranging from common mild conditions to common serious conditions to uncommon conditions. One way, however, in which such initiatives might bear fruit and change behavior is simply by hitting doctors and patients over the head with data indicating which treatments do and do not have science and clinical evidence behind them:

The very idea that some diagnostic tests and treatments might not help patients comes as a shock to many Americans.

The Choosing Wisely message is difficult to convey to the many patients who “think that when it comes to medical care newer is better and more is better,” said Dr. Yul Ejnes of Brown University’s Alpert Medical School. “So when patients have more skin in the game (in terms of cost), they’re more likely to ask, do I really need this?”

San Luis Valley Health is self-insured, and the experiment involves only its 725 covered employees and dependents.

The experiment puts medical services in green and red “buckets.” Green is for procedures that should be encouraged because they are cheap and effective, like vaccines. Red is for expensive ones that, research shows, are usually unnecessary, ineffective or even harmful. They include endoscopy for heartburn, surgery for enlarged prostate and the imaging tests that Konnie Martin’s son declined.

Patients (and, unfortunately, a lot of doctors) have a tendency to assume that more care must be better, when in fact often it is not. We here at SBM have discussed many times how, and for which diseases, that is the case. On the other hand, policy wonks seem to make a related assumption, namely that “preventative” care will save money, an assumption for which the evidence is at best mixed. The reason is obvious: preventative medicine costs money up front and results in additional costs to treat the diseases and conditions that it uncovers, and it is not at all clear that those costs are offset in the long run by earlier treatment, by intervention to prevent the complications of chronic diseases, and by diagnosing other diseases at an earlier point in their courses. As value-based insurance advocates admit:

It has that potential, although whether the kind of patient targeting VBID proposes saves in long-term costs remains an open question. “Does it make good business sense?” asks Chernew. “It depends on how it is designed. It certainly can. Lowering copayments itself does not necessarily save money, but the programs are designed to make people healthier. We do know that the long-term benefit still requires a comprehensive look.”

Notwithstanding a lack — so far — in established long-term savings, the concept is most certainly gaining favor with large employers, including several members of the National Business Coalition on Health, who are pushing for VBID when they solicit vendors.

That is the crux of the issue, as I’ve mentioned. Will the short-term investment yield long-term savings? No one knows, and the evidence is mighty sparse. However, it’s another truism in medicine that it’s not possible to wait for perfect data. We have to make the best judgments we can with the science and data that exist now, making adjustments as new data and science are reported.

Another issue that comes up is the perception of fairness, and here communication is the hard part. Consumers love carrots like reduced copays, but they aren’t particularly fond of sticks, such as being made to pay much of the cost of “unnecessary” tests and treatments out of pocket on top of their existing health insurance premiums. Such sticks are perceived as punitive, even when employed in the service of “encouraging” patients and practitioners to use more science- and evidence-based medicine and to abandon medicine that is not supported by science. Advocates of value-based systems claim that they don’t intend to penalize patients for treatments for which the evidence is conflicting or, as it was stated in Begley’s article, “subject to debate.” However, this brings up an incredibly dicey question: How much evidence is “enough” to conclude that a test or treatment is of no value and should therefore be subject to financial disincentives? Doctors will disagree. If a physician tells a patient that he really needs a treatment, and that treatment is not covered, the patient will feel abused and betrayed, as will the doctor. The approach described above, in which the patient is offered education first, might be the way to handle this problem, but it requires physician buy-in.

More and more, I find myself having to consider issues that I never had to think much about before, such as quality of medical care and cost of services. I never really saw myself becoming a health policy wonk or a quality-of-care analyst, but increasingly such skills are becoming necessary to my career. Part of the reason is that, through a strange quirk of fate that landed me in the right place at the right time, I find myself co-director of a statewide quality initiative for breast cancer care. I’ve had to learn, and the curve is steep. It’s a fascinating and ridiculously complex area that makes some of my old lab experiments seem very straightforward in comparison. (I am, after all, used to being able to control most variables.) However, it is every bit as much a part of SBM as clinical trials and bench research. We as SBM advocates would do well not to neglect it, because it really is where the rubber meets the road.
