
Keeping the customer satisfied

One thing about blogging once a week or so, compared to my other blogging gig, which is usually close to every day and occasionally more often, is that I really can’t cover everything I want to cover for this blog. Even more so than at my not-so-super-secret other blogging gig, I have to pass on topics that could be fodder for excellent, even awesome posts—or, self-congratulating hyperbole aside, at least posts reasonably interesting to the readers of this blog. When that happens, I can only hope that one of my co-bloggers picks up on it and gives the subject matter the treatment it cries out for. Or, sometimes, such subject matter just has to be dealt with elsewhere by me—or not at all. Even a hypercaffeinated blogger like myself has limits.

Sometimes, however, I actually get a second chance. In other words, I get a chance to revisit a topic that I passed by. Usually, this happens when something new happens that gives me an excuse to revisit the topic. So it was last week, when I came across an op-ed in the New York Times by an oncology nurse named Theresa Brown. Her article was titled, appropriately enough, Hospitals Aren’t Hotels. Why the title is so appropriate will become apparent in a moment. But first, let’s sample Brown’s article a bit, because it brings up an issue that is very pertinent to science-based medicine:

“You should never do this procedure without pain medicine,” the senior surgeon told a resident. “This is one of the most painful things we do.”

She wasn’t scolding, just firm, and she was telling the truth. The patient needed pleurodesis, a treatment that involves abrading the lining of the lungs in an attempt to stop fluid from collecting there. A tube inserted between the two layers of protective lung tissue drains the liquid, and then an irritant is slowly injected back into the tube. The tissue becomes inflamed and sticks together, the idea being that fluid cannot accumulate where there’s no space.

I have watched patients go through pleurodesis, and even with pain medication, they suffer. We injure them in this controlled, short-term way to prevent long-term recurrence of a much more serious problem: fluid around the lungs makes it very hard to breathe.

A lot of what we do in medicine, and especially in modern hospital care, adheres to this same formulation. We hurt people because it’s the only way we know to make them better. This is the nature of our work, which is why the growing focus on measuring “patient satisfaction” as a way to judge the quality of a hospital’s care is worrisomely off the mark.

As a surgical resident, I rotated on the thoracic surgery service on multiple occasions over the course of my five clinical years. As part of my duties on that service, I’d see all the chest tube and pleurodesis consults, put in most, if not all, of the chest tubes, and do most of the pleurodesis procedures, at first supervised by the cardiothoracic fellow and then on my own. They’re not difficult procedures (I had learned to insert chest tubes when I was an intern on the trauma service), and learning to do pleurodesis wasn’t difficult either. I’m not sure whether pleurodesis is the most painful procedure I did, but it certainly wasn’t pleasant. Think of it this way: the goal of the procedure is to suck out all of the pleural effusion (a collection of fluid surrounding the lung) and then to get the pleura lining the lung to stick to the pleura lining the inside of the chest wall by, in essence, “roughing them up” so that they become very inflamed and stick together when they heal. The “irritant” (as Brown puts it) that we used was nothing fancier than a slurry of sterile talc, although other irritants can be used, such as bleomycin, tetracycline, and povidone-iodine.

The point, of course, is that we as doctors sometimes have to be, as Nick Lowe once put it, “cruel to be kind in the right measure.” As a surgeon, I’m acutely aware of this necessity. Surgery hurts. There’s just no way to get around it. The best we can do is to try to minimize the pain we cause by slicing people open and forcibly rearranging or removing parts of their anatomy for (hopefully) therapeutic intent; we can’t eliminate the pain. But even surgery isn’t the worst that we as physicians inflict upon patients in the name of trying to heal them. For example, my colleagues in medical oncology administer highly toxic chemicals to patients, chemicals that make their hair fall out, temporarily weaken their immune systems (rendering them susceptible to life-threatening infections), cause neuropathy, and produce all sorts of other adverse effects. Think of bone marrow transplants with stem cell rescue. It’s a procedure in which doctors literally destroy the patient’s existing bone marrow (and thus the vast majority of his immune system) and then reconstitute it using either the patient’s own hematopoietic stem cells or marrow obtained from a donor. It’s unpleasant, takes weeks, and puts the patient at risk for death from the procedure itself and, when the marrow comes from a donor rather than the patient, at risk for graft-versus-host disease.

We as physicians don’t do these things to patients because we like to cause suffering. We do them because what science tells us about the diseases being treated also tells us that these are the sorts of things we have to do to save the lives of patients with serious diseases. In the case of cancer, for instance, gentler interventions just don’t work as well. Contrary to how some quackery propagandists like to portray physicians, we don’t do these things because we get off on it. We do them because the benefits outweigh the risks, and because we can save lives. Meanwhile, we are continually doing research to find treatments that are more efficacious, as well as less risky and unpleasant. It might well be that doctors in the far future will recoil in horror at our current treatments, much the way Dr. McCoy did when he encountered dialysis and chemotherapy, but they’re the best that we have now, and we are working to improve them.

Brown is correct when she points out that these days patient satisfaction is becoming more and more important in judging how well hospitals and physicians are doing. She also points out that by October 2012—yes, this year—Medicare reimbursements will be linked in part to a patient satisfaction survey administered by the government known as the Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) survey. While the survey itself, as Brown also points out, measures aspects of health care that are important, such as communication among physicians, nurses, staff, and the patient, how well the patient was educated about his condition, and how clear the discharge instructions were, the underlying assumption behind such surveys is that patient satisfaction correlates with high-quality care. But is that true? Brown has her doubts, as do I:

These are important questions. But implied in the proposal is a troubling misapprehension of how unpleasant a lot of actual health care is. The survey measures the “patient experience of care” to generate information important to “consumers.” Put colloquially, it evaluates hospital patients’ level of satisfaction.

The problem with this metric is that a lot of hospital care is, like pleurodesis, invasive, painful and even dehumanizing. Surgery leaves incisional pain as well as internal hurts from the removal of a gallbladder or tumor, or the repair of a broken bone. Chemotherapy weakens the immune system. We might like to say it shouldn’t be, but physical pain, and its concomitant emotional suffering, tend to be inseparable from standard care.

Certainly, good communication is essential to patient care, and it’s not unreasonable to think that good communication will tend to lead to more satisfied patients. Brown points out that it “ain’t necessarily so” that these sorts of metrics correlate with outcomes, and I’ll show you a study that demonstrates why it ain’t necessarily so. I seriously wonder whether part of what’s fueling the rise of “integrative medicine,” formerly known as “complementary and alternative medicine” (CAM), formerly known as “alternative medicine,” is more than just credulity on the part of physicians but also the result of a successful effort on the part of CAM practitioners and believers to portray CAM as being somehow more responsive to patients than “conventional” medicine. I discussed this in a different context when observing how CAM has tried to co-opt the brand of “patient-centered” care, corrupting it to mean giving the patient whatever he wants, even if what the patient wants isn’t necessarily in his best interest. Clearly, at the very least there is synergy between the rise of the cult of patient satisfaction as the be-all and end-all of quality metrics and the rise of CAM as a means of increasing patient satisfaction without evidence of improving outcomes.

In any case, if there’s one thing that practitioners and promoters of integrative medicine and hospital administrators seem to agree on, it’s that “patient satisfaction” is very, very important. In the conventional medical world, it’s become so important that hospital administrators live and die by patient satisfaction surveys. In particular, they live and die by a survey that makes the HCAHPS seem fairly reasonable (it does, after all, pay particular attention to communication and the clarity of discharge instructions), namely an instrument known as the Press-Ganey survey. In fact, Press-Ganey itself sells its services as “driving performance excellence” in health care. The inherent assumption is that if patients are satisfied then the hospital and health care providers must be doing a good job. But it’s subtler than that. The underlying assumption is actually that patient satisfaction equals quality, and the further assumption is that Press-Ganey scores reflect patient satisfaction. Never mind that Press-Ganey scores include questions about a whole host of things that have nothing to do with the quality of care. For example, parking was a huge problem at the cancer center where I worked at my previous job, and Press-Ganey scores have always taken a hit because parking is a big issue in the surveys. Does that mean we delivered inferior care there? I don’t think so. Both cancer centers I’ve worked for since finishing my fellowship are among the elite 41 cancer centers designated as Comprehensive Cancer Centers by the National Cancer Institute. As far as cancer goes, these are the best of the best.

Similarly, promoters of CAM/IM seem to believe in patient satisfaction über alles. In fact, two large surveys of the state of “integrative medicine” in the U.S. have been published in the last six months or so, one by the Samueli Institute and one by the sugar daddy of quackademic medicine, the Bravewell Collaborative. Did either of them look at actual hard patient outcomes? Well, not exactly. If you’re talking about actual medical outcomes, as in outcomes research, the answer is a resounding no, although incongruously the Samueli survey noted that 71% of hospitals that started integrative medicine programs did so for reasons of “clinical effectiveness.” At least, that’s what they said. How they would know, I have no idea, given the lack of outcomes research. Neither survey even bothered with outcomes, other than the risible question in the Bravewell survey about the areas in which each integrative medicine center director thought integrative medicine had the best “success.” As I pointed out at the time, this question was so incredibly vague that, unless the patient dropped dead immediately upon contact with the therapy recommended, centers could claim some level of “success” with almost anything, and they did. No metrics for what constitutes “success” were described. Instead, the survey just took the word of integrative medicine center directors about which conditions were “among their top five most successfully treated conditions.” Not surprisingly, Samueli noted that, while 85% of the integrative medicine centers it surveyed will use patient satisfaction as a metric to evaluate their CAM programs, only 42% plan on evaluating health outcomes and only 31% evaluate quality.

That’s why, if you’re talking about “outcomes” as in patient satisfaction outcomes, the answer is yes. Both “integrative medicine center” surveys focused like a laser beam on patient satisfaction. Lacking any concrete measures of the quality of care they provide, CAM/IM promoters apparently bragged about how happy their patients are with their services. Of course, this makes perfect sense, given that both surveys were nothing more than one huge exercise in argumentum ad populum. Trying to argue that people are very happy with your service is part and parcel of that. Certainly the CAMsters aren’t trying to argue for the superiority of their woo based on science. Think of it this way: lots of people are “very satisfied” with the astrologers, psychic mediums, and faith healers that they use. That doesn’t mean that astrology is a science, that John Edward can talk to the dead, or that John of God can use the healing power of Jesus to cure what ails you.

So both promoters of “integrative” medicine and a large segment of conventional medicine view patient satisfaction as a major indicator (but, in all fairness, not the only indicator) of quality care. But, again, is this assumption valid? Does patient satisfaction correlate with high-quality care? You might be surprised at the answer suggested by a study published this February in the Archives of Internal Medicine by a group out of UC-Davis, entitled The Cost of Satisfaction: A National Study of Patient Satisfaction, Health Care Utilization, Expenditures, and Mortality.

This study was designed to look for correlations between patient satisfaction and outcomes, asking the question: is there a correlation between health care outcomes and patient satisfaction? The authors conclude that the answer is yes, but not in the way that you’d think. In fact, if this study is to be believed, increased patient satisfaction is, if anything, correlated with worse outcomes on at least some measures. Let’s go to the tape (or the peer-reviewed study).

The authors frame the question thusly:

Satisfied patients are more adherent to physician recommendations and more loyal to physicians, but research suggests a tenuous link between patient satisfaction and health care quality and outcomes. Among a vulnerable older population, patient satisfaction had no association with the technical quality of geriatric care, and evidence suggests that satisfaction has little or no correlation with Health Plan Employer Data and Information Set quality metrics.

In addition, patients often request discretionary services that are of little or no medical benefit, and physicians frequently accede to these requests, which is associated with higher patient satisfaction. Physicians whose compensation is more strongly linked with patient satisfaction are more likely to deliver discretionary services, such as advanced imaging for acute low back pain.

In order to investigate the relationship between patient satisfaction and outcomes, the investigators undertook a prospective cohort study. Basically, they looked at respondents to the Medical Expenditure Panel Survey (MEPS) from 2000 to 2007. The MEPS is described thusly:

The MEPS is an annual nationally representative survey of the US civilian noninstitutionalized population assessing access to, use of, and costs associated with medical services. The MEPS household component uses an overlapping panel design in which individuals are interviewed successively during 2 years. During each year, respondents complete self-administered questionnaires about health status and their experiences with health care. The MEPS sampling frame is drawn from respondents to the National Health Interview Survey, an annual in-person household survey conducted by the National Center for Health Statistics. The National Health Interview Survey data are linked with death certificate data from the National Death Index, enabling mortality ascertainment among MEPS participants.

In brief, the investigators followed over 50,000 adults and linked their questionnaire responses to their mortality outcomes. The results were both good and bad, although, quite honestly, from my perspective they were mostly bad. Let’s start with the good. One correlation that was noted was that patients with higher levels of satisfaction with their care tended to use the emergency room less. It wasn’t a huge amount less, though. In fact, the adjusted odds ratio found by the authors was only 0.92, which means that the patients who had the highest level of satisfaction (the highest quartile on the survey) were not that much less likely to use the emergency room during the study period than those with the lowest level of satisfaction (the lowest quartile). All in all, not that impressively different, but it definitely has to be acknowledged as a positive.

Now let’s look at the negatives, which to my mind are much more negative.

Patients in the study who demonstrated the highest level of satisfaction were more likely to have an inpatient admission (adjusted odds ratio 1.12) than those with the lowest levels of satisfaction, again not that huge a difference. They did, however, account for 8.8% more health care expenditures, including greater prescription drug expenditures. Worst of all, they demonstrated a higher mortality, with an odds ratio of 1.26.
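It’s worth being careful with those numbers: an odds ratio is not the same thing as a percentage change in risk, and you can’t recover the actual probabilities from an odds ratio without knowing the baseline rate. A minimal sketch of the relationship, using a purely hypothetical baseline mortality rate (the study’s raw rates aren’t reproduced here):

```python
# Sketch: what probability an odds ratio implies, given an ASSUMED
# baseline rate. The 5% baseline mortality below is hypothetical;
# it is not taken from the study.

def implied_probability(odds_ratio, baseline_prob):
    """Probability in the comparison group implied by applying an
    odds ratio to a given baseline probability."""
    baseline_odds = baseline_prob / (1.0 - baseline_prob)
    comparison_odds = odds_ratio * baseline_odds
    return comparison_odds / (1.0 + comparison_odds)

baseline = 0.05                      # hypothetical baseline mortality
p_most_satisfied = implied_probability(1.26, baseline)
relative_risk = p_most_satisfied / baseline

# For a rare outcome the implied relative risk is close to (but below)
# the odds ratio; the gap widens as the baseline rate rises.
print(round(relative_risk, 3))
```

The point of the sketch is only that an odds ratio of 1.26 does not automatically mean “26% higher risk”; how much higher the risk actually is depends on the baseline rate.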

Data like these always have to make you wonder: is there a confounding variable that accounts for the negative correlation between patient satisfaction and outcomes? It’s certainly possible that there is. However, even if there is, at the very least this study is pretty strong evidence that there isn’t much, if any, correlation between patient satisfaction and the actual quality of care as measured by overall mortality. Why might this be? The authors provide some possible explanations in the discussion. One aspect of this relationship is that patient satisfaction does correlate with how well the physician fulfills the patient’s wishes and expectations:

Patients typically bring expectations to medical encounters, often making specific requests of physicians, and satisfaction correlates with the extent to which physicians fulfill patient expectations. Patient requests have also been shown to have a powerful influence on physician prescribing behavior, and our findings suggest that patient satisfaction may be particularly strongly linked with prescription drug expenditures.

In other words, giving the patient what he or she wants isn’t always what’s best for the patient. As “paternalistic” as this might sound, it is not a new observation. Physicians have known this for a very long time. Perhaps the most striking example of this phenomenon is antivaccine parents. Such parents don’t want their children to be vaccinated, but not vaccinating is rarely in the best interests of the child. Physicians who just go along with such parents, such as antivaccine apologist Dr. Jay Gordon, are very popular and likely generate high Press-Ganey scores because they basically give the people what they want. In contrast, pediatricians who try to do the right thing and persuade such parents to vaccinate their children (or who even fire patients whose parents won’t vaccinate) don’t, and as a result they generate a lot less patient satisfaction. This is an intentionally chosen extreme example, but the same sort of dynamic occurs in more subtle ways in every patient encounter. A less extreme example is very common in primary care: the patient who demands antibiotics for a viral infection. The doctor who acquiesces will have a more satisfied patient than the doctor who does not. But who provided the better care? Not the doctor who gave unnecessary antibiotics, which can select for resistant organisms and cause complications for no benefit.

In an accompanying editorial, Dr. Brenda Sirovich notes that patients very much like early detection and aggressive intervention, even when that intervention might not necessarily be helping them. She tells the story of “A Healthy Man’s Nightmare,” an article published in the New Yorker recounting how literature professor Joseph Epstein went from thinking himself healthy at age 60 to surviving coronary artery bypass surgery. It all started with a “routine” physical that revealed a low high-density lipoprotein cholesterol level, which in turn led directly to a stress test, which in turn led to…well, you get the idea. The result was that Epstein, at 62, considers himself “weakened, with a lasting sense of vulnerability that he eloquently labels ‘heart-consciousness.’” Did aggressive screening help or harm Epstein? It’s not clear. However, Epstein considers himself “lucky” and attributes his good fortune to his physicians, whom he describes as “paragons of excellence.” This anecdote leads Dr. Sirovich to speculate:

Regardless of whether one believes Mr Epstein to have been ultimately helped or harmed by his screening stress test, his satisfaction with the experience should perhaps not be as surprising as I initially found it. Satisfaction with seemingly adverse outcomes of potentially excessive medical care appears to be the norm. Numerous studies have found that patients are consistently highly satisfied with one of the most common downsides of medical care—false-positive test results and the downstream events that follow. Moreover, such patients are more likely to undergo the same (and likely other) testing in the future, dismissing their anxiety and other adverse effects as a negligible price for a good outcome.

In other words, many, if not most, patients tend to like aggressive intervention, and they tend not to like “watchful waiting.” They want to do something. That’s part of what’s driving the whole controversy over screening, be it screening for cancer, heart disease, or any of the other conditions we routinely screen for. Screening is a complex equation in which balancing risks and benefits is anything but simple. Overaggressive screening can lead to overdiagnosis and overtreatment whose harm outweighs the benefit in terms of lives saved by early detection and intervention. Moreover, the pressure isn’t just from the patient or on the patient. There are what Sirovich refers to as “positive feedback loops” that pressure doctors, too:

The same heuristic operates on the physician. Ransohoff et al proposed, a decade ago, that prostate-specific antigen (PSA) screening for prostate cancer exemplifies a system without negative feedback. Regardless of the true net effect (beneficial or harmful) of screening, a physician ordering a screening PSA receives a favorable result: he can reassure the patient with a normal PSA result; celebrate with the patient who has overcome a false positive; or (most compelling for the physician) offer potentially life-saving treatment to the patient whose prostate cancer was “caught early”—notwithstanding the likelihood that the patient’s outcome may be worse because of early detection. Regardless, the physician can feel satisfied, and more certain that ordering the next screening PSA will be the right decision, which will then appear to be the case, and so on.

Positive feedback systems abound in health care, for both physicians and patients. Diagnostically, almost any unnecessary, or discretionary, test (particularly imaging) has a good chance of detecting an abnormality. Acting on that abnormality has an excellent chance of producing a favorable outcome (because a good outcome was already highly likely). Having obtained an excellent outcome, ostensibly owing to a test that was seemingly unnecessary, a natural reaction would be thereafter to perform (or, for patients, undergo) even more discretionary testing in patients with an increasingly negligible likelihood of benefit—and greater risk of net harm.

So, on both sides of the equation, the patient’s and the physician’s, there are many apparent rewards for delivering more care and few disincentives for doing so, at least on the level of the individual patient. It takes outcomes research and randomized studies to determine whether providing “more” care actually does what it is intended to do, how likely it is to benefit each patient, and how likely it is to harm each patient. Sirovich notes that she still suspects there’s an unidentified confounder in this study, given that the excess mortality far exceeded the excess rate of emergency room utilization in the most “satisfied” quartile, but she also points out that this result is plausible based on what we know already. I agree, which leads me to a bit of a stray thought that I’d like to see discussed in the comments.

That stray thought is that maybe the popularity of CAM arises from this same impulse, on the part of both physicians and patients. Patients, faced with conditions for which standard science-based medicine has little to offer—or for which what SBM offers is too unpleasant and brutal—still want to do something. So they seek out remedies and treatment modalities that promise to do something for them with much less invasiveness, fewer “impersonal” dealings with the health care system, and less pain. Physicians, on the other hand, faced with patients for whom what SBM has to offer seems unsatisfactory, still want to do something. Well, CAM is something, and, for those doctors who are not as scientifically inclined as we are and who also aren’t as aware of the cognitive tricks that lead us to incorrectly infer causation from placebo effects, observer bias, confirmation bias, and correlation, dabbling in CAM will rapidly lead to apparently “positive” results, much as doing “unnecessary tests” does. Once that happens, the tendency is to recommend even more CAM to patients. Before too long, the more credulous can turn into Andrew Weil or Mark Hyman. The more average just look the other way and sometimes refer patients with recalcitrant symptoms to acupuncturists or chiropractors. Uncommon is the doctor who avoids becoming at least a shruggie.

Whether or not my speculation has any validity and this new obsession with “patient satisfaction” really is a major contributor to the rise of CAM and “integrative medicine,” remember this: the next time you see a hospital brag about its Press-Ganey scores, keep in mind that at the very minimum those scores are meaningless in terms of whether that hospital actually delivers quality care, and at worst Press-Ganey scores correlate negatively with some outcomes. Although we don’t know for sure yet whether patient satisfaction correlates with outcomes in CAM/IM, the medical literature suggests that it very likely will not. CAM/IM, of course, is nothing if not the philosophy of “keeping the customer satisfied” taken to a whole new level in medicine. Indeed, that is its only purpose. CAM proponents think this is a good thing. However, evidence from conventional, science-based medicine suggests that it very well may not be. None of this is to say that we should revert to a paternalistic, “doctor knows best” approach. It is, however, an indication that it is not the job of doctors simply to “keep the customer satisfied.” Ideally, we should care for patients in partnership with them, not dictating to them what they need the way we did in the old days. However, there are dangers in going too far in the other direction. Patients need a doctor, not someone whose primary consideration is to satisfy them.

It is in general (mostly) a good thing that we are getting away from the paternalism and “doctor knows best” attitude that predominated even as recently as when I was in medical school and are moving towards a much more collaborative model of the doctor-patient relationship. There are, however, risks and a price. The potential price is that “giving the people what they want” is not the same thing as giving patients what they need. I think the Rolling Stones had a very good line to describe the essence of the diverging goals of patient satisfaction and patient care.

Posted in: Clinical Trials, Diagnostic tests & procedures, Epidemiology


25 thoughts on “Keeping the customer satisfied”

  1. pmoran says:

    Interesting stuff, but frustrating trying to interpret it. It cries out for some delving into the causes of death of the different satisfaction subgroups. That might tell us something important, since presumably most of them were involved with mainstream care.

    Patient satisfaction IS disconnected from outcomes. Cancer quacks are adored even when the patient died; “he was kind, and tried so hard”.

    One of the earliest studies I encountered when getting involved in healthfraud activities involved chiropractic manipulation for back pain. The only significant finding was that patients expressed something like 30% more satisfaction with chiropractic manipulation than with conventional care, yet patients did not get back to work more quickly.

  2. weing says:

    Was Michael Jackson satisfied?

  3. marcus welby says:

    An enormously nuanced post which captures the essential “raison d’etre” of CAM. Patients want the healer to “do something” and they like “to see you sweat”.

  4. BillyJoe says:

    pmoran: “The only significant finding was that patients expressed something like 30% more satisfaction with chiropractic manipulation than with conventional care, yet patient did not get back to work quicker.”

    Presumably patient satisfaction extended to supplying the desired work exemption certificates.

  5. kathy says:

    “Ideally, we should do that in partnership with patients, not dictating to them what they need the way we did in the old days. However, there are dangers in going too far in the other direction. Patients need a doctor, not someone whose primary consideration is to satisfy them.”

    In trying so hard to get away from paternalism, maybe doctors are swinging to the other extreme. Maybe they and their patients are expecting doctors to exercise maternalism, to become their idealised (not realistic) Mother, who never says, “No!” … who feeds on demand, lets you put your feet up on the couch, kisses your cut knee better, and has a solution to every problem.

  6. cervantes says:

    Dr. G, I’m sorry to have to tell you that you have misinterpreted the concept of the odds ratio. An odds ratio of 1.26 does not mean that someone has a 26% higher probability of something occurring; that would be the relative risk. You can’t actually compute the risk from the odds ratio without knowing the baseline rate. This article explains it. Briefly:

    “It’s easy to mistake the odds ratio with relative risk; check out our piece on how the media misquoted odds ratios with regard to IVF treatment and acupuncture. Both the odds ratio and the relative risk have the benefit that they compare two groups and tell you something about the likelihood of something happening to one group compared to another. Both of them also have the property that if the answer is “1” then the event is equally likely for both groups, if the ratio is higher than 1 then the event with probability p is more likely to occur, and if the ratio is lower than 1 then the event with probability q is more likely to happen.

    But the odds ratio and the relative risk can have very different numbers in certain circumstances. If an event is highly likely to happen, or the initial risk of something is high, the odds ratio can still be large while the relative risk is not very high. For example, suppose women in a biology class get an A or B about 80 percent of the time (with odds 80/20 = 4) and men get an A or B about 70 percent of the time (with odds 70/30 = 2.3), then the odds ratio of women to men in the course is 4/2.3 = 1.7. But this does not mean that women get As and Bs 70% more frequently than men! Indeed, women are getting As and Bs approximately 14 percent more frequently than the men (80/70=1.14).

    But because the chances of both groups getting good grades to begin with are high, the relative risk and odds ratio diverge significantly. The problem is that when this happens, say in medical research, journalists dramatically over-estimate the risk of something happening by turning the odds ratio into a percentage change in risk. “
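    The grade example in the quoted article can be checked with a few lines of Python. This is just a sketch of the arithmetic; the 80%/70% figures come from the quote above, not from the satisfaction study:

```python
# Odds ratio vs. relative risk, using the biology-class numbers from the quote.

def odds(p: float) -> float:
    """Convert a probability into odds."""
    return p / (1 - p)

p_women, p_men = 0.80, 0.70  # share of each group getting an A or B

odds_ratio = odds(p_women) / odds(p_men)  # 4.0 / 2.33... ~ 1.71
relative_risk = p_women / p_men           # 0.80 / 0.70 ~ 1.14

print(f"odds ratio:    {odds_ratio:.2f}")
print(f"relative risk: {relative_risk:.2f}")
```

    Because good grades are common in both groups, the two measures diverge: the odds ratio (1.71) is much larger than the relative risk (1.14), exactly the mistake the article warns about.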

  7. David Gorski says:

    While it is always useful to learn from one’s mistakes, in my defense, let me just point out that Dr. Sirovich seems to be using the same interpretation. For example:

    There is, however, reason to question the validity of the inference. One of the primary findings itself raises concern—a 26% mortality excess among the most satisfied patients, an effect size that far exceeds that for all other, more immediate, study outcomes (eg, a 12% excess in hospitalizations).

    More importantly, I could go back and change a couple of sentences to be more precise about odds ratios (in fact, I did, deleting two phrases and slightly changing a sentence ;-) ), and it wouldn’t materially alter the overall point of my post or weaken the rationale and data underlying its conclusion, which is that in this study patient satisfaction did not correlate well with outcomes and that, if anything, this study suggested that patient satisfaction correlated with negative outcomes. That is the message of the post, and the issue of odds ratios doesn’t change it.

    If you wish, later I could add raw numbers to the post, which are discussed in the papers. Or perhaps I’ll add one of the tables from the study. Or not. Doing so might risk getting lost in details that most people don’t care about, something I tend to have a problem with to begin with.

  8. cervantes says:

    Well, Dr. Sirovich is just flat wrong: the odds ratio does not imply a 26% excess mortality.

    I agree, this doesn’t invalidate your overall point but I thought you were here to argue on behalf of science, and that to me means you need to be accurate and apply mathematics correctly. It’s not something to brush off, just because you think most people don’t care about it.
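    To make the point concrete: converting an odds ratio into a relative risk requires the baseline rate, and the same odds ratio of 1.26 implies different relative risks at different baselines. A small Python sketch; the baseline rates below are hypothetical illustrations, not figures from the study:

```python
def or_to_rr(odds_ratio: float, baseline: float) -> float:
    """Relative risk implied by an odds ratio at a given baseline probability."""
    baseline_odds = baseline / (1 - baseline)
    exposed_odds = odds_ratio * baseline_odds
    exposed_prob = exposed_odds / (1 + exposed_odds)
    return exposed_prob / baseline

# The same odds ratio of 1.26 at two hypothetical baseline rates:
print(or_to_rr(1.26, 0.01))  # rare outcome: RR ~ 1.26, close to the OR
print(or_to_rr(1.26, 0.30))  # common outcome: RR ~ 1.17, noticeably lower
```

    For rare outcomes the two measures nearly coincide, which is why the odds ratio is often treated as an approximation to the relative risk; at higher baseline rates they diverge, and quoting the odds ratio as a “26% excess” overstates the risk.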

  9. nybgrus says:

    Excellent point cervantes, and very important indeed. In reading your first comment I thought, “Wait a second, Dr. Gorski didn’t really say that!” And then realized that he had already apparently changed the wording.

    To me this demonstrates one of the fundamental issues with scientific discourse these days – one that (IIRC) Prometheus raised many months ago (as well as some other commenter… Angora Rabbit, perhaps?). Namely, that even at higher educational levels, statistics and math are poorly understood and applied. I make no claim to being an expert, but when it is important (i.e., when I am trying to publish a paper) I would make sure to become expert enough.

    It would seem that Dr. Sirovich was incorrect. That is, to me, simply unacceptable and should have been caught by peer review. Of course there are practicality issues with that, but I would argue most of them boil down to the frenzy of “publish or perish” – not only does that generate more papers to be reviewed, but the reviewers themselves have the same need and then (I speculate) are at least a little more likely to give a paper a bye for “minor” mistakes (and, as we have seen, even not-so-“minor” ones).

    However, Dr. Gorski’s error is less egregious. He is actually a busy clinician and researcher doing all this in his spare time. When the main point and argument he is making is exactly the same whether the error on Dr. Sirovich’s part was noticed or not, I can forgive passing on the error. It is something to be learned from however.

    The overall message of this post is clear and resonates well, IMO. In terms of CAM offering a “do something” approach that would increase patient satisfaction just like a useless pelvis X-ray, that fits precisely with what we have seen. In fact, it is for that very reason that I have been arguing with PMoran that the utility of placebo responses should be confined to “the mainstream.” Not because of some “turf war” that doesn’t exist, but because there is a fundamental difference in the way the validity of knowledge is determined in medicine and in CAM. One, while it fails many times, is rooted in a consistent and relentlessly forward-moving approach. The other, while it seems shiny and fun, is rooted in a consistent appeal to “what feels good,” which is increasingly being demonstrated to lack substance almost entirely. The small and often clinically irrelevant portion of it that does contain the substance is not justified by the remainder of it, nor by the flawed methodology of justification and implementation.

    So it is not that medicine seeks to co-opt CAM’s utility for purposes of turf, but to take the only tiny nugget of gold, which we already knew was there, and apply it more consistently ourselves. Just as we don’t need a naturopath to administer dietetic advice, we don’t need a homeopath to administer non-specific interactions and placebo responses.

    I absolutely see the reasons people seek CAM. I absolutely see the validity of placebo responses and non-specific interactions between patient and practitioner. I absolutely do not see those as validating the use of CAM.

  10. David Gorski says:

    I agree, this doesn’t invalidate your overall point but I thought you were here to argue on behalf of science, and that to me means you need to be accurate and apply mathematics correctly. It’s not something to brush off, just because you think most people don’t care about it.

    Which is most definitely what I did not do. If giving your criticism the brush-off were my intent, I would have probably ignored your comment, rather than fixing the post. Now that would have been a brush-off. Lighten up, man.

    In any case, I did feel a need to put your comment in proper context by pointing out that your criticism doesn’t in any way invalidate my arguments. Let’s just put it this way: I was every bit as concerned that someone who doesn’t understand the issue might have thought that your criticism somehow weakened the point of my post. In fact, I was just as concerned about that as you obviously were that people might not realize that I took Sirovich’s interpretation at face value when I should probably have been a little less cavalier. I do not consider mine to be an unreasonable concern or a “brushing off” of the point.

  11. David Gorski says:

    The small and often clinically irrelevant portion of it that does contain the substance is not justified by the remainder of it, nor by the flawed methodology of justification and implementation.

    I’m going to have to steal this sentence someday. :-)

  12. nybgrus says:

    I’m going to have to steal this sentence someday.

    I’m flattered. :-D

  13. Angora Rabbit says:

    Yes, that was me, Nybgrus. And I want to thank Cervantes for the link. I’ve just sent that web link to my Dietetics students who are preparing oral presentations on a peer-reviewed research paper; they will find it invaluable. Heck, I found it invaluable and freely admit that I hadn’t understood those stats well, a sad state of affairs given how often RR and Odds Ratios are used in the epi lit.

    It’s not surprising that the original authors may have erred as well. We are all overworked and dancing as fast as we can; I can’t imagine how Drs. Gorski, Novella et al. have time to write these. I barely have time to post the occasional comment!

    I can recall a publication two years ago where we (the authors) took issue with a reviewer on a stats analysis. Turns out he was right on one point, and we were right on the other. The good news is we had an editor who was willing to listen to both of us. Which reminds me of another shortcoming these days, but that’s a separate topic! But there’s a very good reason why many biomedical researchers put part of a statistician’s effort on our grants – it is a hugely complex subject. There’s an old joke that if you ask two statisticians a question, you will get three answers.

    And now I will come back to the original topic…


  15. Angora Rabbit says:

    …of customer satisfaction. There seems to be a parallel with teaching and the popularity scores from student evaluations. Students are happier with an “A,” but that doesn’t reflect learning, and it causes grade inflation by giving students what they “want.”

    I think one key to resolving this problem is Dr. Gorski’s key word “collaboration.” In my experience it can be helpful to create an environment of collaboration; how can we jointly work to create the desired outcome? For teaching, I create the collaboration by shifting the focus from the grade (the cure) to the practical, which in my world is “Why are you taking this class?” Of course it’s required, and we all know this. (Or: of course I’m sick, that’s why I’m here.) Instead, they write down what they want to learn from the class and how they expect to use the information. I do take time to review their answers and pitch the material to their interests, although the material itself doesn’t change. A clinical parallel might be: how can we use our treatment options to achieve the best outcome realistically possible?

    I ran into just this situation recently, when a friend with breast cancer was considering alt-med nonsense instead of her physician’s recommendation and asked for advice. We talked about what she wanted, what she feared, and I suggested she go back to her physician and share her concerns with him. The good news is that she did and learned that her preferred treatment option fit well with her desires – once she sat down and thought about what she really wanted from her treatment. This doesn’t work with everyone, but I have found it enormously useful in many aspects of life.


  16. ConspicuousCarl says:

    Odds Ratios…

    I think I understand how the Odds Ratio is calculated and what it is not. But what I don’t understand is, why would anyone want to know such a wacky number which is twice removed from reality like that? We aren’t supposed to see 1.26 and think “26% higher”, but what exactly are we supposed to think of it?

    Patient satisfaction…

    This is the kind of thing which worries me about exuberant “reform”. All of the legal challenges to recent legislation are about arguing over religious/moral objections or forcing people to buy something, which I don’t see as being functionally troublesome even if it is technically worth having the legal argument. It seems a lot more dangerous that blunt solutions might be applied to difficult and complicated problems. The fact is, the best solutions we have for a lot of things are not very pleasant, and yet we have the government demanding that hospitals somehow provide that which does not exist. There may be technical-sounding scores involved, but this is really just an attempt to legislate happiness.

    I am probably going to misquote Christopher Hitchens here (surely an expert on things which do not exist), but in one of his last public events he noted, after being asked something about perfection and completeness, that the mere struggle to improve knowledge just a little bit is enough to fill a lifetime so we shouldn’t grade ourselves poorly based on some impossible standard. We are probably several lifetimes away from the kind of results we would like to have in medicine, and whoever invented the rule in question does not get that.

  17. Sastra says:

    I’m not sure exactly how this works into the mix, but all the people I know personally who swear by alternative medicine emphasize how important its “spirituality” aspect is to them. Their naturopaths, homeopaths, and other alternative practitioners apparently talk quite a bit about religion — the fuzzy woo-consciousness kind, from what I can tell. I suspect this raises their overall satisfaction level considerably. They’re now judging whether the treatment has been effective by a pretty loose standard.

  18. DW says:

    “naturopaths, homeopaths, and other alternative practitioners apparently talk quite a bit about religion”

    Religion is frequently the elephant in the living room in these discussions, and in all the wrangling about how to educate the public about science. You’ll never get people to think straight about science and medicine when they are brought up to believe six impossible things before breakfast …

  19. Earthman says:

    “…not vaccinating is rarely in the best interests of the child.”

    In what rare circumstance is not vaccinating in the best interests of the child? Perhaps the boy in the bubble? I am not a medical man and this phrase just has me a little puzzled.

  20. Scott says:

    One example would be when the child is allergic to a vaccine ingredient. Egg allergy for the influenza vaccine, for instance, since those are grown in eggs. Or a sufficiently bad reaction to prior vaccinations. See

    http://www.cdc.gov/vaccines/recs/vac-admin/contraindications-vacc.htm

  21. Earthman says:

    As a professional person I sometimes have to tell a client something they do not want to hear. I did this earlier today when a client really wanted me to say that a piece of land was rubbish, but on my site visit I had found it to be good, and told them so. (It is important in planning law here that poor land is developed in preference to good agricultural land – and my client is a developer).

    Naturally my client was not ‘satisfied’, but I told them the truth, which was the reality of the situation. This is what a professional person has to do.

    To seek maximum client satisfaction would appear to me to be unprofessional, unethical and bad practice. The difference between a professional doctor and a SCAM merchant would therefore be that the MD is responsible to the point where sometimes it hurts, whereas the other has all the ethics of a used car salesman.

  22. Franky says:

    Interesting stuff, but frustrating trying to interpret it. It cries out for some delving into the causes of death of the different satisfaction subgroups. That might tell us something important, since presumably most of them were involved with mainstream care…..


Comments are closed.