Science-based medicine and improving patient safety and quality of care

The last couple of weeks, I feel as though I may have been slumming a bit. After all, comparatively speaking, it’s not that difficult to take on claims that homeopathy benefits fibromyalgia or Oprah Winfrey’s promotion of faith healing quackery. Don’t get me wrong. Taking on such topics is important (otherwise I wouldn’t do it). For one thing, some quackery is so harmful and egregiously anti-science that it needs to be discussed. For another, such topics serve as examples of how even the most obvious quackery can seem plausible. All it takes are the cognitive quirks to which all humans are prone, plus a bit of ignorance about what constitutes good scientific evidence for the efficacy of a given therapy for a given condition.

So let’s move on to something a little more challenging.

Of all the attacks on science-based medicine (SBM), one of its opponents’ favorites is the claim that SBM is dangerous, that it kills or harms far more people than it helps. An excellent example of this occurred when quackery promoter Joseph Mercola teamed up with fellow quackery promoter Gary Null to write a widely cited article entitled Death by Medicine. Using the famous Institute of Medicine (IOM) report that estimated deaths from medical errors to be on the order of 44,000 to 98,000 per year, Mercola and Null wove a scary story meant to imply that conventional medicine does far more harm than good. Of course, as our very own Harriet Hall pointed out, they concentrated solely on the harm, which makes it difficult to determine whether the harms truly outweigh the benefits. As Peter Lipson puts it, such arguments are intentionally designed to take our fears and exaggerate them out of all perspective. The idea behind the fallacious arguments used by the likes of Mercola and Null is that, because “conventional” medicine has problems and needs to improve its safety record, the quackery they promote must be a viable alternative to SBM. Yes, that is basically what their arguments boil down to.

The fallacious manner in which advocates for quackery such as Joe Mercola, Mike Adams, and Gary Null use and abuse any shortcoming of SBM that they can find (and, when they can’t find any, make some up) notwithstanding, there is a problem in SBM. Over the last 10 years or so since the IOM report, reducing the toll due to medical errors has — finally — become an incredibly important issue in medicine. Indeed, I myself have become involved in a state-wide quality improvement initiative in breast cancer as our site’s project director. As a result, I’m being forced to learn more about the nitty-gritty of quality improvement than I ever thought I would. Combine this with a study published just before the Thanksgiving holiday in the New England Journal of Medicine, and I’m learning that improving care is incredibly difficult. The issues involved are many and tend to involve systems rather than individuals, which is why the solutions often bump up against the individualistic culture to which physicians belong. Moreover, such efforts, like comparative effectiveness research (CER), tend to earn less prestige than scientific research because, like CER, quality improvement initiatives do not in general look for new information and scientific understanding but rather examine how we apply what we already know.

Quality versus safety

Another thing that needs to be remembered is that there is a great deal of controversy over which outcomes should be measured as indicators of quality care, particularly now that third party payors are supporting “quality improvement” initiatives. However, if you look at what is being measured, more often than not these are measures of safety (or, more properly, lack of safety) rather than measures of quality. In other words, what is being measured with a view towards minimization are error rates: medication errors; wrong site surgery rates; failure to adhere to well-established science-based guidelines, such as perioperative antibiotics, adequate sterile technique, and the use of deep venous thrombosis prophylaxis in appropriate patients; and, in general, anything that puts patient safety at risk based on what we already know. We already know that these practices decrease complication rates. Administering appropriate perioperative antibiotics and not allowing the patient’s body temperature to fall too far during surgery, for example, both decrease postoperative infection rates, as has been documented extensively in numerous studies. Leaving urinary catheters in longer than absolutely necessary increases rates of urinary tract infections. Failing to get patients out of bed as soon as possible after routine abdominal surgery increases the rates of postoperative pneumonia and deep venous thrombosis. And so it goes. Some of these result in outcomes that, if proper systemic measures are in place to prevent them, should be very, very close to zero:

The concept of never events sounds appealing. Certain things should never happen. There are some never events. The surgeon should never remove the wrong leg or take out the wrong kidney. They should never leave an instrument in your abdomen after surgery. Patients should never receive another patient’s medications.

Other outcomes are coming to be defined as “never events” as well, such as catheter-associated infections or retained surgical instruments and sponges. Indeed, Medicare is no longer paying for care associated with several of these so-called “never events.”

Quality, on the other hand, is a more difficult concept to define, as Dr. Robert Centor describes. I wouldn’t go so far as to call the definition of quality “nebulous,” as he does. To me, that characterization goes too far. Rather, I would simply describe quality as a concept over which there is much more disagreement about what, exactly, constitutes “quality” care. True, safe care is definitely part of quality. After all, how could anyone define unsafe care as “high quality”? In other words, safety is a necessary but not sufficient requirement for medical care to be considered high quality. For example, Dr. Centor further elaborates:

I am happy to see safety rates reported, e.g. central line infection rates, PICC line clot rates and or infection rates. These differ greatly from performance measures like % of MI patients given beta blockers. True safety parameters are not influenced by patient factors. We should minimize line infections through system work. We should decrease urinary tract infection rates. My distinction is that we can impact the untoward consequences of the things we do TO patients. We can agree on these, and fixing these problems is JOB #1.

After that, defining quality gets more difficult. Does it mean following more global science-based guidelines, such as the National Comprehensive Cancer Network’s (NCCN) or the American Cancer Society’s guidelines for the care of various cancers? These differ from simple safety measures in that they represent treatment algorithms designed to optimize the overall care of cancer rather than relatively simple practices designed to protect patient safety. For example, one of the NCCN’s treatment guidelines for breast cancer requires that radiation therapy be offered within a certain time frame to women who have undergone breast-conserving therapy and that women with certain types and sizes of tumor be offered adjuvant chemotherapy. For purposes of this post, I will be sticking with safety measures for the most part, because such measures are much more concrete and less contentious.

Checklists: Simple but overlooked

One of the most important points about quality improvement initiatives is that in general they emphasize doing what SBM already tells us are the best practices. Examples include practices as obvious as giving perioperative antibiotics at the right time and for the right operations. (For example, “clean” operations generally don’t need perioperative antibiotics.) One way to look at this is to give the right drug for the right type of operation, starting at the right time before the operation and continuing until the right time after it. For most operations requiring perioperative antibiotics, the correct time to give the dose is long enough before the incision is made to achieve therapeutic blood levels. If you administer the antibiotics after the incision has been made, you might as well not give them at all; at that point they don’t work to decrease the rate of infection.
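As a toy illustration, this kind of timing rule can be expressed as a simple check. The 60-minute window below is an assumption for the sake of the example (the appropriate interval depends on the drug), and this is a sketch, not clinical software:

```python
from datetime import datetime, timedelta

# Toy sketch of a perioperative-antibiotic timing check. The 60-minute
# window is an assumption for illustration; the right interval depends
# on the drug's pharmacokinetics. Not clinical software.

def dose_timing_ok(dose_time: datetime,
                   incision_time: datetime,
                   window_minutes: int = 60) -> bool:
    """True if the dose preceded the incision by no more than the window."""
    lead_time = incision_time - dose_time
    # A negative lead time means the dose came after the incision, which,
    # as noted above, is essentially the same as not giving it at all.
    return timedelta(0) <= lead_time <= timedelta(minutes=window_minutes)
```

For example, a dose given 30 minutes before an 8:00 incision passes the check, while a dose given after the incision, or hours too early, fails it.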

One area that has received a fair amount of attention lately is the issue of infections and sepsis due to indwelling central venous catheters. Central venous catheters are catheters inserted directly into one of the large veins that drain to the heart, such as the internal jugular or subclavian vein. There, they frequently remain for days, sometimes for weeks. Because they are foreign objects that penetrate the skin and sit in large veins for (sometimes) long periods of time, central venous catheters can be the nidus for infection, and catheter-associated sepsis has traditionally been a difficult problem. Over the years, various practices have been advocated for reducing the rate of line-associated sepsis, including routinely changing the line every three days. As Atul Gawande pointed out in a famous New York Times editorial three years ago, however, it turns out that the most effective method for reducing catheter-associated sepsis is remarkably simple:

A year ago, researchers at Johns Hopkins University published the results of a program that instituted in nearly every intensive care unit in Michigan a simple five-step checklist designed to prevent certain hospital infections. It reminds doctors to make sure, for example, that before putting large intravenous lines into patients, they actually wash their hands and don a sterile gown and gloves.

The results were stunning. Within three months, the rate of bloodstream infections from these I.V. lines fell by two-thirds. The average I.C.U. cut its infection rate from 4 percent to zero. Over 18 months, the program saved more than 1,500 lives and nearly $200 million.

Amazing. Just making sure to do an operating room-style scrub and then donning sterile gown and gloves has a dramatic effect on infection rates. Who knew? Actually, we all did, or at least we should have. The problem was that surgeons inserting these catheters weren’t doing what they should have been doing. The reasons for this were probably multiple: lack of time, the “it’s the way I was trained” excuse, and an inflated sense of how well each doctor was performing sterile technique, for example. As I put it at the time when I read Dr. Gawande’s article and the associated study, this is remedial Infection Control 101 for dummies. (Actually, I used a different word than “dummies.”)

This is the sort of thing that is meant by “safety,” namely doing very simple things that SBM tells us that we should be doing anyway. (The original study can be found here, for those interested.)

Increasingly, medicine is taking a page from the airline industry and instituting these checklists. Another study in which a checklist appears to have had salutary effects on patient safety was published by Dr. Gawande last year in the NEJM. The checklist tested involved some fairly basic things and was divided into three sections. The “sign-in” section had typical elements, such as verifying patient identity; making sure that the surgical site is properly marked with a marking pen; making everyone aware of any patient allergies; checking the pulse oximeter; and determining if there is a risk of aspiration or significant blood loss. The preop “time out” involved such things as once again verifying the patient identity and surgical site; making sure that any preop antibiotics have been given less than 60 minutes prior to skin incision; confirming that all appropriate imaging results were available, correct, and show what they are reported to show; and reviewing anticipated critical events. Finally, the “sign out” phase involved the typical surgical necessity of making sure that the sponge and instrument counts were correct before waking the patient; checking any specimens to make sure they were labeled correctly; and reviewing aloud concerns about any issues that might interfere with the patient’s recovery.
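To make the three-phase structure concrete, here is a minimal sketch of such a checklist as a data structure with a completeness check. The item wording is paraphrased from the description above, not taken from the actual study instrument:

```python
# Illustrative sketch of a three-phase surgical safety checklist.
# Phase and item names are paraphrased from the description above;
# this is not the actual instrument used in the study.

CHECKLIST = {
    "sign-in": [
        "patient identity verified",
        "surgical site marked",
        "allergies reviewed",
        "pulse oximeter checked",
        "aspiration and blood-loss risk assessed",
    ],
    "time-out": [
        "identity and site re-verified",
        "antibiotics given within 60 minutes of incision",
        "imaging available and reviewed",
        "critical events discussed",
    ],
    "sign-out": [
        "sponge and instrument counts correct",
        "specimens labeled correctly",
        "recovery concerns reviewed aloud",
    ],
}

def incomplete_items(completed):
    """Return the checklist items not yet confirmed by the team.

    `completed` is a set of item strings the team has checked off.
    """
    return [item
            for phase_items in CHECKLIST.values()
            for item in phase_items
            if item not in completed]
```

A team that has confirmed every item gets back an empty list; anything remaining flags what still blocks proceeding to the next phase.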

These measures were implemented at eight hospitals scattered throughout the world, both in developed countries and Third World countries. Sites included London, Seattle, Toronto, Tanzania, India, Jordan, the Philippines, and New Zealand. These hospitals had varying safety measures in place before the study, and none of them were as extensive as the checklist introduced as part of the study. The results are summarized in the following table:

As you can see, the results were quite striking. But was it the introduction of checklists that resulted in this improvement? That’s difficult to say definitively from just this study. Certainly it is possible, but exactly how is harder to ascertain. One possibility is that the introduction of such checklists leads to a systemic improvement in how care is delivered driven by adherence to the principles embodied in the checklist. As the authors speculate in the discussion:

Whereas the evidence of improvement in surgical outcomes is substantial and robust, the exact mechanism of improvement is less clear and most likely multifactorial. Use of the checklist involved both changes in systems and changes in the behavior of individual surgical teams. To implement the checklist, all sites had to introduce a formal pause in care during surgery for preoperative team introductions and briefings and postoperative debriefings, team practices that have previously been shown to be associated with improved safety processes and attitudes and with a rate of complications and death reduced by as much as 80%. The philosophy of ensuring the correct identity of the patient and site through preoperative site marking, oral confirmation in the operating room, and other measures proved to be new to most of the study hospitals.

One interesting aspect of this initiative (interesting to me, at least) is the level of hostility to the results of the catheter study and this study that I encountered after they first came out. Doctors tend to resist checklists. I don’t know if this is unique to physicians, but given that airline pilots have used preflight checklists for years, I’ve always had a hard time understanding the visceral reaction that such lists seem to provoke in physicians. It’s as though too many of us consider ourselves so superior to our fellow human beings that such lists do not apply to us, that we never forget to do anything, and that we never need reminding to do what we should be doing in the first place. To some extent, I can understand this reaction, in that until about three years ago I used to disparage as a “mindless ritual” my having to mark which breast I was going to operate on and to do a “time out” to verify it with the entire surgical staff in my operating room. I even used to say that once something is made into a rigid policy or checklist, it becomes an excuse to stop thinking and mindlessly go through the motions. To some extent, I still think it’s important to guard against that tendency, but I’m far more supportive of checklists than I used to be.

In any case, the resistance of physicians to these sorts of checklists and measurements, which I mentioned at the beginning of this post, appears to be due at least partially to the individualistic culture of physicians, but malpractice concerns appear to play a role as well. Indeed, sometimes this focus on the skills of the individual practitioner leads groups of physicians to resist even evidence-based medicine. In order to figure out how to prevent events that threaten patient safety, it is necessary to have a free and frank discussion of how such events occur, just as airlines do after crashes and near-misses.

But are checklists and safety measures having an effect?

I’m sure most, if not all, of us can agree that this relatively newfound emphasis on improving patient safety and quality of care is long overdue: guidelines, checklists, and other methods designed to ensure that, at the very least, the simple things we know to be effective at improving patient outcomes are in fact being done. Still, we have to evaluate whether or not these measures actually work. In other words, are the rates of medical errors decreasing? Over the last couple of months, two studies have appeared that are depressing in that they imply the answer to that question might well be, “No.”

First, let’s take a look at one specific “never event,” namely wrong-site or wrong-patient surgeries. A recent study from the University of Colorado published in October in the Archives of Surgery, for example, found that wrong-site surgeries, although rare, are more common than previously estimated, accounting for approximately 0.5% of all medical mistakes noted in the study. In a database of 27,370 physician self-reported adverse occurrences spanning January 1, 2002, to June 1, 2008, investigators identified 25 wrong-patient and 107 wrong-site surgeries. Significant harm was reportedly inflicted in 5 wrong-patient procedures (20.0%) and 38 wrong-site procedures (35.5%), and one patient died secondary to a wrong-site procedure (0.9%). Root cause analysis showed that most wrong-patient procedures involved errors in diagnosis (56.0%) and errors in communication (100%), whereas wrong-site occurrences were related to errors in judgment (85.0%) and failure to perform a “time-out” (72.0%). It should be pointed out that nonsurgical specialties were involved in the cause of wrong-patient procedures and contributed equally with surgical disciplines to adverse outcomes related to wrong-site occurrences.
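The study’s percentages follow directly from the raw counts quoted above, which is easy to verify with a few lines of arithmetic:

```python
# Quick arithmetic check of the percentages quoted from the Archives
# of Surgery study, using only the counts given in the text above.

wrong_patient = 25
wrong_site = 107
total_occurrences = 27_370

combined_share = (wrong_patient + wrong_site) / total_occurrences
harm_wrong_patient = 5 / wrong_patient   # significant harm, wrong-patient
harm_wrong_site = 38 / wrong_site        # significant harm, wrong-site
death_wrong_site = 1 / wrong_site        # the single reported death

print(f"{combined_share:.2%}")      # 0.48%, i.e. roughly 0.5% of occurrences
print(f"{harm_wrong_patient:.1%}")  # 20.0%
print(f"{harm_wrong_site:.1%}")     # 35.5%
print(f"{death_wrong_site:.1%}")    # 0.9%
```

Note that the “approximately 0.5%” figure covers wrong-patient and wrong-site cases combined (132 of 27,370 occurrences).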

It should be also pointed out that there isn’t much in the way of convincing evidence that the Universal Protocol is currently helping.

Finally, as I mentioned above, right there in the New England Journal of Medicine last week was a study that specifically looked at trends over time in medical errors that caused patient injury. In brief, investigators at Harvard and Stanford retrospectively examined the rate of medical errors at ten North Carolina hospitals from 2002 to 2007, a state chosen for the reason described in the introduction:

We chose North Carolina as a site that was likely to have improvement, since it had shown a high level of engagement in efforts to improve patient safety, including a 96% rate of hospital enrollment in a previous national improvement campaign, as compared with an average rate of 78% in other states, and extensive participation in statewide safety training programs and improvement collaboratives.

So what did these investigators find? Let’s just put it this way: This is one case where a picture is worth the proverbial thousand words:

This is Figure 2 from the study. As you can see, there is precious little, if any, improvement in the incidence of harm due to preventable medical errors, at least over the period studied from 2002 to 2007.

Where now?

One might conclude from the previous two studies that I presented that thus far all the safety/quality improvement modalities that have been implemented over the decade since the IOM report have been useless. However, although I do believe that boosters of checklists and various other safety improvement measures probably oversell their ability to make a dent in the rate of medical errors in American hospitals, I also don’t subscribe to the nihilistic view that none of this does any good. For one thing, it is very clear that in certain areas, carefully targeted checklists and other interventions can result in decreases in errors. However, medical errors span many specialties and many institutions, which means that, although, for example, central line checklists might very well greatly decrease the incidence of catheter-associated sepsis, such errors might represent a small enough percentage of overall medical error-associated harm that even huge decreases in such a complication don’t yet show up above the random noise and year-to-year fluctuations in the incidence of harm due to medical errors. Wrong-site surgeries, for instance, although catastrophic “never events,” are still, thankfully, rare; eliminating them, as important as that is, would not be expected to decrease overall rates of harm from medical errors by much. Unfortunately, we don’t appear even to be making much headway in this area.
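The point that eliminating a rare complication can vanish into year-to-year noise can be made quantitative with a back-of-the-envelope sketch. Assuming, purely for illustration, a hospital system logging about 10,000 harm events a year with Poisson-like year-to-year variation:

```python
import math

# Back-of-the-envelope illustration (the 10,000-event baseline is an
# assumption): eliminating an error type that accounts for ~0.5% of all
# harm events removes fewer events than ordinary year-to-year noise.

baseline = 10_000                      # assumed annual harm events
prevented = round(baseline * 0.005)    # ~0.5% of events eliminated
noise_sd = math.sqrt(baseline)         # Poisson standard deviation

print(prevented)        # 50 events prevented per year
print(round(noise_sd))  # ~100 events of random year-to-year fluctuation
```

Since the roughly 50 prevented events are only half the size of the roughly 100-event random fluctuation, a single-year before-and-after comparison could not distinguish the improvement from noise, which is consistent with huge gains in one narrow area failing to show up in overall harm rates.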

Alternatively, one can argue, as the authors of these last two studies do, that we haven’t yet gone far enough in our safety interventions. For example, measures designed to prevent wrong site and wrong patient surgery are only implemented in the operating rooms:

But these protocols “are not sufficient,” Stahel says. They only apply to the operating room, he says, and nearly one-third of the botched procedures in the study took place in doctor’s offices. Moreover, the study showed that many operating-room mistakes start out in biopsy labs or during the imaging and diagnosis process.

“A lot of wrong-side, wrong-patient errors occur outside of the operating room,” Stahel says. “We should have time-outs for labeling for samples. If the lab mixes up the sample, the consequences may be worse than erroneously cutting off the wrong leg. I think we should extrapolate time-outs to internal medicine [and] laboratories.”

In the discussion of the NEJM study, the authors comment:

Our findings validate concern raised by patient-safety experts in the United States and Europe that harm resulting from medical care remains very common. Though disappointing, the absence of apparent improvement is not entirely surprising. Despite substantial resource allocation and efforts to draw attention to the patient-safety epidemic on the part of government agencies, health care regulators, and private organizations, the penetration of evidence-based safety practices has been quite modest. For example, only 1.5% of hospitals in the United States have implemented a comprehensive system of electronic medical records, and only 9.1% have even basic electronic record keeping in place; only 17% have computerized provider order entry. Physicians-in-training and nurses alike routinely work hours in excess of those proven to be safe. Compliance with even simple interventions such as hand washing is poor in many centers.

A reliable measurement strategy is required to determine whether efforts to enhance safety are resulting in overall improvements in care, either locally or more broadly. Most medical centers continue to depend on voluntary reporting to track institutional safety, despite repeated studies showing the inadequacy of such reporting. The patient-safety indicators of the AHRQ are susceptible to variations in coding practices, and many of the measures have limited sensitivity and specificity. Recent studies have shown that the trigger tool has very high specificity, high reliability, and higher sensitivity than other methods. Manual use of the trigger tool is labor-intensive, but as electronic medical records become more widespread, automating trigger detection could substantially decrease the time required to use this surveillance tool.

In other words, our ability to detect medical errors in retrospective studies is maddeningly inadequate; there is no standardization in how errors are reported, nor even mandatory reporting of such errors; and the efforts we have implemented have been largely confined to hospitals, when most patients receive care in the outpatient setting, including increasingly invasive procedures. Clearly, much more needs to be done. Checklists, such as the Universal Protocol, can only be part of the solution.

As disappointing as this is, I do find reason for optimism. Increasingly, regulatory agencies and third party payors are requiring the implementation of measures to improve patient safety. The problem is that, with the exception of some very obvious interventions, such as requiring hand-washing and sterile gowning before inserting central lines, some of these interventions have not been as well studied as they should be. For instance, it is widely claimed that the adoption of electronic medical records will allow for greater patient safety through the automation of checklists and various other safety checks. It is not yet clear that this will be true as EMRs are more widely adopted.

It should not be surprising that decreasing the rate of medical errors and the harm they cause does not lend itself to easy answers. The reasons for medical errors are many and multifarious, whether you accept the IOM’s estimates of the number of deaths caused by them or consider those estimates flawed, likely too high, and presented without the context of what the expected death rate in hospitals would be in the absence of medical errors. Solving the problem and minimizing medical errors that result in harm will similarly not yield to a single strategy; our attacks on this problem will have to be as multi-pronged as the problem itself.

Posted in: Clinical Trials, Science and Medicine


25 thoughts on “Science-based medicine and improving patient safety and quality of care”

  1. BillyJoe says:

    I often wonder how it is possible to remove the wrong limb. Presumably there is something terribly wrong with the limb to be removed (eg gangrene) whilst the other limb is presumably not so affected. Wouldn’t you just look at the limbs to confirm which one is diseased (eg gangrenous)?

    In any case, if I’m ever in the position of such a patient, I’m going to mark my diseased limb myself. And, just in case, I’m going to mark the good limb “NO, NOT THIS ONE YOU IDIOT!”

    (On second thoughts, perhaps I’ll leave off the last two words.)

  2. daedalus2u says:

    I suspect that the reason airplane pilots are ok with checklists is that they know the state of the plane is extremely important, and they also know that they didn’t do every bit of maintenance on the plane that is important and they don’t have the expertise to do every bit of maintenance on the plane that is important. If they want to know that the engine is ok, they need a report from the engine mechanic that checked the engine to make sure it is ok and they have to rely on that report which depends on the mechanic’s expertise.

    I suspect that surgeons feel they do have the expertise to do everything that is necessary to be done to and for the patient prior to surgery. They know how to prep the patient, and so when they imagine the patient being prepped, they imagine it being done the way they would do it, which is 100.0000% perfectly and without flaws.

    The airplane pilot doesn’t know how to rebuild an engine, so he doesn’t have a way to imagine it being done; all he can do is look at the checklist and rely on the expertise of the mechanic. I suspect that back when airplane pilots did do all their own maintenance, they too would have thought a checklist unnecessary.

  3. David Gorski says:

    Also, I can’t help but think that pilots are OK with checklists because, if the plane goes down, they’ll be just as dead as the passengers. :-)

  4. DevoutCatalyst says:

    Yes, just as dead, and you can’t get much deader than that.

    Pilots like to watch cool videos with titles such as The 17 Most Popular Ways to Fall Out of the Sky. Pilot error remains very real in a realm unforgiving of error.

  5. qetzal says:


    That’s an interesting perspective, but I could also see it working the opposite way. If surgeons have as much ego as they’re always accused of, I’d expect them to assume that no one else could do things as well as they did. Hence, that prep work done by anyone else was likely to be inferior and would need to be checked for flaws by the surgeon. (Of course, that wouldn’t convince surgeons to apply checklists to themselves, only to everyone else!)

    I like Dr. Gorski’s explanation. One could even give it a Darwinian slant: if pilots who didn’t use checklists were more likely to die (especially in the early days of flight), then pilots who did use checklists had a competitive advantage in filling the niche and training the next generation of pilots (the ‘offspring,’ so to speak). :)

    More seriously, I agree there should be a zero tolerance philosophy regarding wrong-patient/wrong-site surgeries. But it’s not clear to me how often this happens. Stahel et al. report 132 wrong-patient or wrong-site surgeries from a database of 27,370 adverse occurrences. Did they say how many total surgeries were covered in their data set?

  6. David Gorski – Thanks for the article. This is one of my favorite kinds of articles on SBM, a very nuanced look at the rewards and difficulties of improving safety in medicine.

    “Also, I can’t help but think that pilots are OK with checklists because, if the plane goes down, they’ll be just as dead as the passengers.”

    Yes, thank you for beating me to the punch on that one. :) I have a couple of family members who are pilots. Another difference might be that pilots start their training out with checklists. Most pilots I’ve met seem to regard checklists as their friend. Part of doctors’ resistance may be merely a general resistance to change.

    I’d also add that it seems to me commercial, and to some extent private, pilots willingly put themselves under more surveillance than a doctor typically does. Because pilots are working within a shared space, navigation and take-off or landing errors may endanger other planes. They must ask for permission for many actions.

    Just as an aside, many pilots are also required to undergo periodic simulator training and testing. This enables them to practice for emergency events that would be too dangerous to practice in real life. Recently I heard about a project in Michigan that is designing a training simulation system for cardiac surgery. Interesting stuff.

  7. daedalus2u says:

    The problem with a “zero tolerance” approach to mistakes is that it is unachievable, and it puts the emphasis on finding and punishing mistakes rather than on generating a process by which the potential for mistakes and faults is minimized. Quality has to be built into a process; it can’t be inspected in at the end. You can’t know that you have a “zero mistakes” process until after the fact, and then it is too late because you have already had a mistake (or not).

    Take the recent “mistake” of an engine blowing up in flight. A bad “mistake,” but because the plane had multiple engines, it didn’t lead to loss of life. The “process” of flying through the air in a heavier-than-air machine with engines that can blow up was made fault tolerant by using multiple engines, and a fatal accident was prevented.

    In trying to prevent future engine failure mistakes, you don’t just punish the individuals responsible for the engine blowing up. You try to fix the process by which engines blowing up and almost bringing down a plane happened. That process includes many non-engine systems. Punishing those responsible for the systems that fail compels them to devote resources to blaming failures on systems they are not responsible for.

    In the case of preventing infections, there are multiple lines of prevention, but none of those lines are perfect, and none can be expected to be perfect (because bacteria are everywhere and are unavoidably on every patient and on every clinician). Some of the methods to reduce the chance of infection can also cause infections. A major cause of infection is the knocking out of normal flora by treating a first infection with antibiotics. Normal flora prevent many infections by suppressing pathogens. Antibiotics don’t work against resistant organisms, and all antibiotic use necessarily results in bacteria being exposed to sub-inhibitory antibiotic levels, which can lead to increased antibiotic resistance. I have a technique which might help by lowering the background levels of potential pathogens.

    I appreciate that this would not be a panacea, but I think it would help some. Would it help as much as a checklist? Probably not. If it were added to procedures, with or without a checklist, would it help some? Probably yes in both cases, and probably more in procedures without a checklist. In procedures that already use a checklist, the major improvement might be in reducing the background level of resistant bacteria. Even if the infection rate didn’t change but the infections were with antibiotic-sensitive organisms, that would be a big improvement, though one that might be hard to document.

    The methodolatry of EBM has reared its ugly head in infection control too. When hospitals tried to test whether adding checklists actually helped, the effort was deemed “research” and so required prospective “informed consent” from all participating patients and clinicians. I think there was an SBM post on that episode.

  8. Harriet Hall says:

    It is illogical to use medical errors to criticize SBM. It is not SBM itself that is at fault; it is the implementation of SBM by fallible humans. A serious problem, but a separate issue.

    Pilots do indeed use checklists, but they become so routine that it is easy to check something off without actually thinking or observing. If you expect the fuel gauge to read “full” it is easy to glance at it and think you saw “full” when it actually read “empty.” We misperceive because we see what we expect to see even when it’s not there. I remember reading an account of an accident where the landing gear failed to deploy and the pilot and co-pilot had followed the checklist and managed to miss all the indications including a loud “gear-up” audio warning in the cockpit.

    The most ingenious accident-prevention strategy I ever observed was when the squadron commander flew with a subordinate as co-pilot and told him, “I’m going to deliberately make a mistake on this flight, and I want you to pay close attention and tell me immediately when it happens; this is a test for you.” The commander would then do his best NOT to make any mistakes. This kept the subordinate alert and removed the natural reluctance to question the actions of a superior.

  9. David Gorski says:

    It is illogical to use medical errors to criticize SBM. It is not SBM itself that is at fault; it is the implementation of SBM by fallible humans. A serious problem, but a separate issue.

    Well, yes and no. Your point is taken, and, of course, I wasn’t criticizing SBM because of medical errors. That being said, developing strategies to alter the system so that medical errors become much less frequent definitely falls within the purview of SBM, and, quite frankly, we haven’t been doing a very good job of it for a very long time. The results of research into quality improvement initiatives are quite clear that most medical errors are due to systemic issues rather than individual culpability, although frequently both are at play. The key is to build a system in which serious errors are identified and intercepted before they are made, without building a system so cumbersome that it is impractical.

    Your skepticism about airline checklists notwithstanding, the evidence is quite compelling that the airline industry makes far, far fewer serious, life-threatening errors than we do in medicine. It’s not even in the same ballpark; it’s orders of magnitude lower. Checklists are only part of the reason. We would also do well to study how the airline industry has in general built a culture that emphasizes quality and accountability. Not everything will be translatable to medicine, but a surprising amount of it is.

    There is one exception to the observation that far more people die of medical errors than die in plane crashes, and Kimball will be quite happy to hear it even if he doesn’t know it already. That’s anesthesia-related deaths, which are almost as low on a per-million basis as airline fatalities are. Clearly, at least one area of medicine is doing something right.

  10. David Gorski says:

    The problem with a “zero tolerance” approach to mistakes is that it is unachievable, and puts the wrong emphasis on finding and punishing mistakes rather than in generating a process by which the potential for mistakes and faults is minimized. Quality of a process has to be built in to the process, it can’t be inspected in at the end.

    I actually mostly agree with this. For instance, one Medicare-mandated “never event” is the development of a decubitus ulcer. While it’s true that decubitus ulcers are almost always avoidable with proper patient placement and movement coupled with careful skin inspection and care, nothing is 100%. It should be possible to reduce the incidence of decubitus ulcers to a very low level, but it’s not possible to reduce them to zero. Yet that is what Medicare is demanding and, in fact, Medicare will no longer pay for the care of decubitus ulcers that develop during a hospital stay.

  11. BillyJoe says:

    “The key is to build a system in which serious errors are identified and intercepted before they are made without building a system so cumbersome that it is impractical.”

    Like a checklist I once received back with my car after a mechanic serviced it. The number of items on the list was staggering. I would have been happier with just “brakes,” “spark plugs,” and “headlights,” because then I would have felt confident that those items had actually been checked instead of just checked off.

  12. Harriet Hall says:

    I’m totally in favor of checklists. I just wanted to stress that they can lead to complacency and a false sense of security. A checklist is only as good as the fallible human doing the checking. Maybe what we need is a checklist to make sure the checklists are being properly implemented. :-) Or maybe we need a way to apply the squadron commander’s tactics to a medical setting.

  13. I agree with daedalus2u that the reliance on machines is the most important reason that pilots have not been averse to using checklists (I’d add that many commercial pilots, perhaps most in the 10-20 years after WWII, got their training in the military, a culture that is pro-checklist and anti-individualist). In medicine, anesthesia has had its own checklists for decades, mainly having to do with the anesthesia machine, and has had little trouble joining the recent movement for more generalized uses of checklists.

    There have been medical simulators for about two decades, and they are great. These also began with anesthesiology, which recognized its similarities with aviation. Look here for a short history. My only complaint about simulator training is that we don’t do it frequently enough. As chance would have it, I had my once-every-three-years session just today; I think we should do it every 6 months.

  14. S.C. former shruggie says:

    I’d expect that unless these practices are present while doctors are in medical school, for some doctors they’d be like diets – strictly adhered to at the start, and then falling out of use.

    I know for myself, as a student who switched from a slow-paced, undemanding arts program to a much more demanding biology program, that I try to adopt the study habits of the students who started in biology or nursing with ambitions of entering grad school or medical school, but I always fall out of them again because they aren’t old, ingrained habits for me.

    If I knew how to make people stick to newly adopted behaviours, I’d have paid off my defense-budget-sized student loan debt by now selling books to dieters and smokers.

    Rats given a subcutaneous injection of capsaicin show less pain response and less licking of the injection site if there’s a pair of cardboard eyes staring at them to scare them. Maybe you could bring in new scary cardboard cut-out regulatory figures to scare physicians and renew the Hawthorne effect?

  15. Always Curious says:

    Very nice article & informative! It’s interesting that the graphs presented above consistently showed Internal Reviewers being harsher than External Reviewers. I wonder if that same thing might be coloring the self-reported database?

    Even though many of us tend to be our own harshest critics, I feel that the medical field could benefit from better formalized or structured oversight. It seems that the current structure of the medical field allows problem doctors too much ability to sidestep, escape, or otherwise disguise their mistakes. Or perhaps that’s mostly media hype?

  16. rwdrwd says:

    The IOM, and subsequently the press, spun the data on medical errors out of proportion. Four months after the IOM report, one of the lead authors of the Harvard study on which it was based published a repudiation of sorts, indicating that their findings had been spun the wrong way.

    Others published similar criticisms, but the horse was already out of the barn.

  17. Others published similar criticisms, but the horse was already out of the barn.

    For a summary of those criticisms, look here under The “Third Leading Cause of Death” Myth.

  18. qetzal says:

    A checklist is only as good as the fallible human doing the checking. Maybe what we need is a checklist to make sure the checklists are being properly implemented.

    You know how they deal with that problem under Good Manufacturing Practices (GMPs) for drugs, right? One person goes through the ‘checklist’ (actually, a batch or test record) and initials each step as it’s completed. A second person (the verifier) watches the first person, and also initials after each step to verify that the first person really did what their initials suggest!

    I’m just waiting for the day that FDA says we need a third person to confirm the second person’s verification!

    But to be fair, FDA is completely on board with designing quality into the process rather than trying to test for it at the end. They are also very committed to root cause analysis after any kind of batch or test failure, as well as to corrective and preventive actions (CAPAs) to reduce similar problems in the future. Some of what they require is pretty over-the-top, but there’s still a lot in the GMPs that might be usefully applied to minimizing medical errors.
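    The dual sign-off scheme described above can be sketched in a few lines. This is a hypothetical toy model, not any real GMP software; the class and field names are invented for illustration:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Step:
    """One line of a batch/test record: the work plus two sets of initials."""
    description: str
    operator: Optional[str] = None  # initials of the person performing the step
    verifier: Optional[str] = None  # initials of the independent witness

    def sign(self, operator: str, verifier: str) -> None:
        # The GMP-style rule: the verifier must be a second person.
        if operator == verifier:
            raise ValueError("verifier must be a different person")
        self.operator = operator
        self.verifier = verifier

    @property
    def complete(self) -> bool:
        return self.operator is not None and self.verifier is not None

@dataclass
class BatchRecord:
    steps: List[Step] = field(default_factory=list)

    @property
    def complete(self) -> bool:
        # The record is releasable only when every step carries both signatures.
        return all(step.complete for step in self.steps)

record = BatchRecord([Step("Weigh raw material"), Step("Mix for 10 minutes")])
record.steps[0].sign("AB", "CD")
print(record.complete)  # False: the second step is still unsigned
record.steps[1].sign("AB", "CD")
print(record.complete)  # True
```

    Note that qetzal’s joked-about third person would just be one more initials field per step; the structure, a record that is incomplete until every step carries independent verification, stays the same.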

  19. David Gorski “Yet that is what Medicare is demanding and, in fact, Medicare will no longer pay for the care of decubitus ulcers that develop during a hospital stay.”

    I wonder if this is an aspect of the attempt to get away from fee-for-service care and move toward fee-for-results that I have heard about in the news over the last few years. So perhaps the intent is not to punish so much as to remove an unintended financial incentive for lower-quality care, one that might actually undermine doctors’ efforts to improve.

    If I am recalling correctly, I believe I heard an interview with Atul Gawande (NPR) where he used the example of a children’s hospital out east that implemented checklists for the treatment of children with asthma. The good news was that use of the checklist correlated with a reduction in ER visits for asthma at the hospital. The bad news was that ER visits for asthma were a significant part of the hospital’s revenue. Oh, here’s a link where he writes on the topic.

  20. rwdrwd says:

    “No pay for adverse events” is a misconception. Medicare stopped paying for the actual care patients receive for ANY inpatient admission 26 years ago: fee-for-service hospital payments by Medicare were eliminated in 1984 with the advent of DRGs. Hospitals lose money on many Medicare admissions even with delivery of exemplary care.

    Before the no-pay policy was implemented in 2008, hospitals could modify the DRG code for the adverse events in question, but even then the modified payment often fell short of the added expenses.

  21. Scott says:

    I recently had surgery and it was interesting to see these issues from the other side. At certain points during the admission/prep process it was clear that the various folks involved were following a checklist. But the one that stands out for me was that I was required to verbally confirm to different people what surgery was to be performed, no fewer than 6 times that I remember – first at preregistration the week before, and last right before I was wheeled off to the OR.

    From reading here I understood that it was part of a process to make absolutely sure that the right surgery was done on the right patient, but if I hadn’t known that I probably would have gotten quite annoyed at the repetition.

    Effective communication with the patient, not just the medical team, will be important.

  22. Zetetic says:

    Is the checklist concept to promote quality of care and improve outcomes really new? Maybe for physicians! Nursing has used them for literally decades. I was involved with developing “Critical Pathways” treatment plans for certain diagnoses at a hospital in San Diego 20 years ago.

  23. raisenberg says:

    A little late here, but time and incentives may help us understand the different ways doctors and pilots might approach a checklist:
    Doctors are frequently very busy, their hours already packed tightly, so they may perceive the time spent on a checklist as a slowdown. They have incentives to use their time efficiently: if they are in private practice, they are likely motivated to increase volume; in an academic center, they may need to get back to clinic, etc.

    Pilots don’t face the same pressures. The checklist is part of the flight preparation, and skipping it probably wouldn’t get the flight off the ground much earlier.

  24. Newcoaster says:

    A very good post on an important topic. (as usual!)

    However, while checklists are obviously useful for surgeries, and invasive procedures like central lines, I’m not sure how applicable they are to the non-surgical fields of medicine. I’m a Canadian family doc, part time ER doc.

    I first heard of military checklists being applied to medicine about 10 years ago. I remember an ER shift where I saw an air force officer with vague, crampy abdominal discomfort. Exam, labs, and plain films were essentially normal and consistent with constipation, so I discharged him with the usual “see a doctor if things change or worsen.” He apparently did get better after taking a laxative and never followed up with the flight surgeon before going on his next posting.

    Well, about 3 weeks later he got more pain, so he went to a doctor in the US and was eventually diagnosed with an appendiceal abscess, which by then required a bigger surgery, a bigger scar, and a longer recovery time. (Of interest, he was initially diagnosed with constipation by the surgeon.) Naturally… he filed a complaint that I had missed an “obvious diagnosis” of acute appendicitis, and that if only hospitals followed military-style checklists and quality control, that wouldn’t have happened. I never really followed that argument. He seemed to think there was an algorithm for abdominal pain I should have been following that would have led inevitably to the correct diagnosis on the day I saw him. But algorithms and cookbook medicine, I think, lead to bad doctors with a false sense of infallibility. Medicine just ain’t that straightforward, folks.

    (FYI for laypeople: appendicitis isn’t as easy to diagnose as they make it look on TV, and it is not reasonable to put everybody who shows up at the ER with a sore tummy into a CT scanner. Well, maybe in the US they do… but that is just CYA medicine because of Americans’ love of lawsuits. Abdominal pain is the ER doc’s least favorite presenting complaint because it could be anything from bad oysters to an aneurysm that is about to pop.)

    But I digress…back to checklists.

    The EMR in my office has a prescription writing program that does a pretty good job of looking for potential drug interactions or doses that are outside the normal range, and won’t print the prescription unless I address the error message. I guess that is a medical checklist of sorts.
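    The kind of interaction gate described above can be sketched as a lookup over unordered drug pairs. This is a hypothetical toy, assuming a hand-written interaction table; real EMRs rely on curated interaction databases and dose-range checks, not a dict:

```python
# Toy interaction table keyed by unordered drug pairs (hypothetical entries).
INTERACTIONS = {
    frozenset({"warfarin", "aspirin"}): "increased bleeding risk",
    frozenset({"simvastatin", "clarithromycin"}): "increased myopathy risk",
}

def interaction_warnings(new_drug, current_drugs):
    """Return warning strings for new_drug against the current medication list."""
    warnings = []
    for existing in current_drugs:
        # frozenset makes the lookup order-independent: A+B matches B+A.
        message = INTERACTIONS.get(frozenset({new_drug, existing}))
        if message:
            warnings.append(f"{new_drug} + {existing}: {message}")
    return warnings

def print_prescription(new_drug, current_drugs, acknowledged=False):
    """Refuse to print until the prescriber addresses any warning messages."""
    warnings = interaction_warnings(new_drug, current_drugs)
    if warnings and not acknowledged:
        raise RuntimeError("; ".join(warnings))
    return f"Rx: {new_drug}"

print(interaction_warnings("warfarin", ["metformin", "aspirin"]))
# ['warfarin + aspirin: increased bleeding risk']
```

    The key design point matches what the EMR does: the warning blocks the output path rather than merely logging, so the prescriber has to acknowledge it before the prescription prints.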

    In BC at least, I am overrun with Clinical Practice Guidelines already, which are essentially checklists (is your diabetic getting A1C checks every 3 months? Are they on an ACE inhibitor? When did they last see the ophthalmologist or the foot care nurse?), and there are efforts to make sure various chronic diseases are being managed in the same way across the province.

    It’s a recognition that there is so much information out there that the average general clinician can’t keep up with the latest findings in the journals. But the CPGs are, of course, based on information that is probably at least 5 years old and potentially out of date, and they carry considerable influence from expert opinion. So I take them with a grain of salt.

Comments are closed.