Rambling Musings on Using the Medical Literature

For those who are new to the blog, I am nobody from nowhere. I am a clinician, taking care of patients with infectious diseases at several hospitals in the Portland area. I am not part of an academic center (although we are affiliated with OHSU and have a medicine residency program). I have not done any research since I was a fellow, 20 years ago. I was an excellent example of the Peter Principle; there was no bench experiment that I could not screw up.

My principal weapon in patient care is the medical literature, accessed throughout the day thanks to Google and PubMed. The medical literature is enormous. There are more than 21,000,000 articles referenced on PubMed, over a million of them if the search term ‘infection’ is used, with 45,000 from last year alone.

I probably read as much of the ID literature as any specialist. Preparing for my Puscast podcast, I skim several hundred titles every two weeks, usually select around 80 references of interest, and read most of them with varying degrees of depth. Yet I am still sipping at a fire hose of information.

The old definition of a specialist is someone who knows more and more about less and less until they know everything about nothing. I often feel I know less and less about more and more until someday I will know nothing about everything. Yet I am considered knowledgeable by the American Board of Internal Medicine (ABIM), who wasted huge amounts of my time, a serious chunk of my cash, and who have declared, after years of testing, that I am recertified in my specialty. I am still Board Certified, but the nearly pointless exercise has left me certified bored. But I can rant for hours on Bored Certification and how out of touch with the practice of medicine the ABIM is.

My concept of an expert is a combination of experience and understanding of the literature. I used to say mastery of the literature, but no one can master a beast that large; I am just riding on the Great A’Tuin of medical writings. Experience comes with time, and I have read that it takes 10 years to become competent in a field. Whether true or not, it matches my experience. I remember as a resident reading notes on patients I had cared for as an intern, and being appalled at what an ignorant doofus I was. In my first year of practice I had a patient who died of miliary tuberculosis, and the diagnosis was, unfortunately, made at autopsy. It was an atypical manifestation of a rare (in the US) disease. About a decade later the case was presented as an unknown to a visiting professor; I had completely forgotten the case, but I piped up from the audience to pontificate on how this had to be miliary TB. Afterwards I was shown the chart and nice documentation as to how clueless I had been a decade earlier. When it comes to being a diagnostician, there is no substitute for experience.

When it comes to treatment? That is where I tell the residents that the three most dangerous words in medicine are ‘In. My. Experience.’ You cannot trust experience when deciding on therapy, especially for relatively unusual diseases. Sometimes I will ask a doc why they use a given antibiotic, usually in a situation where it is being used in a way that is, shall we say, old fashioned. Often the response is “I like it,” as if the choice of a drug is like choosing a beer.

I rely on the literature — such as it is, and limited by my lack of an Ethernet jack in my brain — in deciding the best course of therapy for a patient. The literature is always unsatisfactory. That has always been known. Even with the best studies, there is always the issue of wondering whether the literature applies to your patient and their particular co-morbidities and, perhaps, genetics. As an example, it is becoming evident that the literature on the presentation and treatment of Cryptococcus, which is based on the experience with C. neoformans, is not applicable to C. gattii, a species of the fungus newly arrived in the NW. So how to use a literature that may not be totally relevant to my local conditions? I wing it. It is an educated and experienced winging, but winging it I do.

Given the breadth and depth of the literature, it is nice to have systematic reviews, meta-analyses, and guidelines. As a practicing physician, I find them helpful as they provide an overarching understanding, a conceptual framework, for understanding a disease or a treatment. They are the Reader’s Digest abridged version of a topic, and the references are invaluable. Usually most of the relevant literature is collected in these reviews, making it easier, especially in the era of the Googles and on-line references, to find the original literature.

All three have their flaws, and if you are well versed in a field, you recognize the issues and try and compensate.

As was noted in the recent Archives, the literature supporting the recommendations of the Infectious Diseases Society of America is not necessarily the best of evidence. Really? I’m shocked. Next up, water is wet, fire is hot, and the Archives confirms the obvious.

Results: In the 41 analyzed guidelines, 4218 individual recommendations were found and tabulated. Fourteen percent of the recommendations were classified as level I, 31% as level II, and 55% as level III evidence. Among class A recommendations (good evidence for support), 23% were level I (≥1 randomized controlled trial) and 37% were based on expert opinion only (level III). Updated guidelines expanded the absolute number of individual recommendations substantially. However, few were due to a sizable increase in level I evidence; most additional recommendations had level II and III evidence.

Conclusions: More than half of the current recommendations of the IDSA are based on level III evidence only. Until more data from well-designed controlled clinical trials become available, physicians should remain cautious when using current guidelines as the sole source guiding patient care decisions.

Big duh. Anyone who is a specialist understands the weaknesses in all guidelines, but we also understand their importance. When I was a fellow, one of my attendings was, and still is, one of the foremost experts in the US on Candida; another’s areas of expertise are S. aureus infections and endocarditis.

Both have spent a career thinking deeply on their respective areas of expertise. You learn that while no one is perfect, the breadth and depth of their knowledge and experience gives their recommendations extra weight. Who would you want at the controls of your plane in unexpected and unusual weather conditions: an experienced pilot, or someone who spent a few days on the X-Plane simulator? The same with all the guidelines. When someone with a lifetime of work in a field helps write a guideline, you pay attention to their expertise. You know the recommendations are not necessarily right, but odds are their opinions are better than mine, just as my opinion is usually better than a hospitalist’s, at least as far as infections are concerned. With residents, I try to make a point of differentiating when my recommendation is no better than the next doc’s, and when my recommendation is the Truth, big T, based on the best understanding of the literature at the moment.

This attitude of trusting authority, held by many in medicine, goes against the University of Google approach, where a day of searching and a quick misreading of the abstracts renders everyone an expert. I wonder if other fields are plagued with these quick pseudo-experts. Law is, when the accused attempt to defend themselves.

I certainly would be in favor of more money being spent on infectious disease research, and, one hopes, infectious disease doctors. In a perfect world, every disease would be subjected to careful, extensive clinical trials and I would know, for example, the best therapy for invasive Aspergillus pneumonia in a neutropenic leukemia patient. Until that time, I am, in part, going to rely on the guidelines written by those who have spent a career thinking about the diseases I have to treat. To quote Dr Powers,

“Guidelines may provide a starting point for searching for information, but they are not the finish line…Evaluating evidence is about assessing probability,” Dr. Powers commented in a news release. “Perhaps the main point we should take from the studies on quality of evidence is to be wary of falling into the trap of ‘cookbook medicine,’” Dr. Powers continues. “Although the evidence and recommendations in guidelines may change across time, providers will always have a need to know how to think about clinical problems, not just what to think.”

I was struck by a recent Medscape headline:

Cochrane Review Stirs Controversy Over Statins in Primary Prevention

Having been irritated of late by Cochrane reviews in my area of expertise, I clicked the link. The first three paragraphs are:

A new Cochrane review has provoked controversy by concluding that there is not enough evidence to recommend the widespread use of statins in the primary prevention of heart disease.

The authors of the new Cochrane meta-analysis, led by Dr Fiona Taylor (London School of Hygiene and Tropical Medicine, UK), issued a press release questioning the benefit of statins in primary prevention and suggesting that the previous data showing benefit may have been biased by industry-funded studies. This has led to headlines in many UK newspapers saying that the drugs are being overused and that millions of people are needlessly exposing themselves to potential side effects.

This has angered researchers who have conducted other large statin meta-analyses, who say the drugs are beneficial, even in the lowest-risk individuals, and their risk of side effects is negligible. They maintain that the Cochrane reviewers have misrepresented the data, which they say could have serious negative consequences for many patients currently taking these agents.

Newsweek and The Atlantic both refer to the Cochrane review as a “study.” A review is not what I would consider a study, a term usually synonymous with a clinical trial. The use of the term makes it sound like the Cochrane folks were doing a clinical trial, with patients being randomized to one treatment or another. My sloppy, nonscientific poll of people (all in the medical field, but that is who I have contact with) suggests that no one considers a review of clinical trials to be a study. A review of a novel is not the same as writing a novel.

Sloppy and potentially misleading language from major news outlets. What a surprise.

I have always liked meta-analyses for the same reason I like guidelines: they provide an overarching conceptual framework for understanding a topic. But only a fool would make clinical decisions based upon a meta-analysis alone. Yet meta-analyses seem to be creeping to the top of the list of clinical information to be believed.

There are issues with meta-analyses.

The studies included in a meta-analysis are often of suboptimal quality. Many authors spend time bemoaning the lack of quality studies they are about to stuff into their study grinder. Then, despite knowing that the input is of poor quality, they go ahead and make a sausage. The theory, as I said last week, is that if you collect many individual cow pies into one big pile, the manure transmogrifies into gold. I still think it is a case of GIGO: Garbage In, Garbage Out.

It has always been my understanding that a meta-analysis was used in lieu of a quality clinical trial. Once you had a few high quality studies, you could ignore the conclusions of a meta-analysis.

Evaluations of the validity of the conclusions of meta-analyses have demonstrated that the results of a meta-analysis usually fail to predict the results of future good clinical trials. The JREF million is safe from the Cochrane, I suppose. Their conclusions are no more reliable than the studies they collect and are no more valid than the rest of the medical literature.

We identified 12 large randomized, controlled trials and 19 meta-analyses addressing the same questions. For a total of 40 primary and secondary outcomes, agreement between the meta-analyses and the large clinical trials was only fair (kappa = 0.35; 95 percent confidence interval, 0.06 to 0.64). The positive predictive value of the meta-analyses was 68 percent, and the negative predictive value 67 percent. However, the difference in point estimates between the randomized trials and the meta-analyses was statistically significant for only 5 of the 40 comparisons (12 percent). Furthermore, in each case of disagreement a statistically significant effect of treatment was found by one method, whereas no statistically significant effect was found by the other.
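For readers curious where numbers like these come from: Cohen’s kappa compares observed agreement between two methods to the agreement expected by chance, and the predictive values are simple row proportions of a 2x2 table. A minimal Python sketch, using a hypothetical 2x2 table chosen only to reproduce the quoted figures (an illustration, not the study’s actual data):

```python
def cohens_kappa(table):
    """Cohen's kappa for a 2x2 agreement table.
    table[i][j] = count where method A gave result i and method B gave result j
    (index 0 = significant effect found, 1 = no significant effect)."""
    n = sum(sum(row) for row in table)
    observed = (table[0][0] + table[1][1]) / n           # fraction where both agree
    a_pos = (table[0][0] + table[0][1]) / n              # marginal: A positive (rows)
    b_pos = (table[0][0] + table[1][0]) / n              # marginal: B positive (cols)
    expected = a_pos * b_pos + (1 - a_pos) * (1 - b_pos) # chance agreement
    return (observed - expected) / (1 - expected)

# Hypothetical 40 outcomes: rows = meta-analysis (pos, neg), cols = large trial (pos, neg)
table = [[15, 7],
         [6, 12]]

kappa = cohens_kappa(table)
ppv = table[0][0] / (table[0][0] + table[0][1])  # meta-analysis positive -> trial positive
npv = table[1][1] / (table[1][0] + table[1][1])  # meta-analysis negative -> trial negative
print(f"kappa = {kappa:.2f}, PPV = {ppv:.0%}, NPV = {npv:.0%}")
# -> kappa = 0.35, PPV = 68%, NPV = 67%
```

A kappa of 0.35 is conventionally read as only “fair” agreement, which is the point: the pooled estimate and the subsequent definitive trial agree not much better than chance would allow.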

Once there was a quality definitive trial or three, the meta-analysis becomes, I thought, moot. A quality clinical trial trumps the meta. I guess. I am not so certain that is the attitude anymore given the freak-out in the media about Cochrane and statins.

It seems that the producers of meta-analysis have characteristics like the March of Dimes. Polio was conquered, but rather than folding up their tents and stealing away, they continue to march. That may be a good thing too, as there could be a polio resurgence if some anti-vaccine wackaloons have their way.

If there is a definitive trial, rather than declaring the question settled, the new, perhaps higher-quality, study is folded in with the prior studies and a new meta-analysis is generated. But newer studies are diluted by the older, less robust trials, so the more reliable results are lost in the wash. The best drowned in a sea of mediocrity.
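The dilution argument can be made concrete. In a standard fixed-effect meta-analysis each trial is weighted by the inverse of its variance, so a single large, precise null trial is averaged with, rather than replacing, the older noisy positive ones. A toy sketch with entirely made-up effect sizes and variances:

```python
# Fixed-effect inverse-variance pooling (toy illustration; all numbers hypothetical).
def pooled_estimate(effects, variances):
    """Weighted average of trial effect sizes, each weighted by 1/variance."""
    weights = [1.0 / v for v in variances]
    return sum(w * e for w, e in zip(weights, effects)) / sum(weights)

# One large, precise trial finds no effect (0.0, variance 0.01)...
effects = [0.0] + [0.5] * 5     # ...pooled with five small, older trials showing benefit
variances = [0.01] + [0.1] * 5  # the small trials are ten times noisier

pooled = pooled_estimate(effects, variances)
print(round(pooled, 2))  # -> 0.17: the null result of the best trial is washed out
```

Even though the definitive trial carries the single largest weight, the accumulated pile of mediocre studies still drags the pooled estimate away from its answer.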

For example, I see no need for a meta-analysis on the efficacy of Echinacea. The last several trials, combined with basic science and prior probability, provide sufficient evidence to conclude Echinacea does not work. Good trials win. Ha.

As a practicing specialist, no matter how much I read, I rely in part on guidelines, meta-analyses, and systematic reviews as nice overviews, flawed stopgaps awaiting the large, high-quality clinical trials that, like Godot, may never come.

I have sick patients who need treatment. I need to know what to do. I have to fight the battles with the weapons I have. I have the medical literature and I am not afraid to use it.



14 thoughts on “Rambling Musings on Using the Medical Literature”

  1. daedalus2u says:

    To me, the most important trait of an expert is knowing the limits of your expertise. If you don’t know where your knowledge ends, then you are no expert even if you feel that you are. The feeling of expertise you have is an illusion.

  2. tuck says:

    Great post. A good reminder to us lay people of the difficulties of being a “practicing” physician.

    If only it was a straight-forward and easy profession.

  3. art malernee dvm says:

    “…Evaluating evidence is about assessing probability.”

    “If you don’t know where your knowledge ends, then you are no expert.”

    What is the probability an expert is right, and how can you measure that? Didn’t Dave Sackett, the so-called father of evidence-based medicine, suggest to doctors that once they become “experts” they should find another job? If the experts have taken his advice, they are then taken out of the expert pool of doctors that provide us with expert-opinion-level evidence.
    art malernee dvm
    fla lic 1820

  4. Jann Bellamy says:

    An informative essay giving the layperson insight into how medicine works — I wish the general public were required to read this sort of explanation; politicians too.

    My own “literature review,” if you will, has led to the following observations:

    Courtesy of the house doctor, medical journals are always sitting around. In looking at the tables of contents, I generally see article titles such as these recent examples:

    “Reprogramming Fibroblasts into Cardiomyocytes” (NEJM)
    “Genetic Variant of the Scavenger Receptor BI in Humans” (NEJM)
    “The Incidence of Acetabular Osteolysis in Young Patients with Conventional versus Highly Crosslinked Polyethylene” (Clinical Orthopedics and Related Research)
    “Reduction of Peritendinous Adhesions by Hydrogel Containing Biocompatible Phospholipid Polymer MPC for Tendon Repair”
    (Journal of Bone & Joint Surgery)

    I have no idea of what any of this means and that is significant, because if I could understand it that would indicate medicine is much easier than anyone imagined. My general impression is that these articles somehow attempt to move the ball forward in understanding and treating disease and injury (and maybe some other stuff I don’t understand).

    On my own, I keep up with the chiropractic literature. Some recent article titles:

    “A Report of the 2009 World Games Injury Surveillance of Individuals Who Voluntarily Used the International Federation of Sports Chiropractic Delegation”
    “Development and Preliminary Validation of the MedRisk Instrument to Measure Patient Satisfaction with Chiropractic Care”
    “Immediate Effects of the Audible Pop From a Thoracic Spine Thrust Manipulation on the Autonomic Nervous System and Pain: A Secondary Analysis of a Randomized Clinical Trial”
    (All from the most recent issue of the Journal of Manipulative & Physiological Therapeutics)
    “Use of Chiropractic or Osteopathic Manipulation by Adults Aged 50 and Older: An Analysis of Data from the 2007 National Health Interview Survey” (Topics in Integrative Health Care)

    My impression is that chiropractic journals contain a plethora of articles about who uses chiropractic and whether they like it or not, or how to measure who uses chiropractic and whether they like it or not. Prominent also are case studies, although I didn’t list any here. Another category is “so what?” research, the “Audible Pop” article being an example. It is rare that I can’t read and understand an article in the chiropractic literature, which is significant because I have no background in science. My further impression is that the research serves little, if any, real purpose, which raises the question in my mind, “What’s the point?”

  5. ConspicuousCarl says:

    I am nobody from nowhere.

    You will ALWAYS be somebody to US, Dr. Crislip!

    Newsweek, which was among the rags to swallow Cochrane’s statin slam, just ran a medical nihilism article. It is funny that they should complain about shifty medical news when they are a source of it.

  6. Geekoid says:

    “Great A’Tuin”

    Yes, it’s elephants all the way down.

    ” I wonder if other fields are plagued with these quick pseudo-experts. ”

    Yes. I am a computer programmer, and I deal with people who think the answer to a problem is copying code from someone on Google who had kinda the same problem.

    Fortunately, it’s rare from someone to die from that.

    Now I’m craving a Sausage in a bun. I’ll need to find out where CMoT Dibbler is and go somewhere else.

  7. Joe says:

    I think it is past the time to declare that Cochrane Reviews no longer reflect the ideals that Archie had when setting them up. Too many people with biases, even towards arrant quackery, are on the panels.

  8. “But newer studies are diluted by the older, less robust trials, so the more reliable results are lost in the wash. The best drowned in a sea of mediocrity.”

    Amen. I see this constantly in musculoskeletal pain science. Legions of crappy little studies that do little but muddy the waters. The rare larger trial conducted with some rigour barely makes a dent in the results of the next meta-analysis, when it should rightly put the subject to rest (or very nearly).

    Another fine example of how EBM is tripping on itself.

  9. ” I wonder if other fields are plagued with these quick pseudo-experts. ”

    Sit around and listen to a bunch of web/graphic designers, illustrators, art and creative directors talk about their difficult clients. Turns out that everybody in the world is a creative expert, it’s only that they never had the time to learn anything about typography, page layout, how to visually organize information, communicate abstract concepts visually, draw, paint or use the software. But of course, they know exactly what they want. Usually that is “Make the logo bigger” and/or “Make it look like Amazon*”

    The funny thing is that good designers feel real emotional distress when they have to incorporate a client’s bad idea or revision. To comfort themselves, I’ve heard more than one say, “At least I’m not in medicine or a pilot; no one’s going to die from this ugly web site.”

    *or whatever the current big success is.

  10. criticalist says:

    I fully agree with the comments about meta-analysis. The example that comes to my mind is the Cochrane meta-analysis in 1998 that looked at giving albumin infusions to critically ill patients.

    This was a controversial topic at the time, and many doctors in the UK routinely used albumin. The meta-analysis used a hodgepodge of different small studies with different inclusion criteria, often poorly carried out, and found that albumin seemed to increase mortality. Iain Chalmers, head of Cochrane, famously said as a result of the paper, “I would attempt to sue anyone who gave me an albumin infusion.”

    Most intensivists thought this conclusion was a bit dramatic and could not be reached from the methods used. So in 2004 they published the results of an actual proper trial of albumin in 7000 patients and showed fairly conclusively that it made no difference to mortality, and in fact in some patients may even be beneficial.

    As you say, garbage in, garbage out.

  11. Composer99 says:

    Speaking as a semi-pro musician, I find the average (musically) untrained layperson is far more humble about musical expertise than in science/medicine.

    Which is odd. Not that musically-untrained people are more competent in music than they are in science or medicine (although it remains enormously easier to become a competent or even an expert musician than to become a competent or expert scientist or doctor, I dare say). Only that they appear to be more cognizant of their limits.

    That doesn’t stop them from having strong opinions on what constitutes good or bad music, just that they are well aware that they, personally, are not going to produce much of either type (until the karaoke microphones appear, of course).

    I suppose, pertaining to Dr Crislip’s question about other fields and pseudo-experts, that a pseudo-expert in music is what we would call a ‘music critic’. ;)

  12. JMB says:

    Their conclusions are no more reliable than the studies they collect and are no more valid than the rest of the medical literature.

    Sometimes their conclusions are less reliable than published literature, especially when they reject large numbers of published trials due to “methodological errors”, or raise issues about unquantified risks when in fact there are reliable estimates of risks available. The Cochrane Collaboration is also prone to errors when the authors of the reviews are dealing with subjects outside of their clinical expertise (if they have any clinical expertise). It is hard to judge selection criteria and methods without clinical expertise.

  13. BillyJoe says:

    “The funny thing is that good designers feel real emotional distress when they have to incorporate a clients bad idea or revision.”

    My son is a graphic designer and recently did a design for a sign for a professional practice. A few weeks later he drove past the practice to see the results. He stopped the car and silently wept.

  14. BillyJoe says:

    I don’t quite get this rejection of meta-analyses.

    You don’t reject clinical trials do you? You just reject the ones with fatal methodological flaws. So why do you reject meta-analyses? Why don’t you just reject the ones that include clinical trials with fatal methodological flaws?

Comments are closed.