Journal Club

There is a tradition in medical training called Journal Club. The first rule of Journal Club is you do not talk about Journal Club. In Journal Club, at least in the iterations in which I have participated, one article is selected by an attending, everyone reads it, then the strengths, weaknesses, and applicability are discussed by the group. Usually a top-notch, ground-breaking article was the focus, one that had high potential clinical impact. But since they were good articles in good journals, there was not a lot to learn in regards to critical thinking. While the attending would put the article in context and maybe discuss some rudimentary statistics, there was little discussion of the quality of the study. The main take-home from every study was to question the applicability of the results to populations that were not old, white males, since it seemed all the ground-breaking studies back in the day were a VA Cooperative study of one sort or another.

As I remember it, there was not really a conceptual framework with which to evaluate studies. Bayes' theorem and its application to clinical medicine was never explicitly discussed outside of testing, where you have to consider the prior plausibility of the patient having a disease before you can decide if the test result is a true positive or not. In Portland, Oregon, the chance that a Lyme serology is a false positive is much greater than for a test done in Portland, Maine. Generally speaking, in the information-overload state that is the practice of medicine, clinical trials are taken at face value and tests are considered infallible. Which is a shame, as I wonder how much suboptimal medicine is inflicted on patients by not considering prior plausibility and how accurate a given test is in either ruling in or ruling out a disease. There seems to be a whole industry built around treating patients who have no risk factors for Lyme but do have positive tests of doubtful provenance. We never discussed prior plausibility and its effect on the outcomes of a studied treatment.
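To put numbers on the two Portlands, here is a minimal sketch of Bayes' theorem at work. The sensitivity, specificity, and prevalences below are invented for illustration, not the actual characteristics of any Lyme serology; the asymmetry, not the exact values, is the point.

```python
# Hypothetical numbers: the same test, with the same sensitivity and
# specificity, applied at two very different pretest probabilities.

def positive_predictive_value(prevalence, sensitivity, specificity):
    """Bayes' theorem: P(disease | positive test)."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

sens, spec = 0.90, 0.95  # assumed test characteristics, for illustration

# Portland, Oregon: essentially no endemic Lyme; say 1 in 1,000 pretest probability
print(positive_predictive_value(0.001, sens, spec))  # ~0.02 -- mostly false positives

# Portland, Maine: endemic Lyme; say 1 in 10 pretest probability
print(positive_predictive_value(0.10, sens, spec))   # ~0.67 -- far more believable
```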

I keep coming back to Why Most Published Research Findings Are False by John P. A. Ioannidis as the archetypal framework by which to evaluate the truthiness of studies. A problem with applying the Ioannidis paper is that for a lot of big-ticket items in medicine the preponderance of data suggests the approximately optimal way to treat common diseases. Brouhaha to follow in the comments, as I am sure I will be flooded with counterexamples. I speak as a hospital-based ID doctor, and most of the time for patients admitted to the hospital we have a rough idea as to what needs to be done diagnostically and therapeutically based on the likely causes of the patient's symptoms. There are always fine points deriving from the individual patient's comorbidities. At some level, for example, every community-acquired pneumonia is the same, requiring a beta-lactam and a macrolide as initial empiric therapy, and every community-acquired pneumonia is different, depending on allergies, exposure risks, etc. Humans tend to function in relatively narrow operational parameters, although with nearly infinite combinations of those parameters. New papers often have a vast background of similar studies that places any new work in context. I suspect the framework of the Ioannidis paper has more applicability to the new and unproven, to the cutting edge of research.
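For those who want the machinery, the core of the Ioannidis paper is a single calculation: the probability a claimed finding is true, given the pre-study odds R that the relationship is real, the type I and II error rates, and the bias u. A minimal sketch, with illustrative numbers of my own choosing:

```python
# Post-study probability that a "significant" finding is true, per the
# Ioannidis (2005) framework. R = pre-study odds the relationship is real;
# alpha/beta = type I/II error rates; u = fraction of analyses that would
# have been negative but get reported as positive anyway (bias).

def ppv(R, alpha=0.05, beta=0.20, u=0.0):
    true_pos = R * ((1 - beta) + u * beta)   # true relationships reported "positive"
    false_pos = alpha + u * (1 - alpha)      # false relationships reported "positive"
    return true_pos / (true_pos + false_pos)

# Well-trodden ground (pneumonia-style questions): high pre-study odds
print(ppv(R=1.0))             # ~0.94

# Cutting-edge or implausible hypotheses: low pre-study odds
print(ppv(R=0.05))            # ~0.44
print(ppv(R=0.05, u=0.30))    # ~0.11 once modest bias creeps in
```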

Reading the medical literature critically as a resident or fellow, there is little need to think about all the ways the literature could be wrong. The assumption is that the good studies in the good journals are mostly right and testing is mostly accurate. It wasn't until a decade after my training that I started to think critically, or even needed to think critically, about the medical literature, and then only as part of my interest in SCAMs. As a specialist, an understanding of the ins and outs of the literature comes as part of acquiring the breadth and depth of knowledge in the areas of my expertise (which is more information than you require). The limitations of a given study are always discussed in the context of the entire literature on a topic. Often it is not that a paper is true or false in a binary way; given the qualifiers and limitations of a given study, there is a continuum of truthiness for most of the literature, even in a disease as common as pneumonia. I can talk for an hour (a non-addicting substitute for Ambien) on the issues concerning the clinical trials that resulted in the current guidelines for the treatment of pneumonia. Medicine is messy and complex, filled with qualifiers. As to the rest of medicine? I don't pay that close attention. I only have so many neurons I can devote to medicine, so for areas outside of my expertise, I defer to the experts, which is what most of us have to do in a busy day.

Far more could be learned about critical thinking if Journal Club were devoted not to the best of the best, but to the best of the worst, and there is no area of medicine with worse clinical trials than SCAM. One such crossed the LCD this month as I prepared for my Puscast, by way of Medscape*: "Meditation, Exercise May Decrease Cold Symptoms," said the headline. The authors modestly refer to their study as a "ground-breaking randomized trial of meditation and exercise vs wait-list control among adults aged 50 years and older found significant reductions in ARI illness."

I love the way ground-breaking trails off into qualifiers. But ground-breaking. This requires more than a skim of the interweb summary; it requires going through the original. I canna pass up ground-breaking, now can I?

There is always the unreliable gut check to start. I read the title and think, ‘that can’t be true’ or ‘that’s interesting,’ and then read the paper and think, ‘meh, the gut check was wrong again’ or ‘cool, I’ll try to remember this.’

I tend to have that horrible, Western, reductionist metaphor when thinking about human physiology and pathophysiology: we are meat machines. Americans are often poorly maintained meat machines with suboptimal diets and insufficient exercise. Although my gut reaction was that regular exercise should decrease the risk for infectious diseases, I did not know the data. The same week the meditation article was released, in my literature search I found Physical Activity and Influenza-Coded Outpatient Visits, a Population-Based Cohort Study, which suggested

“Moderate to high amounts of physical activity may be associated with reduced risk of influenza for individuals <65 years.” It was not a comparative clinical trial but a data-mining evaluation comparing “physical activity levels through survey responses, and influenza-coded physician office and emergency department visits through physician billing claims. We used logistic regression to estimate the risk of influenza-coded outpatient visits during influenza seasons. The cohort comprised 114,364 survey respondents who contributed 357,466 person-influenza seasons of observation. Compared to inactive individuals, moderately active (OR 0.83; 95% CI 0.74–0.94) and active (OR 0.87; 95% CI 0.77–0.98) individuals were less likely to experience an influenza-coded visit. Stratifying by age, the protective effect of physical activity remained significant for individuals <65 years (active OR 0.86; 95% CI 0.75–0.98, moderately active: OR 0.85; 95% CI 0.74–0.97) but not for individuals ≥65 years.”
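As an aside on reading those numbers: an odds ratio and its confidence interval can be reconstructed from a simple 2x2 table. The counts below are invented for illustration (the study itself used logistic regression with covariates), but they show roughly what an OR of 0.83 means in practice.

```python
# Hypothetical 2x2 table; the counts are made up, not from the study.
import math

# rows: exposure (moderately active vs inactive); columns: flu visit yes/no
a, b = 850, 9150    # moderately active: visit, no visit
c, d = 1000, 9000   # inactive: visit, no visit

or_hat = (a * d) / (b * c)                    # odds ratio
se = math.sqrt(1/a + 1/b + 1/c + 1/d)         # SE of log(OR), Wald approximation
lo = math.exp(math.log(or_hat) - 1.96 * se)
hi = math.exp(math.log(or_hat) + 1.96 * se)
print(f"OR {or_hat:.2f}, 95% CI {lo:.2f}-{hi:.2f}")  # OR 0.84, 95% CI 0.76-0.92
# A CI that excludes 1.0 is what gets labeled "statistically significant."
```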

A search of the Pubmeds reveals a smattering of studies that demonstrate that both no exercise and excessive exercise increase the risk of upper respiratory infections, with moderate exercise being in the Goldilocks zone for benefit. The data also show that immune function, however they chose to measure it, is better with moderate exercise. The meat machine runs better when active. In epidemiological studies there is always the chance that the perceived cause of benefit, in this case exercise, is only a marker for other reasons for the effect: those who exercise have other factors that decrease risk. Health and disease are never as simple as they appear at first glance. But I have few doubts concerning the multitudinous benefits of regular, moderate exercise.

My first reaction to meditation was bah, humbug. The theory: stress makes one susceptible to infection, meditation can decrease stress, therefore meditation, by relieving stress, will decrease infection risk. The authors say

There is some evidence that enhancing general physical and mental health may reduce ARI burden.
In a series of observational and viral inoculation studies, perceived stress, negative emotion, and lack of social support predicted not only self-reported illness, but also such biomarkers as viral shedding and inflammatory cytokine activity. Evidence suggests that mindfulness meditation can reduce experienced stress and negative emotions.

A bit of a stretch perhaps, but interesting if it pans out. Stress is always a tricky one in the practice of medicine. Every patient seems to perceive they are uniquely susceptible to bad luck; in my practice it is an unusual infection: “if something bad is going to happen, it is always me.” I have never had a patient say, “That’s odd, Doc, I am always so lucky; it is weird how this bit of misfortune affected me.” Patients often perceive themselves to be under inordinate stress. Still, the data does suggest that stress and personality type may increase the risk of infection, so it is plausible that if one could decrease stress with meditation, one could decrease susceptibility to infection.

I am inclined to think the premise behind the study is reasonable based on prior research, and a well-done trial would be further evidence, as if people follow evidence, of the benefits of exercise. In their trial not only was exercise of benefit in preventing acute respiratory illness (ARI), but mindfulness meditation was even better than exercise at preventing acute respiratory problems.

We observed substantive reductions in ARI illness among those randomized to exercise training, and even greater benefits among those receiving mindfulness meditation training.

Unfortunately, the trial has perhaps every known flaw one can make in a clinical study, rendering the results useless, as much as I would like them to be true.

The lead author, Dr. Bruce Barrett, has been supported by NCCAM in the past and I suspect may have a different approach to applying clinical trials to medicine than I do. His response to a negative trial of echinacea for the treatment of colds was

Adults who have found echinacea to be beneficial should not discontinue use based on the results of this trial, as there are no proven effective treatments and no side effects were seen.

The antithesis of an EBM/SBM approach to medicine, but I do not know if it is representative of Dr. Barrett’s general approach to medicine. I would have said, based on the data, that echinacea is crap, doesn’t work, has no reason to work, so quit using it and don’t waste your money. I mention this only because bias of all kinds can color the approach to a trial and its interpretation, and this trial is open to huge amounts of inadvertent bias.

Always the most difficult issue in a study: bias.

Corollary 5: The greater the financial and other interests and prejudices in a scientific field, the less likely the research findings are to be true. Conflicts of interest and prejudice may increase bias, u. Conflicts of interest are very common in biomedical research, and typically they are inadequately and sparsely reported. Prejudice may not necessarily have financial roots. Scientists in a given field may be prejudiced purely because of their belief in a scientific theory or commitment to their own findings. Many otherwise seemingly independent, university-based studies may be conducted for no other reason than to give physicians and researchers qualifications for promotion or tenure. Such nonfinancial conflicts may also lead to distorted reported results and interpretations.

It is why double blinding is so important in clinical trials, as the ability of patients and researchers to fool themselves and each other as to benefit is endless. Without careful double blinding, patient and researcher are no more than Clever Hans.

Problems with the trial: small numbers of patients. While they report outcomes on 149 patients, which is almost respectable, the protocol was actually run twice, once in the fall with 91 patients and once in the spring with 58 patients, and the combined data reported. Different viral seasons, but it appears to be two trials reported as one, and data is not reported from each individual study. It is more a meta-analysis of two small, flawed studies than one larger flawed study.

There are multiple comparisons for both primary and secondary outcomes. When there are small numbers of patients and multiple comparisons, anything ‘significant’ is more likely due to random scatter than to a real effect.
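The arithmetic is worth spelling out. Assuming independent tests at the usual alpha of 0.05 (a simplification), the chance of at least one spurious ‘significant’ result climbs quickly with the number of comparisons:

```python
# Family-wise error rate for k independent tests at alpha = 0.05:
# the probability of at least one false positive.

alpha = 0.05
for k in (1, 5, 10, 20):
    fwer = 1 - (1 - alpha) ** k
    print(f"{k:2d} comparisons -> {fwer:.0%} chance of a fluke 'finding'")
# 1 -> 5%, 5 -> 23%, 10 -> 40%, 20 -> 64%
```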

But the fatal flaw was the lack of blinding. Of course the patients and the researchers knew who was receiving what intervention. It would be difficult to invent placebo exercise or meditation. Patients were called twice a week and if they reported ARI symptoms they received a laboratory evaluation within three days of onset. Not only were patients aware of their assignment, but the study relied on self-reporting to determine if they were starting an ARI. It would have been more impressive if every patient had a laboratory evaluation and nasal swab for pathogens twice a week, regardless of symptoms.

One clear result of the NEJM article was that patients who receive a placebo intervention perceive themselves as better even when they are not. So relying on the patient’s perception of becoming ill while in the meditation or exercise arm of the trial is instantly suspect. Given the potential for poisoning the well, the lack of blinding renders the results useless. Given that the perceived effect of acupuncture depends mostly on the patient’s belief that acupuncture will have an effect, one wonders how much expectation led to the improved results in the meditation group. I wonder what results would occur if patients enrolled in NCCAM-funded SCAM studies were members of the JREF or CSI. So much opportunity for the clinical trial equivalent of Stockholm syndrome, trying to please your researcher.

It was a preliminary study, so flawed as to hardly be ground-breaking; more in the category of maybe interesting, had it actually been done in a way where the data were meaningful. It is the kind of study I would like to see validated. If you use the title as a Google search term, it appears the article is being used more to justify the meditation aspects than the exercise, and as a validation of alternative and complementary medicine in general.

At the end of the day this article at best elicits a meh, so filled with flaws as to be almost a waste of the ink and paper it was printed on. But that is the way of clinical research. Really crappy preliminary trials whose results are either flat-out wrong or markedly overstated will, one hopes, be superseded by better-designed trials where the decline effect will kick in. Better trials, I predict, will demonstrate the benefits of exercise for decreasing the odds of infection, and the dramatic benefits of mindfulness meditation will drift towards the insignificant. And the results of this particularly flawed study will persist longer than any subsequent trial that suggests otherwise: “Adults who have found meditation to be beneficial to prevent colds should not discontinue use based on the results of this trial, as there are no proven effective treatments and no side effects were seen.”

The JREF will owe me a cool million.

* remember, I am a paid Medscape blogger and writer.



9 thoughts on “Journal Club”

  1. windriven says:

    So how does dreck like this get published in Annals of Family Medicine? One presumes that practicing physicians will read this and some portion of them will make similar assumptions of plausibility to those Dr. Crislip mentioned and some portion of them will send some portion of their patients to remedial meditation camp for the summer. Did Mercola or Oz do the peer review?

  2. Angora Rabbit says:

    One of the major flaws in peer-review is that editors are no longer widely read. Thus they don’t know a particular field well (apart from their own) and therefore rely on either their own tame stable of reviewers (who are likely outside the field as well) or reviewers that the authors have suggested.* When the reviews come back, the editors do not have the background or time to critically evaluate the reviews, and basically give a pass to whatever the reviewers said.

    In turn, reviewers are given just two weeks to review a paper and journals apply pressure for a fast turnaround.** Maybe I am slow, but after I read a paper, I set it aside and chew on what I’ve read. Only then do I reread, and then write my comments. My reviews are always late :) but good editors appreciate the feedback.

    *A good author will suggest arm’s length reviewers, but the reality is that we must publish or perish, so the preference is to suggest reviewers who know the research backstory and are likely to be more sympathetic.

    **I find this ironic since my most recent paper was accepted in Nov and has yet to appear in print.

  3. Harriet Hall says:

    “Far more can be learned about critical thinking if Journal Club were devoted not to the best of the best, but to the best of the worst,”

    I would go even further and say we can learn a great deal about critical evaluation of studies by studying the worst of the worst. The flaws are easiest to spot there and can serve as excellent, vivid, memorable learning examples.

    Your experience echoes my own. I learned not in medical school but long after, by reading skeptical literature and CAM studies.

  4. Angora Rabbit says:

    I so agree with you both. Last spring for our grad seminar, the students gave presentations on nutritional supplements. They selected an internet claim and then presented a peer-reviewed paper that “endorsed” that claim. It was a real eye-opener for them, and we spent a significant portion of class time discussing not the papers, but how such trash could get published. It certainly made them feel better about the quality of their own work. It was a sufficiently popular exercise that we may do it again next year.

    If I could add more, one of the best journal club experiences I ever had was when we distributed just the figures and tables, not the paper itself (30 years ago, when photocopies were expensive, which dates me). This forced us to reach our own conclusions and then compare those against the authors’. What an interesting experience! I occasionally do this now with my students and lab and they are always amazed at the difference.

  5. “I would go even further and say we can learn a great deal about critical evaluation of studies by studying the worst of the worst. The flaws are easiest to spot there and can serve as excellent, vivid, memorable learning examples.”

    An interesting followup to the TAM 2012 Dr Google workshop would be a workshop on the basics of skeptical reading of literature, specifically studies, abstracts, and articles reporting on studies.

    A good starting point would be Prometheus’ two part post “Anatomy of a Study: A Dissection Guide”

    http://photoninthedarkness.com/?p=228
    http://photoninthedarkness.com/?p=230

  6. Janet Camp says:

    I’m going to post this anecdote to Dr. Crislip’s post because I just left Portland and I didn’t get to meet him so this is the next best thing I can think of to console myself.

    I was in Portland mostly for a wedding. Whilst sitting at a table at a pre-wedding cocktail party, having a pleasant conversation with a man quite a few years my junior who actually didn’t seem to mind talking to a grandmother, I was feeling rather accomplished at my social skills. Suddenly, the younger man reached into his pocket and pulled out a huge handful of supplements, laid them on the table, and said, “excuse me, I have to take my supplements”! I can only conclude that he does this more than once per day. I, ever the skeptic, replied that he should certainly feel free to have very expensive pee.

    Well, it was fun while it lasted. WHY does SCAM follow me everywhere I go?

    Just to be a little bit on topic–I’m sure this fellow has read some studies that “prove” he should be taking all these supplements.

  7. Vera Montanum says:

    Angora Rabbit said, “… one of the best journal club experiences I ever had was when we distributed just the figures and tables…”

    It frequently happens, but I always find it disconcerting when either (a) data in figures/tables do not coincide with what’s in the text, or (b) the data (in tables and text) tell a different sort of story than what the authors claim. I always assume that it’s me who is “missing something” and have spent hours double-checking numbers and trying to figure out how on Earth the authors came to their conclusions. We should have either fewer journals or better editors and reviewers — there is no acceptable excuse for this sort of travesty, in my opinion.

    BTW… most of my experience with these problems has been in the mental health and pain management fields, but I’m sure the problems are similar elsewhere.

  8. jt512 says:

    @V. Montanum: Your post would be so much more meaningful if you supported your claim with a couple of examples with citations. As it is, it’s just a “Yeah, me too” post, and doesn’t help to elucidate the problem at all.

    Jay

  9. fledarmus1 says:

    On the other hand, the paper described might serve as a nice control for a study on negative biases. Put together another cohort of patients for a study on the effects of moderate exercise and meditation on, say, weight loss or attention span, explain to the patients that the investigators worry about the possibility that exercise and meditation might make them more susceptible to colds and that they will need to monitor themselves for potential cold symptoms, then carry out the same study.
