Journal Club

There is a tradition in medical training called Journal Club. The first rule of Journal Club is you do not talk about Journal Club. In Journal Club, at least in the iterations in which I have participated, one article is selected by an attending, everyone reads it, then the strengths, weaknesses, and applicability are discussed by the group. Usually a top-notch, ground-breaking article was the focus, one that had high potential clinical impact. But since they were good articles in good journals, there was not a lot to learn in regard to critical thinking. While the attending would put the article in context and maybe discuss some rudimentary statistics, little was said about the quality of the study. The main take-home from every study was to question the applicability of the results to populations that were not old, white males, since it seemed all the ground-breaking studies back in the day were a VA Cooperative study of one sort or another.

As I remember it, there was not really a conceptual framework with which to evaluate studies. Bayes' theorem and its application to clinical medicine was never explicitly discussed outside of testing, where you have to consider the prior plausibility of the patient having a disease before you can decide if the test result is a true positive or not. In Portland, Oregon, the chance that a Lyme serology is a false positive is much greater than for a test done in Portland, Maine. In the information-overload state that is the practice of medicine, clinical trials are generally taken at face value and tests are considered infallible. Which is a shame, as I wonder how much suboptimal medicine is inflicted on patients by not considering prior plausibility and how accurate a given test is in either ruling in or ruling out a disease. There seems to be a whole industry built around treating patients who have no risks for Lyme but have positive tests of doubtful provenance. We never discussed prior plausibility and its effect on the outcomes of a studied treatment.
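The two-Portlands point is just Bayes' theorem in action, and a few lines of arithmetic make it concrete. The sensitivity, specificity, and pretest probabilities below are illustrative assumptions, not the actual performance characteristics of any Lyme serology assay:

```python
# Sketch of Bayes' theorem applied to a diagnostic test, with
# hypothetical numbers chosen only to illustrate the principle.

def positive_predictive_value(prior, sensitivity, specificity):
    """Probability of disease given a positive test (Bayes' theorem)."""
    true_pos = prior * sensitivity
    false_pos = (1 - prior) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

# Same test, very different pretest probabilities (assumed values):
sens, spec = 0.90, 0.95
low_prior = positive_predictive_value(0.001, sens, spec)   # low-risk region
high_prior = positive_predictive_value(0.10, sens, spec)   # endemic region

print(f"PPV at 0.1% prior: {low_prior:.1%}")   # ~1.8%: most positives false
print(f"PPV at 10% prior:  {high_prior:.1%}")  # ~66.7%: mostly true positives
```

The test has not changed; only the pretest probability has, yet a positive result goes from almost certainly false to probably true.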

I keep coming back to Why Most Published Research Findings Are False by John P. A. Ioannidis as the archetypal framework by which to evaluate the truthiness of studies. A problem with applying the Ioannidis paper is that for a lot of big-ticket items in medicine the preponderance of data suggests the approximate optimal way to treat common diseases. Brouhaha to follow in the comments, as I am sure I will be flooded with counterexamples. I speak as a hospital-based ID doctor, and most of the time for patients admitted to the hospital we have a rough idea as to what needs to be done diagnostically and therapeutically based on the likely causes of the patient's symptoms. There are always fine points deriving from the individual patient's comorbidities. At some level, for example, every community-acquired pneumonia is the same, requiring a beta-lactam and a macrolide as initial empiric therapy, and every community-acquired pneumonia is different, depending on allergies, exposure risks, etc. Humans tend to function in relatively narrow operational parameters, although with nearly infinite combinations of those parameters. New papers often have a vast background of similar studies that places any new work in context. I suspect the framework of the Ioannidis paper has more applicability to the new and unproven, to the cutting edge of research.

Reading the medical literature critically as a resident or fellow, there is little need to think about all the ways the literature could be wrong. The assumption is that the good studies in the good journals are mostly right and testing is mostly accurate. It wasn't until a decade after my training that I started to think critically, or even need to think critically, about the medical literature, and then only as part of my interest in SCAMs. As a specialist, an understanding of the ins and outs of the literature comes as part of acquiring the breadth and depth of knowledge in the areas of my expertise (which is more information than you require). The limitations of a given study are always discussed in the context of the entire literature on a topic. Often it is not that a paper is binary, true or false; given the qualifiers of the limitations of a given study, there is a continuum of truthiness for most of the literature, even in a disease as common as pneumonia. I can talk for an hour (a non-addicting substitute for Ambien) on the issues concerning the clinical trials that resulted in the current guidelines for the treatment of pneumonia. Medicine is messy and complex, filled with qualifiers. As to the rest of medicine? I don't pay that close attention. I have only so many neurons I can devote to medicine, so for areas outside of my expertise, I defer to the experts, which is what most of us have to do in a busy day.

Far more could be learned about critical thinking if Journal Club were devoted not to the best of the best, but to the best of the worst, and there is no area of medicine with worse clinical trials than SCAM. One such crossed the LCD this month as I prepared for my Puscast, by way of Medscape*: Meditation, Exercise May Decrease Cold Symptoms said the headline. The authors modestly refer to their study as a “ground-breaking randomized trial of meditation and exercise vs wait-list control among adults aged 50 years and older found significant reductions in ARI illness.”

I love the way ground-breaking trails off into qualifiers. But ground-breaking. This requires more than a skim of the interweb summary; it requires going through the original. I canna pass up ground-breaking, now can I?

There is always the unreliable gut check to start. I read the title and think, ‘that can’t be true’ or ‘that’s interesting’, and then read the paper and, ‘meh, the gut check was wrong again’ or ‘cool, I’ll try to remember this.’

I tend to have that horrible, Western, reductionist metaphor when thinking about human physiology and pathophysiology: we are meat machines. Americans are often poorly maintained meat machines with suboptimal diets and insufficient exercise. Although my gut reaction was that regular exercise should decrease the risk for infectious diseases, I did not know the data. The same week the meditation article was released, in my literature search I found Physical Activity and Influenza-Coded Outpatient Visits, a Population-Based Cohort Study, which suggested:

“Moderate to high amounts of physical activity may be associated with reduced risk of influenza for individuals < 65 years.” It was not a comparative clinical trial but a data-mining evaluation comparing “physical activity levels through survey responses, and influenza-coded physician office and emergency department visits through physician billing claims. We used logistic regression to estimate the risk of influenza-coded outpatient visits during influenza seasons. The cohort comprised 114,364 survey respondents who contributed 357,466 person-influenza seasons of observation. Compared to inactive individuals, moderately active (OR 0.83; 95% CI 0.74–0.94) and active (OR 0.87; 95% CI 0.77–0.98) individuals were less likely to experience an influenza-coded visit. Stratifying by age, the protective effect of physical activity remained significant for individuals < 65 years (active OR 0.86; 95% CI 0.75–0.98, moderately active: OR 0.85; 95% CI 0.74–0.97) but not for individuals ≥ 65 years.”

A search of the Pubmeds reveals a smattering of studies that demonstrate both no exercise and excessive exercise increase the risk of upper respiratory infections, with moderate exercise being in the Goldilocks zone for benefit. The data also shows that immune function, however they chose to measure it, is better with moderate exercise. The meat machine runs better when active. In epidemiological studies there is always the chance that the perceived cause of benefit, in this case exercise, is only a marker for other reasons for the effect: those who exercise have other factors that decrease risk. Health and disease are never as simple as they appear at first glance. But I have few doubts concerning the multitudinous benefits of regular, moderate exercise.

My first reaction to meditation was bah, humbug. The theory: stress makes one susceptible to infection, meditation can decrease stress, therefore meditation, by relieving stress, will decrease infection risk. The authors say:

There is some evidence that enhancing general physical and mental health may reduce ARI burden.
In a series of observational and viral inoculation studies, perceived stress, negative emotion, and lack of social support predicted not only self-reported illness, but also such biomarkers as viral shedding and inflammatory cytokine activity. Evidence suggests that mindfulness meditation can reduce experienced stress and negative emotions.

A bit of a stretch perhaps, but interesting if it pans out. Stress is always a tricky one in the practice of medicine. Every patient seems to perceive they are uniquely susceptible to bad luck, which in my practice means an unusual infection: “if something bad is going to happen, it is always me.” I have never had a patient say, “that’s odd, Doc, I am always so lucky, it is weird how this bit of misfortune affected me”; patients often perceive themselves under inordinate stress. Still, the data does suggest that stress and personality type may increase the risk of infection, so it is plausible that if one could decrease stress with meditation, one could decrease susceptibility to infection.

I am inclined to think the premise behind the study is reasonable based on prior research, and a well-done trial would be further evidence, as if people follow evidence, of the benefits of exercise. In their trial not only was exercise of benefit in preventing acute respiratory illness (ARI), but mindfulness meditation was even better than exercise at preventing acute respiratory problems.

We observed substantive reductions in ARI illness among those randomized to exercise training, and even greater benefits among those receiving mindfulness meditation training.

Unfortunately the trial has perhaps every flaw one can make in a clinical study, and together they render the results useless, as much as I would like them to be true.

The lead author, Dr. Bruce Barrett, has been supported by NCCAM in the past and I suspect may have a different approach to applying clinical trials to medicine than I do. His response to a negative trial of echinacea for the treatment of colds was

Adults who have found echinacea to be beneficial should not discontinue use based on the results of this trial, as there are no proven effective treatments and no side effects were seen.

The antithesis of an EBM/SBM approach to medicine, but I do not know if it is representative of Dr. Barrett’s general approach to medicine. I would have said, based on the data: echinacea is crap, doesn’t work, has no reason to work, so quit using it and don’t waste your money. I mention this only because bias of all kinds can color the approach to a trial and its interpretation, and this trial is open to huge amounts of inadvertent bias.

Always the most difficult issue in a study: bias.

Corollary 5: The greater the financial and other interests and prejudices in a scientific field, the less likely the research findings are to be true. Conflicts of interest and prejudice may increase bias, u. Conflicts of interest are very common in biomedical research, and typically they are inadequately and sparsely reported. Prejudice may not necessarily have financial roots. Scientists in a given field may be prejudiced purely because of their belief in a scientific theory or commitment to their own findings. Many otherwise seemingly independent, university-based studies may be conducted for no other reason than to give physicians and researchers qualifications for promotion or tenure. Such nonfinancial conflicts may also lead to distorted reported results and interpretations.

It is why double blinding is so important in clinical trials, as the ability of patients and researchers to fool themselves and each other as to benefit is endless. Without careful double blinding, patient and researcher are no more than Clever Hans.

Problems with the trial: small numbers of patients. While they report outcomes on 149 patients, which is almost respectable, the protocol was actually run twice, once in the fall with 91 patients and once in the spring with 58 patients, and the combined data reported. Different viral seasons, but it appears to be two trials reported as one, and data is not reported from each individual study. It is more a meta-analysis of two small, flawed studies than one larger flawed study.

There are multiple comparisons for both primary and secondary outcomes. When there are small numbers of patients and multiple comparisons, anything ‘significant’ is more likely due to random scatter than to a real effect.
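The arithmetic behind the multiple-comparisons problem is simple and sobering. The number of comparisons below is illustrative, not a count taken from the trial under discussion:

```python
# A minimal sketch of the multiple-comparisons problem: if every tested
# outcome is truly null and each independent test uses p < 0.05, the
# probability of at least one spurious "significant" result grows fast.

alpha = 0.05
k = 20  # hypothetical number of independent comparisons, all null

# P(at least one false positive) = 1 - P(no false positives in k tests)
p_any_false_positive = 1 - (1 - alpha) ** k

print(f"{k} comparisons at p < {alpha}: "
      f"P(at least one 'significant' result by chance) = "
      f"{p_any_false_positive:.0%}")  # ~64%
```

With twenty null comparisons, you have roughly even odds of a headline-worthy "finding" from noise alone, which is why a small trial with many outcomes deserves deep suspicion.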

But the fatal flaw was the lack of blinding. Of course the patients and the researchers knew who was receiving what intervention. It would be difficult to invent placebo exercise or meditation. Patients were called twice a week and if they reported ARI symptoms they received a laboratory evaluation within three days of onset. Not only were patients aware of their assignment, but the study relied on self-reporting to determine if they were starting an ARI. It would have been more impressive if every patient had a laboratory evaluation and nasal swab for pathogens twice a week, regardless of symptoms.

One clear result of the NEJM article was that patients who receive a placebo intervention perceive themselves as better even when they are not. So relying on the patient’s perception of becoming ill while in the meditation or exercise arm of the study is instantly suspect. Given the potential for poisoning the well, the lack of blinding renders the results useless. Given that the perceived effect of acupuncture depends mostly on the patient’s belief that acupuncture will have an effect, one wonders how much expectation led to the improved results in the meditation group. I wonder what results would occur if patients enrolled in NCCAM-funded SCAM studies were members of the JREF or CSI. So much opportunity for the clinical trial equivalent of Stockholm syndrome, trying to please your researcher.

It was a preliminary study, so flawed as to hardly be ground-breaking, more in the ‘maybe interesting, if it were actually done in a way where the data were meaningful’ category. It is the kind of study I would like to see validated. If you use the title as a Google search term, it appears the article is being used more to justify the meditation aspects than the exercise, and as a validation of alternative and complementary medicine in general.

At the end of the day this article at best elicits a meh, so filled with flaws as to almost be a waste of the ink and paper it was printed on. But that is the way of clinical research. Really crappy preliminary trials whose results are either flat-out wrong or markedly overstated will, one hopes, be superseded by better-designed trials where the decline effect will kick in. Better trials, I predict, will demonstrate the benefits of exercise for decreasing the odds of infection, and the dramatic benefits of mindfulness meditation will drift towards the insignificant. And the results of this particularly flawed study will persist longer than any subsequent trial that suggests otherwise: “Adults who have found meditation to be beneficial to prevent colds should not discontinue use based on the results of this trial, as there are no proven effective treatments and no side effects were seen.”

The JREF will owe me a cool million.

* remember, I am a paid Medscape blogger and writer.
