A long time ago I read a study about what makes a good doctor. Some things you might think were important, like grades in medical school, were irrelevant. What correlated the best was the number of medical journals a doctor read. I don’t know whether that means good doctors read more journals or reading more journals makes a better doctor.
One thing I do know is that most of us could learn better journal-reading skills. When I was a busy clinician, I did what I suspect many busy clinicians do: I let the journals pile up for a while, then tackled a stack when I got motivated. I would skim the table of contents to pick out articles that I wanted to read, then I would read the abstracts of those articles. If the abstract interested me, I would read the discussion section of the article. If I was still interested, I might go back and read the entire article. But until after I retired, I never really developed the skills to evaluate the quality of the study.
I knew enough not to jump on the bandwagon the first time something was reported, because I had seen promising treatments bite the dust with further testing. But I really wasn’t aware of all the things that can go wrong in a study, and I didn’t know what to look for to decide if the results were really credible. I’m not an academic; I thought the authors knew a lot more than I did, and I trusted them to a degree that was not warranted.
Eventually I developed some critical thinking skills. I’m still learning. I think what taught me the most was reading appallingly bad studies (thanks to supplement manufacturers, chiropractors, energy medicine proponents, and others!). Once I was aware of bad practices, I knew to look for signs of them even in relatively good studies.
I learned a lot from an excellent book, Critical Thinking About Research by Julian Meltzoff. While directed at psychology, its lessons pertain to research in any field. It has chapters covering all the main aspects of research, like sample selection and controlling for confounding variables, but the best part of the book is a series of 16 practice articles. These are made-up studies with flaws deliberately planted. You get a chance to look for the flaws, then to check your answers against Meltzoff’s comments. An added attraction is the puns in the names of the studies’ authors. For instance, the authors of a study on the social effects of tax deadlines are “Levy” and “Hertz.” And Meltzoff even explains the puns for those who don’t get them.
I also learned a lot from hearing other doctors critique studies. Now there is another great opportunity to benefit from that kind of experience. The American Academy of Family Physicians (AAFP) has initiated a monthly “Journal Club” series in its flagship journal American Family Physician.
Each month, three presenters will review an interesting journal article in a conversational manner. These articles will involve “hot topics” that affect family physicians or will “bust” commonly held medical myths. The presenters will give their opinions about the clinical value of the studies discussed. The opinions reflect the views of the presenters, not those of AFP or the AAFP.
In the April installment, they asked “Does the widespread use of the thrombolytic tissue plasminogen activator (t-PA) produce more benefit or harm in patients who experience an acute stroke?” and discussed a recent review article.
There are two kinds of stroke, ischemic and hemorrhagic. Either a part of the brain is deprived of blood (usually from blockage by a clot), or there is bleeding into a part of the brain. If there is a clot, t-PA can be administered to dissolve the clot. The public has learned it is important to rush to the hospital when stroke symptoms begin, because there is only a 3-hour window for using t-PA in an ischemic stroke. Unfortunately, t-PA can cause bleeding complications and might even precipitate the other kind of stroke. The review article showed that among 248,964 patients with ischemic stroke, only 1% received t-PA. Those who received it had a mortality of 11.4%; those who didn’t had a mortality of 6.8%. What? Yes, patients were more likely to die with t-PA treatment than without.
The discussers asked whether perhaps patients with more severe strokes were more likely to get t-PA, whether there was good compliance with the intricate protocol, or whether t-PA was administered to patients who had a “stroke mimic.”
There has been only one good randomized trial, the NINDS trial.
At three months, 50 percent of patients who received t-PA had minimal or no disability compared with 38 percent who received placebo. This 12 percent difference translates into an NNT [number needed to treat] of eight. [You have to treat 8 patients for 1 to benefit.] In the NINDS trial, there was no increase in mortality rates, but the rate of intracerebral hemorrhage was 6.4 percent in patients receiving t-PA and 0.6 percent in patients receiving placebo (NNH [number needed to harm] = 17).
The bottom line is that one in eight patients is helped at three months, one in 17 is harmed, and although the randomized trial showed no increase in mortality, there has been a documented increase in death rates in patients who have received t-PA therapy outside of research trials. One of the discussers said, “When I am asked what I would do if it was my own family member, I answer honestly: I would not give this therapy.”
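The NNT and NNH figures quoted above come from simple arithmetic: each is the reciprocal of the absolute difference in event rates between the treated and control groups. A minimal sketch, plugging in the rates reported from the NINDS trial (the exact rounding convention is my assumption):

```python
def number_needed(rate_treated: float, rate_control: float) -> float:
    """Reciprocal of the absolute risk difference.

    When the treated group does better, this is the NNT (number needed
    to treat); when it does worse, the same formula gives the NNH
    (number needed to harm).
    """
    return 1 / abs(rate_treated - rate_control)

# Benefit: 50% minimal or no disability with t-PA vs. 38% with placebo.
nnt = round(number_needed(0.50, 0.38))   # 1 / 0.12 -> about 8

# Harm: 6.4% intracerebral hemorrhage with t-PA vs. 0.6% with placebo.
nnh = round(number_needed(0.064, 0.006))  # 1 / 0.058 -> about 17

print(f"NNT = {nnt}, NNH = {nnh}")
```

So treating eight patients yields one extra good outcome, while treating seventeen causes one extra intracerebral hemorrhage, which is exactly the trade-off the discussers weigh.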
Among their main teaching points:
(1) The use of t-PA for acute ischemic stroke is a double-edged sword: both beneficial and deleterious effects are noted.
(2) Informed consent, in language that the patient and his or her family can understand, is absolutely necessary when contemplating the use of t-PA for acute ischemic stroke.
(3) The demonstrated efficacy of a drug or intervention in a clinical trial may not translate to effectiveness in the community.
(4) NNT and NNH are powerful tools in documenting an intervention’s effect.
The distinction between efficacy and effectiveness is particularly important to understand: efficacy is how well a treatment performs under the controlled conditions of a clinical trial, while effectiveness is how well it performs in everyday community practice. I applaud the AAFP for publishing this series. We need much more of this kind of thing.