It’s that time of year when every day I can expect to see at least one patient with a concern about Lyme disease. In Lyme-endemic regions such as Western Massachusetts, where I practice pediatrics, summer brings a steady stream of children to my office with the classic Lyme rash (erythema chronicum migrans, or ECM), an embedded tick, a history of a tick bite, or non-specific signs or symptoms that may or may not be due to Lyme disease. Sometimes the diagnosis is relatively straightforward. A child is brought in after a parent has pulled off an engorged deer tick, and there is a classic, enlarging ECM rash at the site of the bite. More often the presentation is less clear, requiring detective work and science-based reasoning to make an informed decision and a diagnostic and therapeutic plan based on the best available evidence.

Depending on the story, the plan may include immediate treatment without any testing (as in the straightforward case described above), immediate testing with treatment deferred pending the results, or waiting as we watch and see how a rash progresses before doing anything. An example of this latter course of action would be when a patient comes in with a pink swelling at the site of a new tick bite. In this case, it may not be clear whether the swelling is a Lyme rash or simply a local reaction to the bite, a much more common occurrence.

The classic ECM rash (an enlarging, red, circular, bull’s-eye rash at or near a tick bite) typically develops 1-2 weeks after a tick bite, but can occur anywhere from 3-30 days later. It then expands and darkens over another 1-3 weeks before fading. This classic rash is not the most common rash of Lyme disease, however, as it occurs in only about 30% of cases. Instead, the rash may be uniformly pink or red (or even darker in the center) without the target-like appearance, or may be a linear rash expanding outward from the tick bite site.
In the case of a patient who comes in with a vague, pink swelling within a few days of a tick bite, we will typically wait and see what happens to the rash. If it is a local reaction, it will likely resolve within another few days. With Lyme disease, the rash will continue to enlarge and declare itself as an ECM rash. Another unclear and not uncommon situation is when a patient comes in with non-specific symptoms such as fatigue, musculoskeletal pains, and headache. If warranted by the history and the physical exam, we may in this case order Lyme testing. This may not give us an answer even if the patient has Lyme disease, because results are often negative in the first few weeks of the disease. In this case, if symptoms persist or evolve, we will repeat the testing in another few weeks, at which point true Lyme disease will usually test positive and can then be treated.

The good news is that the treatment of Lyme disease, particularly in the early, localized phase of the disease, is extremely safe and effective with a 14-day course of antibiotics. The testing is also relatively straightforward, with very good sensitivity and specificity when performed correctly. And this is where the bad news comes in…
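Why "performed correctly" matters so much is easiest to see with a quick Bayes' theorem calculation. The sketch below is purely illustrative: the sensitivity, specificity, and pre-test probability figures are assumed round numbers for demonstration, not published values for Lyme serology. But it shows why the very same test is far more informative in a patient with a suggestive history from an endemic area than in a patient tested "just in case."

```python
# Illustrative only: how a test's usefulness depends on pre-test probability.
# The 95%/95% performance figures and the two prevalence values are assumed
# round numbers, NOT published characteristics of Lyme serologic testing.

def positive_predictive_value(sensitivity, specificity, prevalence):
    """Bayes' theorem: P(disease | positive test)."""
    true_positives = sensitivity * prevalence
    false_positives = (1 - specificity) * (1 - prevalence)
    return true_positives / (true_positives + false_positives)

# The same hypothetical test (95% sensitive, 95% specific) in two settings:
high_pretest = positive_predictive_value(0.95, 0.95, 0.30)  # suggestive story, endemic area
low_pretest = positive_predictive_value(0.95, 0.95, 0.01)   # vague symptoms, low suspicion

print(f"PPV at 30% pre-test probability: {high_pretest:.0%}")
print(f"PPV at  1% pre-test probability: {low_pretest:.0%}")
```

With these assumed numbers, a positive result in the high-suspicion patient is roughly 89% predictive of disease, while the identical positive result in the low-suspicion patient is right only about one time in six. That asymmetry is exactly why history and exam findings drive the decision to test in the first place.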
Last week I wrote about doctors who order unnecessary tests, and the excuses they give. Then I ran across an example that positively flabbered my gaster. A friend’s 21-year-old son went to a board-certified family physician for a routine physical. This young man is healthy, has no complaints, has no past history of any significant health problems and no family history of any disease. The patient just asked for a routine physical and did not request any tests; the doctor ordered labwork without saying what tests he was ordering, and the patient assumed that it was a routine part of the physical exam. The patient’s insurance paid only $13.09 and informed him that he was responsible for the remaining $3,682.98 (no, that’s not a typo). I have a copy of the Explanation of Benefits: the list of charges ranged from $7.54 to $392 but did not specify which charges were for which test. It listed some of the tests as experimental and not covered at all by the insurance policy, and one test was rejected because there was no prior authorization.
While cleaning out some old files, I was delighted to find an article I had clipped and saved 35 years ago: a “Sounding Boards” article from the January 25, 1979 issue of The New England Journal of Medicine. It was written by Joseph E. Hardison, MD, from the Emory University School of Medicine; it addresses the reasons doctors order unnecessary tests, and its title is “To Be Complete.” Today we have many more tests that can be ordered inappropriately, and the article is even more pertinent now and deserves to be recycled. He says,
When challenged and asked to defend their reasons for ordering or performing unnecessary tests and procedures, the reasons given usually fall under one of the following excuses…
Introduction: An unexpected e-mail arrives
One of the consequences of the growing traffic and prominence of this blog over the last few years is that people who would otherwise have probably ignored what I or my partners in blogging write now sometimes actually take notice. Nearly a decade ago, long before I joined this blog as a founding blogger, if I wrote a post criticizing something that a prominent academic said, it was highly unlikely that that person would even become aware of it, much less bother to respond to whatever my criticism was. I was, quite simply, beneath their notice, sometimes happily, sometimes unhappily.
It appears that those days might be over. Last week Dr. Daniel Kopans, a prominent Harvard radiologist and well-known long-time defender of screening mammography, sent me a rather unhappy e-mail complaining about my “attack” on him on this blog, a charge that he repeated in a subsequent e-mail. Before I publish his initial e-mail verbatim (with his permission), I would like to point out that, while it’s true that I did criticize some of Dr. Kopans’ statements rather harshly in my post about the Canadian National Breast Screening Study (CNBSS), even characterizing one statement as a “howler,” I would hardly characterize what I wrote as an “attack.” To me, that term implies something personal. By Dr. Kopans’ apparent definition, what he himself has said and written would also qualify as “attacks”: about investigators like those running the CNBSS (as documented in my post), about H. Gilbert Welch, who published a large study in 2012 estimating the extent of overdiagnosis due to mammography, and about the U.S. Preventive Services Task Force (USPSTF), the group that in 2009 suggested changing guidelines for routine screening mammography in asymptomatic women to begin at age 50 instead of age 40.
Be that as it may, I also wondered why Dr. Kopans hadn’t noticed my CNBSS post until more than three months after it had originally appeared. Then, the day after I received Dr. Kopans’ e-mail, my Google Alert on mammography popped up an article in the Wall Street Journal by Dr. Kopans entitled “Mammograms Save Lives: Criticism of breast-cancer screenings is more about rationing than rationality.” That’s when I guessed that someone probably had either posted or e-mailed Dr. Kopans a link to my previous post in response to that article. Given the confluence of events, I think it’s a perfect time to discuss both Dr. Kopans’ e-mail and his article, because they cover many of the same issues.
One size rarely fits all. Most medical knowledge is derived from studying groups of subjects, subjects who may be different in some way from the individual who walks into the doctor’s office. Basing medicine only on randomized controlled studies can lead to over-simplified “cookbook” medicine. A good clinician interprets study results and puts them into context, considering the whole patient and using clinical judgment to apply current scientific knowledge appropriately to the individual.
CAM practitioners claim to be providing individualized treatments. Homeopaths look up symptoms like “dreams of robbers,” “sensation of coldness in the heart,” and “chills between 9 and 11 AM” in their books, and naturopaths quiz patients in great depth about their habits and preferences; but they don’t have a plausible rationale for interpreting the information they gather. And they have not been able to demonstrate better patient outcomes from using that information.
A new concept, “precision medicine,” was recently featured in UW Medicine, the alumni magazine of my alma mater, the University of Washington School of Medicine. Precision medicine strives to provide truly individualized care based on good science. It identifies the individual variations in people that make a difference in our ability to diagnose and treat accurately. Peter Byers, MD, director of the new Center for Precision Diagnostics at the University of Washington, calls it “the coolest part of medicine.”
As I write this, I am attending the 2014 meeting of the American Association for Cancer Research (AACR, Twitter hashtag #AACR14) in San Diego. Basically, it’s one of the largest meetings of basic and translational cancer researchers in the world. I try to go every year, and pretty much have succeeded since around 1998 or 1999. As an “old-timer” who’s attended at least a dozen AACR meetings and presented many abstracts, I can see various trends and observe the attitudes of researchers involved in basic research, contrasting them with those of clinicians. One difference is, as you might expect, that basic and translational researchers tend to embrace new findings and ideas much more rapidly than clinicians do. This is not unexpected, because the reason scientists and clinical researchers do research is that they want to discover something new. Physicians who are not also researchers become physicians because they want to take care of patients. Because they represent the direct interface between (hopefully) science-based medicine and actual patients, they tend to be more conservative about embracing new findings or rejecting current treatments found not to be effective.
While basic scientists are as human as anyone else and therefore just as prone to be suspicious and dismissive of findings that do not jibe with their scientific world view, they can (usually) eventually be convinced by experimental observations and evidence. As I’ve said many times before, the process is messy and frequently combative, but eventually science wins out, although sometimes it takes far longer than in retrospect we think it should have, an observation frequently exploited by advocates of pseudoscience and quackery to claim that their pseudoscience or quackery must be taken seriously because “science was wrong before.” To this, I like to paraphrase Dara O’Briain’s famous adage that just because science doesn’t know everything doesn’t mean you can fill in the gaps with whatever fairy tale you want. But I digress (although only a little). In accepting the validity of science indicating that a commonly used medical intervention doesn’t help, doesn’t help as much as we thought it did, or can even be harmful, clinicians have to contend with the normal human reluctance to admit to oneself that what one was doing before might not have been of value (or might have been of less value than previously believed) or, worst of all, might have caused harm. Or, to put it differently, physicians understandably become acutely uncomfortable when faced with evidence that the benefit-risk profile of a common treatment or test might not be as favorable as previously believed. Add to that the investment that various specialties have in such treatments, which leads to financial conflicts of interest (COI) and desires to protect turf (and therefore income), and negative evidence can have a hard go among clinicians.
There used to be a time when I dreaded Autism Awareness Month, which begins tomorrow. The reason was simple. From several years ago until perhaps as recently as three years ago, I could always count on a flurry of stories about autism towards the end of March and the beginning of April. That in and of itself isn’t bad. Sometimes the stories were actually informative and useful. However, invariably there would also be a flurry of truly aggravating stories in which the reporter, whether through laziness, lack of ideas, or the desire to add some spice and controversy to the story, would cover the “vaccine angle.” Invariably, the reporter would fall for the “false balance” fallacy, in which advocates of antivaccine pseudoscience like Barbara Loe Fisher, Jenny McCarthy, J. B. Handley, Dr. Jay Gordon, and others would be interviewed in the same story as though they expressed a viewpoint as valid as that of real scientists like Paul Offit, representatives of the CDC, and the like. Even if the view that there is no good evidence that vaccines are associated with an increased risk of autism were forcefully expressed, the impression left behind would be that there was an actual scientific debate when there is not. Sometimes, antivaccine-sympathetic reporters would simply write antivaccine stories.
I could also count on the antivaccine movement to go out of its way to try to implicate vaccines as a cause of the “autism” epidemic, taking advantage of the increased media interest that exists every year around this time. Examples abound, such as five years ago when Generation Rescue issued its misinformation-laden “Fourteen Studies” website, to be followed by a propaganda tour by Jenny McCarthy and her then-boyfriend Jim Carrey visiting various media outlets to promote the antivaccine message.
A bit of good news for a change: a “Perspective” article in the New England Journal of Medicine describes how point-of-care ultrasound devices are being integrated into medical education. The wonders of modern medical technology are akin to science fiction. We don’t yet have a tricorder like “Bones” McCoy uses on Star Trek, but we are heading in that direction, and the new handheld ultrasound devices are a promising development.
The stethoscope has become iconic, a symbol of medical expertise draped proudly around the neck by doctors and other medical personnel. Before it was invented, doctors could only try to listen to a patient’s heart by direct application of ear to chest. In 1816, Laennec interposed a tube of rolled paper between ear and chest, and the stethoscope was born. It quickly became an essential tool, allowing us to hear the distinctive murmurs produced by different heart valve abnormalities, to take blood pressures, to detect the wheezing of asthma or the collapse of a lung, to hear the bruits caused by atherosclerotic narrowing of blood vessels, and to detect intestinal obstructions by listening for borborygmi (I love that onomatopoeic word!).
The stethoscope allows us to hear sounds produced by the body, but sound also allows us to see inside the body. Diagnostic ultrasound has a multitude of uses. With prenatal sonograms, we can determine the sex of a fetus, watch it suck its thumb, and even take its picture for the family album. With echocardiography we can evaluate heart valves, see fluid accumulation in the pericardium, observe the thickness and motion of the heart wall, and even quantify the efficiency of the pumping process. Ultrasound lets us see clots in blood vessels and stones in the gallbladder, evaluate abdominal organs, detect cysts, screen for carotid artery narrowing and abdominal aortic aneurysms, and guide needles into the body for therapeutic and diagnostic purposes.
The last couple of weeks, I’ve made allusions to the “Bat Signal” (or, as I called it, the “Cancer Signal,” although that’s a horrible name and I need to think of a better one). Basically, when the Bat Cancer Signal goes up (hey, I like that one better, but do bats get cancer?), it means that a study or story has hit the press that demands my attention. It happened again just last week, when stories started hitting the press hot and heavy about a new study of mammography, stories with titles like Vast Study Casts Doubts on Value of Mammograms and Do Mammograms Save Lives? ‘Hardly,’ a New Study Finds, but I had a dilemma. The reason is that the stories about this new study hit the press largely last Tuesday and Wednesday, the study having apparently been released “in the wild” Monday night. People were e-mailing and tweeting the study at me, asking if I was going to blog it. Even Harriet Hall wanted to know if I was going to cover it. (And you know we all have a damned hard time denying such a request when Harriet makes it.) Even worse, the PR person at my cancer center was sending out frantic e-mails to breast cancer clinicians because the press had been calling her and wanted expert comment. Yikes!
What to do? What to do? My turn to blog here wasn’t for five more days, and, although I have in the past occasionally jumped my turn and posted on a day not my own, I hate to draw attention from one of our other fine bloggers unless it’s something really critical. Yet, in the blogosphere, stories like this have a short half-life. I could have written something up and posted it on my not-so-secret other blog (NSSOB, for you newbies), but I like to save studies like this to appear either first here or, at worst, concurrently with a crosspost at my NSSOB. (Guess what’s happening today?) So that’s what I ended up doing, and in a way I’m glad I did. The reason is that it gave me time to cogitate and wait for reactions. True, it’s at the risk of the study fading from the public consciousness, as it had already begun to do by Friday, but such is life.
Rats. Harriet stole what was going to be the title of this post! This is going to be something completely different than what I usually write about. Well, maybe not completely different, but different from the vast majority of my posts. As Dr. Snyder noted on Friday, it’s easy to find new woo-filled claims or dangerous, evidence-lacking trends to write about. Heck, I did it just last week, much to the continued consternation of one of our regular readers and commenters. Examining certain other health-related issues from a science-based perspective is more difficult, but I feel obligated to do it from time to time, not just for a change of pace but to stimulate the synapses and educate myself—and, I hope, you as well—about areas outside of my usual expertise.
We spend a lot of time writing about the scientific basis of medicine, clinical trials, what is and isn’t quackery, and how “complementary and alternative medicine” (CAM) subverts the scientific basis of medicine. However, SBM goes far beyond just that. At least I think of it this way. That’s why I’ve looked at issues that go more to the heart of health policy, which should be based on sound science and evidence. That evidence might take different forms when we try to determine, for example, whether Medicaid results in better health outcomes and by how much health insurance does the same. As is the case with policy issues and economics, conclusions are muddled and messy. The error bars are huge, and the number of potential confounders even huger.