As I write this, I am attending the 2014 meeting of the American Association for Cancer Research (AACR, Twitter hashtag #AACR14) in San Diego. Basically, it’s one of the largest meetings of basic and translational cancer researchers in the world. I try to go every year, and I’ve pretty much succeeded since around 1998 or 1999. As an “old-timer” who’s attended at least a dozen AACR meetings and presented many abstracts, I can see various trends and observe the attitudes of researchers involved in basic research, contrasting them with those of clinicians. One difference is, as you might expect, that basic and translational researchers tend to embrace new findings and ideas much more rapidly than clinicians do. This is not unexpected, because the reason scientists and clinical researchers do research is that they want to discover something new. Physicians who are not also researchers become physicians because they want to take care of patients. Because they represent the direct interface between (hopefully) science-based medicine and actual patients, they tend to be more conservative about embracing new findings or rejecting current treatments found not to be effective.
While basic scientists are as human as anyone else, and therefore just as prone to be suspicious and dismissive of findings that do not jibe with their scientific world view, they can (usually) eventually be convinced by experimental observations and evidence. As I’ve said many times before, the process is messy and frequently combative, but eventually science wins out, although sometimes it takes far longer than in retrospect we think it should have, an observation frequently exploited by advocates of pseudoscience and quackery to claim that their pseudoscience or quackery must be taken seriously because “science was wrong before.” To this, I like to paraphrase Dara O’Briain’s famous adage that just because science doesn’t know everything doesn’t mean you can fill in the gaps with whatever fairy tale you want. But I digress (although only a little). In accepting the validity of science indicating that a commonly used medical intervention doesn’t help, doesn’t help as much as we thought it did, or can even be harmful, clinicians have to contend with the normal human reluctance to admit to oneself that what one was doing before might not have been of value (or might have been of less value than previously believed) or, worst of all, might have caused harm. To put it differently, physicians understandably become acutely uncomfortable when faced with evidence that the benefit-risk profile of a common treatment or test might not be as favorable as previously believed. Add to that the investment that various specialties have in such treatments, which leads to financial conflicts of interest (COI) and a desire to protect turf (and therefore income), and negative evidence can have a hard go among clinicians.
There are times when the best-laid blogging plans of mice and men go awry, and this isn’t always a bad thing. As the day on which so many Americans indulge in mass consumption of tryptophan-laden meat in order to give thanks approached, I had tentatively planned on doing an update on Stanislaw Burzynski, given that he appears to have slithered away from justice yet again. Then what to my wondering eyes should appear in my e-mail inbox but news of a study that practically grabbed me by the collar, shook me, and demanded that I blog about it. As if to emphasize the point, e-mails suddenly started appearing from people who had seen stories about the study and, for reasons that I still can’t figure out after all these years, were interested in my take on it. Yes, I realize that I’m a breast cancer surgeon and therefore considered an expert on the topic of the study, mammography. I also realize that I’ve written about it a few times before. Even so, it never ceases to amaze me, even after all these years, that anyone gives a rodential posterior about what I think. Then I started getting e-mails from people at work, and I knew that Burzynski would have to wait or be relegated to my not-so-secret other blog (I haven’t decided yet).
As is my usual habit, I’ll set the study up by citing how it’s being spun in the press. My local hometown paper seems as good a place to begin as any, even though the story was reprinted from USA Today. The title of its coverage was “Many women receiving unnecessary breast cancer treatment, study shows,” and the article was released the day before the study came out in the New England Journal of Medicine:
One issue that keeps coming up time and time again for me is the issue of screening for cancer. Because I’m primarily a breast cancer surgeon in my clinical life, that means mammography, although many of the same issues come up time and time again in discussions of using prostate-specific antigen (PSA) screening for prostate cancer. Over time, my position regarding how to screen and when to screen has vacillated—er, um, evolved, yeah, that’s it—in response to new evidence, although the core, including my conclusion that women should definitely be screened beginning at age 50 and that it’s probably also a good idea to begin at age 40 but less frequently during that decade, has never changed. What does change is how strongly I feel about screening before 50.
My changes in emphasis and conclusions regarding screening mammography derive from my reading of the latest scientific and clinical evidence, but it’s more than just evidence that is in play here. Mammography, perhaps more than screening for any other disease, is affected by more than just science. Policies regarding mammographic screening are also based on value judgments, politics, and awareness and advocacy campaigns going back decades. To some extent this is true of many common diseases (i.e., whether and how to screen for them is about more than just science), but in breast cancer these issues are arguably more intense. Add to that the seemingly eternal tension between medical communication, in which a simple message repeated over and over is required to get through, and the messy science telling us that the benefits of mammography are confounded by issues such as lead time and length bias, which make it difficult indeed to tell whether mammography (or any screening test for cancer, for that matter) saves lives and, if it does, how many. Part of the problem is that mammography tends preferentially to detect the very tumors that are less likely to be deadly, so it’s not surprising that what I like to call the “mammography wars” periodically heat up. This is not a new issue, but rather a controversy that flares up from time to time. Usually this is a good thing.
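Lead-time bias is easier to see with numbers. The sketch below uses entirely hypothetical figures (illustrative assumptions, not data from any mammography trial) to show how detecting a tumor earlier can inflate measured survival after diagnosis even when the date of death does not move at all:

```python
# Toy illustration of lead-time bias (hypothetical numbers, not real data).
# Suppose a tumor becomes detectable by screening at year 0, causes symptoms
# at year 3, and the patient dies at year 8 regardless of when it is found.

death_year = 8

# Diagnosed when symptoms appear (year 3): measured survival after diagnosis
survival_symptomatic = death_year - 3   # 5 years

# Diagnosed by screening (year 0): measured survival after diagnosis
survival_screened = death_year - 0      # 8 years

# Survival "improves" by 3 years even though the date of death is unchanged;
# the screening test has added lead time, not life.
lead_time = survival_screened - survival_symptomatic
print(lead_time)  # 3
```

This is why improved five-year survival among screen-detected patients, by itself, cannot tell us whether a screening test actually saves lives.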
And these wars just heated up a little bit again late last week.
The U.S. is widely known to have the highest health care expenditures per capita in the world, and not just by a little, but by a lot. I’m not going to go into the reasons for this here, other than to point out that how to rein in these costs has long been a flashpoint for debate. Indeed, most of the resistance to the Patient Protection and Affordable Care Act (PPACA), otherwise known in popular parlance as “Obamacare,” has been fueled by two things: (1) resistance to the mandate that everyone buy health insurance, and (2) the parts of the law designed to control the rise in health care costs. This latter aspect of the PPACA has inspired cries of “Rationing!” and “Death panels!” Whenever science-based recommendations are made that suggest ways to decrease costs by reevaluating screening tests or cutting back on tests and interventions in situations where their use is not supported by scientific and clinical evidence, whether by the government or professional societies, you can count on it not being long before these cries go up, often from doctors themselves.
My perspective on this issue is that we already “ration” care. It’s just that government-controlled single-payer plans and hybrid private-public universal health care plans use different criteria to ration care than our current system does. In government-run health care systems, what will and will not be reimbursed is generally chosen based on evidence, politics, and cost, while in a system like ours what will and will not be reimbursed tends to be decided by insurance companies based on evidence leavened heavily with business considerations that involve appealing to the largest number of employers (who, let’s face it, are the primary customers of health insurance companies, not the individuals insured by their plans). So what the debate is really about, when boiled down to its essence, is how to ration care and by how much, not whether care will be rationed. Ideally, funding allocations would be decided based on the best scientific evidence in a transparent fashion.
The study I’m about to discuss is anything but the best scientific evidence.
Please note: the following refers to routine physicals and screening tests in healthy, asymptomatic adults. It does not apply to people who have been diagnosed with diseases, who have any kind of symptoms or signs, or who are at particularly high risk of certain specific diseases.
Throughout most of human history, people have consulted doctors (or shamans or other supposed providers of medical care) only when they were sick. Not too long ago, the “if it ain’t broke, don’t fix it” mindset changed. It became customary for everyone to have a yearly checkup with a doctor even if they were feeling perfectly well. The doctor would look in your eyes, ears, and mouth, listen to your heart and lungs with a stethoscope, and poke and prod other parts of your anatomy. He would do several routine tests, perhaps a blood count, urinalysis, EKG, chest X-ray, and TB tine test. There was even an “executive physical” based on the concept that more is better if you can afford it. Perhaps the need for maintenance of cars had an influence: the annual physical was analogous to the 30,000-mile checkup on your vehicle. The assumption was that this process would find and fix any problems and ensure that any disease process would be detected at an early stage, when earlier treatment would improve final outcomes. It would keep your body running like a well-tuned engine and possibly save your life.
We have gradually come to realize that the routine physical did little or nothing to improve health outcomes and was largely a waste of time and money. Today the emphasis is on identifying factors that can be altered to improve outcomes. We are even seeing articles in the popular press telling the public that no medical group advises annual checkups for healthy adults. If patients see their doctor only when they have symptoms, the doctor can take advantage of those visits to update vaccinations and any indicated screening tests.
Dr. H. Gilbert Welch has written a new book, Overdiagnosed: Making People Sick in the Pursuit of Health, with co-authors Lisa Schwartz and Steven Woloshin. It identifies a serious problem, debunks medical misconceptions, and contains words of wisdom.
We are healthier, but we are increasingly being told we are sick. We are labeled with diagnoses that may not mean anything to our health. People used to go to the doctor when they were sick, and diagnoses were based on symptoms. Today diagnoses are increasingly made on the basis of detected abnormalities in people who have no symptoms and might never have developed them. Overdiagnosis constitutes one of the biggest problems in modern medicine. Welch explains why and calls for a new paradigm to correct the problem.
Screening for disease is a real pain. I was reminded of this by the publication of a study in BMJ the very day of the Science-Based Medicine Conference a week and a half ago. Unfortunately, between The Amaz!ng Meeting and other activities, I was too busy to give this study the attention it deserved last Monday. Given the media coverage of the study, which in essence tried to paint mammography screening for breast cancer as being either useless or doing more harm than good, I thought it was imperative for me still to write about it. Better late than never, and I was further prodded by an article that was published late last week in the New York Times about screening for cancer.
If there’s one aspect of medicine that causes confusion among the public, and even among physicians, it’s screening for disease; I’d be hard-pressed to come up with a more contentious topic, whether the disease in question is cancer, heart disease, or something else. The reason is that any screening test is by definition looking for disease in an asymptomatic population, which is very different from looking for the cause of a patient’s symptoms. In the latter case, the patient is already being troubled by something. There may or may not be a disease or syndrome responsible for the symptoms, but the very existence of the symptoms clues the physician in that there may be something going on that requires treatment. The doctor can then narrow down the range of possibilities for what may be causing the patient’s symptoms by taking a careful history and performing a physical examination (which will by themselves most often lead to the diagnosis). Diagnostic tests, be they blood tests, X-rays, or other tests, then tend to confirm the suspected diagnosis rather than serve as the primary evidence supporting it.
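A little arithmetic makes the asymptomatic-population point concrete. The prevalence, sensitivity, and specificity figures below are purely hypothetical assumptions chosen for illustration, not numbers for mammography or any real test; the point is only that at low disease prevalence, most positive results in a screened population are false positives:

```python
# Hypothetical illustration of why screening asymptomatic people differs from
# testing symptomatic patients: when disease prevalence is low, even a good
# test produces mostly false positives. All figures below are assumptions.

prevalence = 0.005        # 0.5% of the screened population has the disease
sensitivity = 0.90        # P(test positive | disease present)
specificity = 0.95        # P(test negative | disease absent)

# Fractions of the screened population with each kind of positive result
true_pos = prevalence * sensitivity
false_pos = (1 - prevalence) * (1 - specificity)

# Positive predictive value: chance a positive result reflects real disease
ppv = true_pos / (true_pos + false_pos)
print(round(ppv, 3))  # 0.083 — fewer than 1 in 10 positives are true
```

In a symptomatic patient, the history and exam have already raised the pretest probability far above the population prevalence, which is why the same test performs so differently in the two settings.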