As I write this, I am attending the 2014 meeting of the American Association for Cancer Research (AACR, Twitter hashtag #AACR14) in San Diego. Basically, it’s one of the largest meetings of basic and translational cancer researchers in the world. I try to go every year, and I’ve pretty much succeeded since around 1998 or 1999. As an “old-timer” who’s attended at least a dozen AACR meetings and presented many abstracts, I can see various trends and observe the attitudes of researchers involved in basic research, contrasting them with those of clinicians. One difference is, as you might expect, that basic and translational researchers tend to embrace new findings and ideas much more rapidly than clinicians do. This is not unexpected, because scientists and clinical researchers do research precisely because they want to discover something new. Physicians who are not also researchers become physicians because they want to take care of patients. Because they represent the direct interface between (hopefully) science-based medicine and actual patients, they tend to be more conservative about embracing new findings or rejecting current treatments found not to be effective.
While basic scientists are as human as anyone else and therefore just as prone to be suspicious and dismissive of findings that do not jibe with their scientific world view, they can (usually) eventually be convinced by experimental observations and evidence. As I’ve said many times before, the process is messy and frequently combative, but eventually science wins out, although sometimes it takes far longer than in retrospect we think it should have, an observation frequently exploited by advocates of pseudoscience and quackery to claim that their pseudoscience or quackery must be taken seriously because “science was wrong before.” To this, I like to paraphrase Dara O’Briain’s famous adage that just because science doesn’t know everything doesn’t mean you can fill in the gaps with whatever fairy tale you want. But I digress (although only a little). In accepting the validity of science indicating that a commonly used medical intervention doesn’t help, doesn’t help as much as we thought it did, or can even be harmful, clinicians have to contend with the normal human reluctance to admit to oneself that what one was doing before might not have been of value (or might have been of less value than previously believed) or, worst of all, might have caused harm. Or, to put it differently, physicians understandably become acutely uncomfortable when faced with evidence that the benefit-risk profile of a common treatment or test might not be as favorable as previously believed. Add to that the investment that various specialties have in such treatments, which leads to financial conflicts of interest (COI) and desires to protect turf (and therefore income), and negative evidence can have a hard go among clinicians.
The last couple of weeks, I’ve made allusions to the “Bat Signal” (or, as I called it, the “Cancer Signal,” although that’s a horrible name and I need to think of a better one). Basically, when the Bat Cancer Signal goes up (hey, I like that one better, but do bats get cancer?), it means that a study or story has hit the press that demands my attention. It happened again just last week, when stories started hitting the press hot and heavy about a new study of mammography, stories with titles like Vast Study Casts Doubts on Value of Mammograms and Do Mammograms Save Lives? ‘Hardly,’ a New Study Finds, but I had a dilemma. The reason is that the stories about this new study hit the press largely last Tuesday and Wednesday, the study having apparently been released “in the wild” Monday night. People were e-mailing and tweeting the study at me, asking if I was going to blog about it. Even Harriet Hall wanted to know if I was going to cover it. (And you know we all have a damned hard time denying such a request when Harriet makes it.) Even worse, the PR person at my cancer center was sending out frantic e-mails to breast cancer clinicians because the press had been calling her and wanted expert comment. Yikes!
What to do? What to do? My turn to blog here wasn’t for five more days, and, although I have in the past occasionally jumped my turn and posted on a day not my own, I hate to draw attention from one of our other fine bloggers unless it’s something really critical. Yet, in the blogosphere, stories like this have a short half-life. I could have written something up and posted it on my not-so-secret other blog (NSSOB, for you newbies), but I like to save studies like this to appear either first here or, at worst, concurrently with a crosspost at my NSSOB. (Guess what’s happening today?) So that’s what I ended up doing, and in a way I’m glad I did. The reason is that it gave me time to cogitate and wait for reactions. True, it’s at the risk of the study fading from the public consciousness, as it had already begun to do by Friday, but such is life.
The issue of PSA screening has been in the news lately. For instance, an article in USA Today reported the latest recommendations of the US Preventive Services Task Force (USPSTF): doctors should no longer offer the PSA screening test to healthy men, because the associated risks are greater than the benefits. The story was accurate and explained the reasons for that recommendation. The comments on the article were almost uniformly negative. Readers rejected the scientific evidence and recounted stories of how PSA screening saved their lives.
It’s not surprising that the public fails to understand the issue. It’s complicated and it’s counterintuitive. We know screening detects cancers in an early stage when they are more amenable to treatment. Common sense tells us if there is a cancer present, it’s good to know about it and treat it. Unfortunately, common sense is wrong. Large numbers of men are being harmed by over-diagnosis and unnecessary treatment, and surgery may not offer any advantage over watchful waiting.
PRELUDE: THE PROBLEM WITH SCREENING
If there’s one aspect of science-based medicine (SBM) that makes it hard, particularly for practitioners, it’s SBM’s continual requirement that we adjust what we do based on new information from science and clinical trials. It’s not easy for patients, either. To lay people, SBM’s greatest strength, its continual improvement and evolution as new evidence becomes available, can appear to be inconsistency, and that seeming inconsistency is all too often an opening for quackery. Even when it isn’t, it can cause a lot of confusion, and some physicians are resistant to changing their practice. It’s not for nothing that there’s an old joke in medical circles that no outdated medical practice completely dies until a new generation of physicians comes up through the ranks and the older physicians who believe in the practice either retire or die. There’s some truth in that. As I’ve said before, SBM is messy. In particular, the process of applying new science, as the data become available, to a problem that’s already as complicated as screening asymptomatic people for a disease in order to intervene earlier and, hopefully, save lives can be fraught with confusion and difficulties.
Certainly one of the most contentious issues in medicine over the last few years has been the issue of screening for various cancers. The main cancers that we most commonly subject populations to routine mass screening for include prostate, colon, cervical, and breast cancer. Because I’m a breast cancer surgeon, I most frequently have to deal with breast cancer screening, which means, in essence, screening with mammography. The reason is that mammography is inexpensive, well-tested, and, in general, very effective.
Or so we thought. Last week, yet another piece of evidence to muddle the picture was published in the New England Journal of Medicine (NEJM) and hit the news media in outlets such as the New York Times (Mammograms’ Value in Cancer Fight at Issue).
I see that the kerfuffle over screening for cancer has erupted again to the point where it’s found its way out of the rarefied air of specialty journals to general medical journals and hence into the mainstream press.
Over the last couple of weeks, articles have appeared in newspapers such as the New York Times and Chicago Tribune, radio networks like NPR, and magazines such as TIME Magazine pointing out that a “rethinking” of routine screening for breast and prostate cancer is under way. The articles bear titles such as A Rethink On Prostate and Breast Cancer Screening, Cancer Society, in Shift, Has Concerns on Screenings, Cancers Can Vanish Without Treatment, but How?, Seniors face conflicting advice on cancer tests: Benefit-risk questions lead some to call for age cutoffs, and Rethinking the benefits of breast and prostate cancer screening. These articles were inspired by an editorial published in JAMA last month by Laura Esserman, Yiwey Shieh, and Ian Thompson entitled, appropriately enough, Rethinking Screening for Breast Cancer and Prostate Cancer. The article was a review and analysis of recent studies about the benefits of screening for breast and prostate cancer in asymptomatic populations and concluded that the benefits of large scale screening programs for breast cancer and prostate cancer tend to be oversold and that they come at a higher price than is usually acknowledged.
For regular readers of SBM, none of this should come as a major surprise, as I have been writing about just such issues for quite some time. Indeed, nearly a year and a half ago, I first wrote The early detection of cancer and improved survival: More complicated than most people think and then followed it up with Early detection of cancer, part 2: Breast cancer and MRI. In these posts, I pointed out concepts such as lead time bias, length bias, and stage migration (a.k.a. the Will Rogers effect) that confound estimates of benefit due to screening. (Indeed, before you continue reading, I strongly suggest that you go back and read at least the first of the aforementioned two posts to review the concepts of lead time bias and length bias.) Several months later, I wrote an analysis of a fascinating study, entitling my post Do over one in five breast cancers detected by mammography alone really spontaneously regress? At the time, I was somewhat skeptical that the number of breast cancers detected by mammography that spontaneously regress was as high as 20%, but of late I’m becoming less skeptical that the number may be somewhere in that range. Even so, at the time I did not doubt that there likely is a proportion of breast cancers that do spontaneously regress and that that number is likely larger than I would have guessed before the study. Of course, the problem is that we do not currently have any way of figuring out which tumors detected by mammography will fall into the minority that do ultimately regress; so we are morally obligated to treat them all. My most recent foray into this topic was in July, when I analyzed another study that concluded that one in three breast cancers detected by screening are overdiagnosed and overtreated. That last post caused me the most angst, because women commented and wrote me asking me what to do, and I had to answer what I always answer: Follow the standard of care, which is yearly mammography over age 40.
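The lead time bias mentioned above can be made concrete with a toy simulation (all numbers invented for illustration): suppose every tumor in a hypothetical cohort proves fatal at exactly age 70 no matter what, and screening merely finds each one three years before symptoms would have. Measured five-year survival after detection improves dramatically, yet not a single patient lives a day longer.

```python
import random

random.seed(42)

def five_year_survival(detection_ages, death_age=70):
    """Fraction of patients still alive 5 years after their cancer was detected."""
    return sum(1 for d in detection_ages if death_age - d >= 5) / len(detection_ages)

# Hypothetical cohort: every tumor is fatal at age 70 regardless of treatment.
# Only the age at detection differs between the two groups.
n = 1000
clinical_detection = [random.uniform(66, 69) for _ in range(n)]  # found via symptoms
screen_detection = [age - 3 for age in clinical_detection]       # found 3 years earlier

print(five_year_survival(clinical_detection))  # 0.0: everyone dies within 5 years
print(five_year_survival(screen_detection))    # much higher, yet nobody lives longer
```

The apparent survival benefit in the second group is pure lead time: the clock simply started earlier, which is exactly why raw survival statistics cannot by themselves show that a screening program saves lives.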
These data and concerns have not yet altered that standard of care, and I am not going to change my practice or my general recommendations to women until a new consensus develops.
Screening for disease is a real pain. I was reminded of this by the publication of a study in BMJ the very day of the Science-Based Medicine Conference a week and a half ago. Unfortunately, between The Amaz!ng Meeting and other activities, I was too busy to give this study the attention it deserved last Monday. Given the media coverage of the study, which in essence tried to paint mammography screening for breast cancer as being either useless or doing more harm than good, I thought it was imperative for me still to write about it. Better late than never, and I was further prodded by an article that was published late last week in the New York Times about screening for cancer.
If I had to name the one aspect of medicine that causes the most confusion among the public, and even among physicians, I’d be hard-pressed to come up with anything more contentious than screening for disease, be it cancer, heart disease, or whatever. The reason is that any screening test is by definition looking for disease in an asymptomatic population, which is very different from looking for the cause of a patient’s symptoms. In the latter case, the patient is already troubled by something. There may or may not be a disease or syndrome responsible for the symptoms, but the very existence of the symptoms clues the physician in that there may be something going on that requires treatment. The doctor can then narrow down the range of possibilities by taking a careful history and performing a physical examination (which will by themselves most often lead to the diagnosis). Diagnostic tests, be they blood tests, X-rays, or other tests, then tend to confirm the suspected diagnosis rather than serve as the main evidence supporting it.
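That difference between screening and diagnosis can be put in numbers with Bayes’ theorem. The sketch below uses invented figures (90% sensitivity and specificity throughout, a 30% pre-test probability for a symptomatic patient versus 0.5% prevalence in an asymptomatic population; none of these are the characteristics of any real test) to show why the very same test that performs respectably as a confirmatory tool produces mostly false positives when used for mass screening.

```python
def positive_predictive_value(sensitivity, specificity, prevalence):
    """P(disease | positive test), computed via Bayes' theorem."""
    true_positives = sensitivity * prevalence
    false_positives = (1 - specificity) * (1 - prevalence)
    return true_positives / (true_positives + false_positives)

# Symptomatic patient: history and physical exam have already raised the
# pre-test probability to, say, 30% (assumed for illustration).
print(positive_predictive_value(0.90, 0.90, 0.30))   # ~0.79

# Asymptomatic screening: disease prevalence in the screened population
# is low, say 0.5% (also assumed for illustration).
print(positive_predictive_value(0.90, 0.90, 0.005))  # ~0.04
```

With the same test characteristics, a positive result means disease about four times out of five in the symptomatic patient, but only about one time in twenty-five in the screened population; the other twenty-four positives are false alarms that trigger further workup.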
You’ve all heard the dramatic testimonials in the media: “I had a PSA test and they found my prostate cancer early enough to treat it. The test saved my life. You should get tested too.” The subject of screening tests is one that confuses the public. On the surface, it would seem that if you can screen everyone and find abnormalities before they become symptomatic, only good would result. That’s not true. Screening tests do harm as well as good, and we need to carefully consider the trade-offs.
About half of American men over the age of 50 have had a PSA (prostate-specific antigen) screening test for prostate cancer. Recommendations for screening vary. The US Preventive Services Task Force (USPSTF) says there is insufficient evidence to recommend screening. The American Urological Association and the American Cancer Society recommend screening. Urologists practice what they preach: 95% of male urologists over the age of 50 have been screened. But other groups like the American Academy of Family Physicians recommend discussing the pros and cons of screening with patients and letting them make an informed choice.
Two recent studies published simultaneously in The New England Journal of Medicine have added to the controversy. One concluded that screening does not reduce deaths from prostate cancer; the other concluded that it reduces deaths by 20%.
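It is worth remembering that “reduces deaths by 20%” is a relative figure, and its real-world meaning depends entirely on the baseline risk. As a back-of-the-envelope sketch, assuming (purely for illustration, not from either trial) a baseline of 3 prostate cancer deaths per 1,000 men over the study period, a 20% relative reduction works out to an absolute reduction of 0.6 deaths per 1,000, meaning on the order of 1,700 men screened to avert one death:

```python
def number_needed_to_screen(baseline_risk, relative_risk_reduction):
    """Men who must be screened to prevent one death: 1 / absolute risk reduction."""
    absolute_risk_reduction = baseline_risk * relative_risk_reduction
    return 1 / absolute_risk_reduction

# Assumed baseline: 3 deaths per 1,000 men over the study period (0.003),
# with the reported 20% relative reduction. Both are illustrative inputs.
print(number_needed_to_screen(0.003, 0.20))  # ~1667 men screened per death averted
```

All of those men are exposed to the harms of screening (biopsies, overdiagnosis, treatment complications), which is why a headline relative risk reduction alone cannot settle whether a screening program is worthwhile.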
There is a new industry offering preventive health screening services direct to the public. A few years ago it was common to see ads for whole body CT scan screening at free-standing CT centers. That fad sort of faded away after numerous organizations pointed out that there was considerable radiation involved and the dangers outweighed any potential benefits.
Now what I most commonly see are ads for ultrasound screening. In fact, I am sick and tired of finding them in my mailbox and between the pages of my local newspaper. Ultrasound is certainly safe, with no radiation exposure. It sounds like it might be a good idea, but it isn’t.
Life Line Screening advertises itself as “America’s leading provider of quality health screenings.” They offer “4 tests in less than 1 hour – tests that can save your life.” They travel around the country, setting up their equipment in community centers, churches, and YMCAs. For $129 you get ultrasounds of your carotid arteries, your abdominal aorta, your legs, and your heel bone. They mail you your results 21 days later.