As I write this, I am attending the 2014 meeting of the American Association for Cancer Research (AACR, Twitter hashtag #AACR14) in San Diego. Basically, it’s one of the largest meetings of basic and translational cancer researchers in the world. I try to go every year, and pretty much have succeeded since around 1998 or 1999. As an “old-timer” who’s attended at least a dozen AACR meetings and presented many abstracts, I can see various trends and observe the attitudes of researchers involved in basic research, contrasting them with those of clinicians. One difference is, as you might expect, that basic and translational researchers tend to embrace new findings and ideas much more rapidly than clinicians do. This is not unexpected: scientists and clinical researchers do research because they want to discover something new. Physicians who are not also researchers become physicians because they want to take care of patients. Because they represent the direct interface between (hopefully) science-based medicine and actual patients, they have a tendency to be more conservative about embracing new findings or rejecting current treatments found not to be effective.
While basic scientists are as human as anyone else, and therefore just as prone to be suspicious and dismissive of findings that do not jibe with their scientific world view, they can (usually) eventually be convinced by experimental observations and evidence. As I’ve said many times before, the process is messy and frequently combative, but eventually science wins out, although sometimes it takes far longer than in retrospect we think it should have, an observation frequently exploited by advocates of pseudoscience and quackery to claim that their pseudoscience or quackery must be taken seriously because “science was wrong before.” To this, I like to paraphrase Dara O’Briain’s famous adage that just because science doesn’t know everything doesn’t mean you can fill in the gaps with whatever fairy tale you want. But I digress (although only a little). In accepting the validity of science indicating that a commonly used medical intervention doesn’t help, doesn’t help as much as we thought it did, or can even be harmful, physicians have to contend with the normal human reluctance to admit to oneself that what one was doing before might not have been of value (or might have been of less value than previously believed) or, worst of all, might have caused harm. Or, to put it differently, physicians understandably become acutely uncomfortable when faced with evidence that the benefit-risk profile of a common treatment or test might not be as favorable as previously believed. Add to that the investment that various specialties have in such treatments, which leads to financial conflicts of interest (COI) and desires to protect turf (and therefore income), and negative evidence can have a hard go among clinicians.
If scientific evidence guides our health decisions, we will look back at the vitamin craze of the last few decades with disbelief. Indiscriminate use is, in most cases, probably useless and potentially harmful. We are collectively throwing billions of dollars at supplements, chasing the idea of benefits that have never materialized. Multivitamins are marketed with a veneer of science, but that image is a mirage – rigorous testing doesn’t support the health claims. But I don’t think the routine use of vitamins will disappear anytime soon. It’s a skillfully marketed panacea that about half of us buy into.
Not all vitamin and mineral supplementation is useless. Vitamins and minerals can be used appropriately, when our decisions are informed by scientific evidence: Folic acid prevents neural tube defects in the developing fetus. Vitamin B12 can reverse anemia. Vitamin D is recommended for breastfeeding babies to prevent deficiency. Vitamin K injections in newborns prevent potentially catastrophic bleeding events. But the most common reason for taking vitamins isn’t a clear need, but rather our desire to “improve overall health”. It’s deemed “primary prevention” – the belief that we’re just filling in the gaps in our diet. Others may believe that if vitamins are good, then more vitamins must be better. And there is no debate that we need dietary vitamins to live. The case for indiscriminate supplementation, however, has never been established. We’ve been led to believe, through very effective marketing, that taking vitamins is beneficial to our overall health – even if our health status is reasonably good. So if supplements truly provide real benefits, then we should be able to verify this claim by studying health effects in populations of people who consume vitamins for years at a time. Those studies have been done. Different endpoints, different study populations, and different combinations of vitamins. The evidence is clear. Routine multivitamin supplementation doesn’t offer any meaningful health benefits. The parrot is dead.
It is a triumph of marketing over evidence that millions take supplements every day. There is no question we need vitamins in our diet to live. But do we need vitamin supplements? It’s not so clear. There is evidence that our diets, even in developed countries, can be deficient in some micronutrients. But there’s also a lack of evidence to demonstrate that routine supplementation is beneficial. And there’s no convincing evidence that supplementing vitamins in the absence of deficiency is beneficial. Studies of supplements suggest that most vitamins are useless at best and harmful at worst. Yet the sales of vitamins seem completely immune to negative publicity. One negative clinical trial can kill a drug, but vitamins retain an aura of wellness, even as the evidence accumulates that they may not offer any meaningful health benefits. So why do so many buy supplements? As I’ve said before, vitamins are magic. Or more accurately, we believe this to be the case.
There can be many reasons for taking vitamins, but one of the most popular I hear is “insurance,” which is effectively primary prevention – taking a supplement in the absence of a confirmed deficiency or medical need, in the belief that we’re better off for taking it. A survey backs this up – 48% reported “to improve overall health” as the primary reason for taking vitamins. Yes, there is some vitamin and supplement use that is appropriate and science-based: Vitamin D deficiencies can occur, particularly in northern climates. Folic acid supplements during pregnancy can reduce the risk of neural tube defects. Vitamin B12 supplementation is often justified in the elderly. But what about in the absence of any clear medical need?
Why take a drug, herb or any other supplement? It’s usually because we believe the substance will do something desirable, and that we’re doing more good than harm. To be truly rational, we’d carefully evaluate the expected risks and benefits, estimate the overall odds of a good outcome, and then weigh these factors against any costs (if relevant) to reach a conclusion about value for money. But even with the best available information at the time we make a decision, decisions can still turn out to be bad ones: all the relevant data may not have been made available, or new, unexpected information may emerge later to change our evaluation. (Donald Rumsfeld might call them “known unknowns.”)
As unknowns become knowns, risk and benefit perspectives change. Clinical trials give a hint, but don’t tell the full safety and efficacy story. Over time, and with wider use, the true risk-benefit perspective becomes clearer, especially when large databases can be used to study effects in large populations. Epidemiology can be a powerful tool for finding unexpected consequences of treatments. But epidemiologic studies can also frustrate because they rarely determine causal relationships. That’s why I’ve been following the evolving evidence about calcium supplements with interest. Calcium supplements are taken by almost 1 in 5 women, second only to multivitamins as the most popular supplement. When you look at all supplements that contain calcium, a remarkable 43% of the (U.S.) population consumes a supplement with calcium as an ingredient. As a single-ingredient supplement, calcium is almost always taken for bone health, based on continued public health messages that our dietary intake is likely insufficient, putting women (rarely men) at risk of osteoporosis and subsequent fractures. This messaging is backed by a number of studies that have concluded that calcium supplements can reduce bone loss and the risk of fractures. Calcium has an impressive health halo, and supplement marketers and pharmaceutical companies have responded. There are pills, liquids, and even tasty chewy caramel squares embedded with calcium. It’s also added to fortified foods like orange juice. Supplements are often taken as “insurance” against perceived or real dietary shortfalls, and it’s easy and convenient to take a calcium supplement daily, often driven by the perception that more is better. Few may think that there is any risk to calcium supplements. But there are now multiple safety signals that these products do have risks. And that’s cause for concern.
The American Academy of Family Physicians journal American Family Physician (AFP) has a feature called Journal Club that I’ve mentioned before. Three physicians examine a published article, critique it, discuss whether to believe it or not, and put it into perspective. In the September 15 issue the journal club analyzed an article that critiqued the process for developing clinical practice guidelines. It discussed how two reputable organizations, the United States Preventive Services Task Force (USPSTF) and the American Academy of Pediatrics (AAP) looked at the same evidence on lipid screening in children and came to completely different conclusions and recommendations.
The AAP recommends testing children ages 2-10 for hyperlipidemia if they have risk factors for cardiovascular disease or a positive family history. The USPSTF determined that there was insufficient evidence to recommend routine screening. How can a doctor decide which recommendation to follow?
The issue of PSA screening has been in the news lately. For instance, an article in USA Today reported the latest recommendations of the US Preventive Services Task Force (USPSTF): doctors should no longer offer the PSA screening test to healthy men, because the associated risks are greater than the benefits. The story was accurate and explained the reasons for that recommendation. The comments on the article were almost uniformly negative. Readers rejected the scientific evidence and recounted stories of how PSA screening saved their lives.
It’s not surprising that the public fails to understand the issue. It’s complicated and it’s counterintuitive. We know screening detects cancers in an early stage when they are more amenable to treatment. Common sense tells us if there is a cancer present, it’s good to know about it and treat it. Unfortunately, common sense is wrong. Large numbers of men are being harmed by over-diagnosis and unnecessary treatment, and surgery may not offer any advantage over watchful waiting.
Please note: the following refers to routine physicals and screening tests in healthy, asymptomatic adults. It does not apply to people who have been diagnosed with diseases, who have any kind of symptoms or signs, or who are at particularly high risk of certain specific diseases.
Throughout most of human history, people have consulted doctors (or shamans or other supposed providers of medical care) only when they were sick. Not too long ago, the “if it ain’t broke don’t fix it” mindset changed. It became customary for everyone to have a yearly checkup with a doctor even if they were feeling perfectly well. The doctor would look in your eyes, ears and mouth, listen to your heart and lungs with a stethoscope, and poke and prod other parts of your anatomy. He would do several routine tests, perhaps a blood count, urinalysis, EKG, chest x-ray and TB tine test. There was even an “executive physical” based on the concept that more is better if you can afford it. Perhaps the need for maintenance of cars had an influence: the annual physical was analogous to the 30,000-mile checkup on your vehicle. The assumption was that this process would find and fix any problems and ensure that any disease process would be detected at an early stage, where earlier treatment would improve final outcomes. It would keep your body running like a well-tuned engine and possibly save your life.
We have gradually come to realize that the routine physical did little or nothing to improve health outcomes and was largely a waste of time and money. Today the emphasis is on identifying factors that can be altered to improve outcomes. We are even seeing articles in the popular press telling the public that no medical group advises annual checkups for healthy adults. If patients see their doctor only when they have symptoms, the doctor can take advantage of those visits to update vaccinations and any indicated screening tests.
PRELUDE: THE PROBLEM WITH SCREENING
If there’s one aspect of science-based medicine (SBM) that makes it hard, particularly for practitioners, it’s SBM’s continual requirement that we adjust what we do based on new information from science and clinical trials. It’s not easy for patients, either. To lay people, SBM’s greatest strength, its continual improvement and evolution as new evidence becomes available, can appear to be inconsistency, and that seeming inconsistency is all too often an opening for quackery. Even when there isn’t an opening for quackery, it can cause a lot of confusion, and some physicians are resistant to changing their practice. It’s not for nothing that there’s an old joke in medical circles that no outdated medical practice completely dies until a new generation of physicians comes up through the ranks and the older physicians who believe in the practice either retire or die. There’s some truth in that. As I’ve said before, SBM is messy. In particular, the process of applying new science as the data become available to a problem that’s already as complicated as screening asymptomatic people for a disease in order to intervene earlier and, hopefully, save lives can be fraught with confusion and difficulties.
Certainly one of the most contentious issues in medicine over the last few years has been screening for various cancers. The cancers for which we most commonly subject populations to routine mass screening include prostate, colon, cervical, and breast cancer. Because I’m a breast cancer surgeon, I most frequently have to deal with breast cancer screening, which means, in essence, screening with mammography. The reason is that mammography is inexpensive, well-tested, and, in general, very effective.
Or so we thought. Last week, yet another piece of evidence to muddle the picture was published in the New England Journal of Medicine (NEJM) and hit the news media in outlets such as the New York Times (Mammograms’ Value in Cancer Fight at Issue).
Steve Novella whimsically opined on a recent phone call that irrationality must convey a survival advantage for humans. I’m afraid he has a point.
It’s much easier to scare people than to reassure them, and we have a difficult time with objectivity in the face of a good story. In fact, our brains seem to be hard-wired for bias – and we’re great at drawing subtle inferences from interactions, and making our observations fit preconceived notions. A few of us try to fight that urge, and we call ourselves scientists.
Given this context of human frailty, it’s rather unsurprising that the recent USPSTF mammogram guidelines resulted in a national media meltdown of epic proportions. Just for fun, and because David Gorski nudged me towards this topic, I’m going to review some of the key reasons why the drama was both predictable and preventable. (And for an excellent, and more detailed review of the science behind the kerfuffle, David’s recent SBM article is required reading.)
Preface: On issues such as this, I think it’s always good for me to emphasize my disclaimer, in particular:
Dr. Gorski must emphasize that the opinions expressed in his posts on Science-Based Medicine are his and his alone and that all writing for this blog is done on his own time and not in any capacity representing his place of employment. His views do not represent the opinions of his department, university, hospital, or cancer institute and should never be construed as such. Finally, his writings are meant as commentary only and are therefore not meant to be used as specific health care recommendations for individuals. Readers should consult their physicians for advice regarding specific health problems or issues that they might have.
Now, on to the post…
“Early detection saves lives.”
Remember how I started a post a year and a half ago with just this statement? I did it because that is the default assumption and has been so for quite a while. It’s an eminently reasonable-sounding concept that just makes sense. As I pointed out a year and a half ago, though, the question of the benefits of the early detection of cancer is more complicated than you think. Indeed, I’ve written several posts since then on the topic of mammography and breast cancer, the most recent of which I posted a mere two weeks ago. As studies have been released and my thinking on screening for breast cancer has evolved, regular readers have had a front row seat. Through it all, I hope I’ve managed to convey some of the issues involved in screening for cancer and just how difficult they are. How to screen for breast cancer, at what age to begin screening, and how to balance the benefits, risks, and costs are controversial issues, and that controversy has bubbled up to the surface into the mainstream media and public consciousness over the last year or so.
This week, all I can say, between downing slugs of ibuprofen for the headaches that some controversial new guidelines for breast cancer screening are causing many of us in the cancer field, is: “Here we go again.”