Quackery in medicine takes many forms – use of bad science (pseudoscience), fraud, and reliance on mysticism are a few examples. Perhaps the most insidious form of dubious practice, however, is to use genuine and promising medical science to promote treatments that are simply not at the point of clinical application. New treatments, and especially new approaches to treatment, in medicine often take years or decades of research before we get to the point that we have sufficient clinical evidence of safety and effectiveness to apply the treatment in clinical practice.
One example of the premature promotion of an otherwise legitimate scientific medical treatment is the many dubious stem cell clinics promising cures for serious diseases. Stem cell science is real, but we are still in the long period of build-up when we are mostly doing basic and animal research. Human clinical trials are just beginning.
Another treatment approach that is being prematurely promoted by some is nutrigenomics. The claim is that by analyzing one’s genes a personalized regimen of specific nutrients can be developed to help those genes function at optimal efficiency. One website that promises “Genetics Based Integrative Medicine” contains this statement:
Nutrigenomics seeks to unravel these medical mysteries by providing personalized genetics-based treatment. Even so, it will take decades to confirm what we already understand; that replacing specific nutrients and/or chemicals in existing pathways allows more efficient gene expression, particularly with genetic vulnerabilities and mutations.
The money-quote is the phrase, “it will take decades to confirm what we already understand.” This is the essence of pseudoscience – using science to confirm what one already “knows.” This has it backwards, of course. Science is not used to “confirm” but to determine if a hypothesis is true or not.
There are times when the best-laid blogging plans of mice and men go awry, and this isn’t always a bad thing. As the day on which so many Americans indulge in mass consumption of tryptophan-laden meat in order to give thanks approached, I had tentatively planned on doing an update on Stanislaw Burzynski, given that he appears to have slithered away from justice yet again. Then what to my wondering eyes should appear in my e-mail inbox but news of a study that practically grabbed me by the collar, shook me, and demanded that I blog about it. As if to emphasize the point, suddenly e-mails started appearing from people who had seen stories about the study and, for reasons that I still can’t figure out after all these years, were interested in my take on it. Yes, I realize that I’m a breast cancer surgeon and therefore considered an expert on the topic of the study, mammography. I also realize that I’ve written about it a few times before. Even so, it never ceases to amaze me, even after all these years, that anyone gives a rodential posterior about what I think. Then I started getting a couple of e-mails from people at work, and I knew that Burzynski had to wait or that he would be relegated to my not-so-secret other blog (I haven’t decided yet).
As is my usual habit, I’ll set the study up by citing how it’s being spun in the press. My local hometown paper seems as good a place to begin as any, even though the story was reprinted from USA Today. The title of its coverage was Many women receiving unnecessary breast cancer treatment, study shows, with the article released the day before the study came out in the New England Journal of Medicine:
The American Academy of Family Physicians journal American Family Physician (AFP) has a feature called Journal Club that I’ve mentioned before. Three physicians examine a published article, critique it, discuss whether to believe it or not, and put it into perspective. In the September 15 issue, the Journal Club analyzed an article that critiqued the process for developing clinical practice guidelines. It discussed how two reputable organizations, the United States Preventive Services Task Force (USPSTF) and the American Academy of Pediatrics (AAP), looked at the same evidence on lipid screening in children and came to completely different conclusions and recommendations.
The AAP recommends testing children ages 2-10 for hyperlipidemia if they have risk factors for cardiovascular disease or a positive family history. The USPSTF determined that there was insufficient evidence to recommend routine screening. How can a doctor decide which recommendation to follow?
Over the years, I’ve written a lot about “personalized medicine,” mainly in the context of how the breakthroughs in genomic medicine and the data pouring in from the Cancer Genome Atlas are providing the raw information necessary for developing truly personalized cancer therapy. The problem, of course, is analyzing it and figuring out how to apply it. Another problem, of course, is developing the necessary targeted drugs to attack the pathways that are identified as being dysregulated in cancer cells. Oh, and there’s that pesky evolution of resistance to antitumor therapies. Indeed, most recently, the Cancer Genome Atlas is bearing fruit in breast cancer (a study that I’ve been meaning to blog about).
One problem with modeling the pathways based on next generation sequencing data and expression profiling is testing whether therapies predicted to work from these analyses actually do work, without testing potentially toxic drugs on patients. Cell culture is notoriously unreliable as a predictor. However, there is another way that’s intriguing. Unfortunately, as intriguing as it is, it has numerous problems, and it’s being prematurely marketed to patients. Although I had heard of this technique as a research tool before, I learned about its marketing to patients when I came across an article by Andrew Pollack in the New York Times entitled Seeking Cures, Patients Enlist Mice Stand-Ins. Basically, it’s about a trend in science and among patients to use custom, “personalized” mouse xenograft models in order to do “personalized” therapy:
One issue that keeps coming up time and time again for me is the issue of screening for cancer. Because I’m primarily a breast cancer surgeon in my clinical life, that means mammography, although many of the same issues come up time and time again in discussions of using prostate-specific antigen (PSA) screening for prostate cancer. Over time, my position regarding how to screen and when to screen has vacillated—er, um, evolved, yeah, that’s it—in response to new evidence, although the core, including my conclusion that women should definitely be screened beginning at age 50 and that it’s probably also a good idea to begin at age 40 but less frequently during that decade, has never changed. What does change is how strongly I feel about screening before 50.
My changes in emphasis and conclusions regarding screening mammography derive from my reading of the latest scientific and clinical evidence, but it’s more than just evidence that is in play here. Mammography, perhaps more than screening for any other disease, is affected by more than just science. Policies regarding mammographic screening are also based on value judgments, politics, and awareness and advocacy campaigns going back decades. To some extent, this is true of many common diseases (i.e., whether and how to screen for them are about more than just science), but in breast cancer arguably these issues are more intense. Add to that the seemingly eternal conflict between science and health communication: getting a message through requires a simple message, repeated over and over, while the messy science tells us that the benefits of mammography are confounded by issues such as lead time and length bias, which make it difficult indeed to tell whether mammography—or any screening test for cancer, for that matter—saves lives and, if it does, how many. Part of the problem is that mammography tends preferentially to detect the very tumors that are less likely to be deadly, and it’s not surprising that what I like to call the “mammography wars” periodically heat up. This is not a new issue, but rather a controversy that flares up from time to time. Usually this is a good thing.
And these wars just heated up a little bit again late last week.
The U.S. is widely known to have the highest health care expenditures per capita in the world, and not just by a little, but by a lot. I’m not going to go into the reasons for this so much, other than to point out that how to rein in these costs has long been a flashpoint for debate. Indeed, most of the resistance to the Patient Protection and Affordable Care Act (PPACA), otherwise known in popular parlance as “Obamacare,” has been fueled by two things: (1) resistance to the mandate that everyone has to buy health insurance, and (2) the parts of the law designed to control the rise in health care costs. This latter aspect of the PPACA has inspired cries of “Rationing!” and “Death panels!” Whenever science-based recommendations are made that suggest ways to decrease costs by reevaluating screening tests or decreasing various tests and interventions in situations where their use is not supported by scientific and clinical evidence, whether by the government or professional societies, you can count on it not being long before these cries go up, often from doctors themselves.
My perspective on this issue is that we already “ration” care. It’s just that government-controlled single payer plans and hybrid private-public universal health care plans use different criteria to ration care than our current system does. In the case of government-run health care systems, what will and will not be reimbursed is generally chosen based on evidence, politics, and cost, while in a system like the U.S. system what will and will not be reimbursed tends to be decided by insurance companies based on evidence leavened heavily with business considerations that involve appealing to the largest number of employers (who, let’s face it, are the primary customers of health insurance companies, not individuals insured by their health insurance plans). So what the debate is really about is, when boiled down to its essence, how to ration care and by how much, not whether care will be rationed. Ideally, how funding allocations are decided would be based on the best scientific evidence in a transparent fashion.
The study I’m about to discuss is anything but the best scientific evidence.
One thing about blogging once a week or so compared to my other blogging gig, which is usually close to every day, occasionally more often, is that I really can’t cover everything I want to cover for this blog. Even more so than at my not-so-super-secret other blogging gig, I have to pass on topics that could be fodder for what could be excellent to even awesome posts—or, self-congratulating hyperbole aside, at least reasonably interesting to the readers of this blog. When that happens, I can only hope that one of my co-bloggers picks up on it and gives the subject matter the treatment it cries out for. Or, sometimes, such subject matter just has to be dealt with elsewhere by me—or not at all. Even a hypercaffeinated blogger like myself has limits.
Sometimes, however, I actually get a second chance. In other words, I get a chance to revisit a topic that I passed by. Usually, this happens when something new happens that gives me an excuse to revisit the topic. So it was last week, when I was perusing the New York Times and came across an article by an oncology nurse named Theresa Brown. Her article was titled, appropriately enough, Hospitals Aren’t Hotels. Why will become very apparent in a moment. But first, let’s sample Brown’s article a bit, because it brings up an issue that is very pertinent to science-based medicine:
The New England Journal of Medicine (NEJM) is published on Thursdays. I mention this because this is one of the rare times where my owning Mondays on this blog tends to be a rather large advantage. Fridays are rotated between two or three different bloggers, and, as awesome as they are as writers, bloggers, and friends, they don’t possess the rabbit-like speed (and attention span) that I do that would allow me to see an article published in the NEJM on Thursday and get a post written about it by early Friday morning. This is, of course, a skill I have honed in my not-so-super-secret other blogging identity; so if I owned the Friday slot I could pull it off. However, the Monday slot is good enough because I’ll almost always have first crack at juicy studies and articles published in the NEJM before my fellow SBM partners in crime, unless Steve Novella managed to crank something out for his own personal blog on Friday, curse him.
My desire to be the firstest with the mostest when it comes to blogging about new articles notwithstanding, as I perused the table of contents of the NEJM this week, I was shocked to see an article that made me wonder whether the editors at NEJM might just be starting to “get it”—just a little bit—regarding “integrative” medicine. As our very own Mark Crislip put it a little more than a week ago:
If you integrate fantasy with reality, you do not instantiate reality. If you mix cow pie with apple pie, it does not make the cow pie taste better; it makes the apple pie worse.
Lately, though, I’ve been more fond of a version that doesn’t use fancy words like “instantiate”:
If you integrate fantasy with reality, you don’t make the fantasy more real. You temporarily make your reality seem more fantasy-based, but reality always wins out in the end.
The part about the cow pie needs no change, although I think ice cream works a bit better than apple pie. Your mileage may vary. Feel free to make up your own metaphor inspired by Mark’s.
In any case, in the Perspective section, I saw three articles about “patient-centered” care:
Much of the therapeutics I was taught as part of my pharmacy degree is now of historical interest only. New evidence emerges, and clinical practice changes. New treatments replace old ones – sometimes because they’re demonstrably better, and sometimes because marketing trumps evidence. The same change occurs in the over-the-counter section of the pharmacy, but there marketing seems to completely dominate. There continues to be no lack of interest in vitamin supplements, despite a growing body of evidence that suggests either no benefit, or possible harm, with many products. Yet the perception that these products are beneficial seems to continue to drive sales. Nowhere is this more apparent than in areas where it’s felt medical needs are not being met. I covered one aspect a few weeks ago in a post on IgG food intolerance blood tests, which are clinically useless but sold widely. The diagnosis of celiac disease came up in the comments, and it merits a more thorough discussion: particularly, the growing fears over gluten consumption. It reminds me of another dietary fad that seems to have peaked and faded: the fear of Candida.
It wasn’t until I left pharmacy school and started speaking with real patients that I learned we are all filled with Candida – yeast. Most chronic diseases could be traced back to candida, I was told. And it wasn’t just the customers who believed it. One particular pharmacy sold several different kits that purported to eliminate yeast in the body. But these didn’t contain antifungal drugs – most were combinations of laxatives and purgatives, combined with psyllium and bentonite clay, all promising to sponge up toxins and candida and restore you to an Enhanced State of Wellness™. There was a strict diet to be followed, too: no sugar, no bread – anything it was thought the yeast would consume. While you can still find these kits for sale, the enthusiasm for them seems to have waned. Whether consumers have caught on that these kits are useless, or have abandoned them because they don’t actually treat any underlying medical issues, isn’t clear.
The trend (which admittedly is hard to quantify) seems to have shifted, now that there’s a new dietary orthodoxy to question. Yeast is out. The real enemy is gluten: consume it at your own risk. There’s a growing demand for gluten labeling, and food producers are bringing out an expanding array of gluten-free (GF) foods. This is fantastic news for those with celiac disease, an immune reaction to gluten, where total gluten avoidance is essential. Only in the past decade or so has the true prevalence of celiac disease become clear: about 1 in 100 have the disease. With the more frequent diagnosis of celiac disease, awareness of gluten, and the harm it can cause to some, has soared. But going gluten free isn’t just for those with celiac disease. Tennis star Novak Djokovic doesn’t have celiac disease, but went on a GF diet. Headlines like “Djokovic switched to gluten-free diet, now he’s unstoppable on court” followed. Among children, there’s the pervasive but unfounded linkage of gluten consumption with autism, popularized by Jenny McCarthy and others. Even in the absence of any undesirable symptoms, gluten is being perceived as something to be avoided.
Please note: the following refers to routine physicals and screening tests in healthy, asymptomatic adults. It does not apply to people who have been diagnosed with diseases, who have any kind of symptoms or signs, or who are at particularly high risk of certain specific diseases.
Throughout most of human history, people have consulted doctors (or shamans or other supposed providers of medical care) only when they were sick. Not too long ago, the “if it ain’t broke don’t fix it” mindset changed. It became customary for everyone to have a yearly checkup with a doctor even if they were feeling perfectly well. The doctor would look in your eyes, ears and mouth, listen to your heart and lungs with a stethoscope, and poke and prod other parts of your anatomy. He would do several routine tests, perhaps a blood count, urinalysis, EKG, chest X-ray and TB tine test. There was even an “executive physical” based on the concept that more is better if you can afford it. Perhaps the need for maintenance of cars had an influence: the annual physical was analogous to the 30,000-mile checkup on your vehicle. The assumption was that this process would find and fix any problems and ensure that any disease process would be detected at an early stage, when earlier treatment would improve final outcomes. It would keep your body running like a well-tuned engine and possibly save your life.
We have gradually come to realize that the routine physical did little or nothing to improve health outcomes and was largely a waste of time and money. Today the emphasis is on identifying factors that can be altered to improve outcomes. We are even seeing articles in the popular press telling the public that no medical group advises annual checkups for healthy adults. If patients see their doctor only when they have symptoms, the doctor can take advantage of those visits to update vaccinations and any indicated screening tests.