Paul Offit has published a thoughtful essay in the most recent issue of the Journal of the American Medical Association (JAMA) in which he argues against funding research into complementary and alternative medicine (CAM). Offit is a leading critic of the anti-vaccine movement and has written popular books discrediting many of its claims, such as the disproven claim of a connection between certain vaccines or their ingredients and the risk of developing autism. In his article he mirrors points we have made here at SBM many times in the past.
Offit makes several salient points – the first being that the track record of research into CAM, mostly funded by the NCCAM, is pretty dismal.
“NCCAM officials have spent $375,000 to find that inhaling lemon and lavender scents does not promote wound healing; $750,000 to find that prayer does not cure AIDS or hasten recovery from breast-reconstruction surgery; $390,000 to find that ancient Indian remedies do not control type 2 diabetes; $700,000 to find that magnets do not treat arthritis, carpal tunnel syndrome, or migraine headaches; and $406,000 to find that coffee enemas do not cure pancreatic cancer.”
The reason for the poor track record is fairly simple to identify – by definition CAM includes treatments that are scientifically implausible, which means there is a low prior probability that they will work. If the treatments were scientifically plausible then they wouldn’t be alternative.
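The role of prior probability can be made concrete with a simple Bayesian sketch. The numbers below are illustrative assumptions, not figures from Offit's essay: they show why a single "positive" trial does little to rescue a treatment whose prior plausibility is very low.

```python
# Illustrative Bayesian update: probability a treatment actually works,
# given one positive trial. Power and alpha are conventional assumptions
# (80% power, 5% false-positive rate), not values from the article.

def posterior_probability(prior, power=0.8, alpha=0.05):
    """P(treatment works | positive trial result)."""
    true_positive = power * prior           # real effect, trial detects it
    false_positive = alpha * (1 - prior)    # no effect, trial is a fluke
    return true_positive / (true_positive + false_positive)

# A plausible drug (assumed prior ~50%) vs. an implausible therapy (~1%)
print(round(posterior_probability(0.50), 3))  # → 0.941
print(round(posterior_probability(0.01), 3))  # → 0.139
```

Under these assumptions, a positive trial of a plausible treatment leaves it very likely to be real, while the same result for a highly implausible treatment is still most likely a false positive.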
Hypnotherapy is the use of hypnosis as a medical intervention, usually for the treatment of pain and other subjective symptoms. It remains controversial, primarily because the evidence for its efficacy is not yet compelling, but also because it is poorly understood. This situation is not helped by the fact that it is often characterized as an “alternative” therapy, a label that can “ghettoize” an otherwise legitimate treatment modality.
What Is Hypnosis?
Any meaningful discussion of hypnosis, or any other phenomenon, needs to start with a specific, and hopefully operational, definition. If we cannot define hypnosis then it becomes impossible to meaningfully discuss it. The problem of definition plagues the science dealing with many so-called alternative therapies, such as acupuncture. Good science requires controlling for specific variables, so that we can determine which variables are having what effects. If we don’t know which variables are part of the operational definition of a specific therapy, then we cannot conduct proper studies or interpret their results.
For example, with acupuncture, in my opinion the only meaningful definition of this procedure is the placing of thin needles into specific acupuncture points in order to elicit a specific response. Research has shown, however, that acupuncture points do not exist, that placing needles at specific points is not associated with a specific outcome, and even that sticking needles through the skin (as opposed to just poking the skin superficially) does not correlate with outcome. When these variables are isolated they do not appear to contribute anything to efficacy, therefore one might conclude that acupuncture does not work. Research into acupuncture, however, often does not adequately isolate these variables from the therapeutic ritual that surrounds acupuncture, or even mixes in other modalities, such as electrical stimulation.
All scientists should be skeptics. Serious problems arise when a less-than-skeptical approach is taken to the task of discovery. Typically the result is flawed science, and for those significantly lacking in skepticism this can descend into pseudoscience and crankery. With the applied sciences, such as the clinical sciences of medicine and mental health, there are potentially immediate and practical implications as well.
Clinical decision making is not easy, and is subject to a wide range of fallacies and cognitive pitfalls. Clinicians can make the kinds of mental errors that we all make in our everyday lives, but with serious implications for the health of their patients. It is therefore especially important for clinicians to understand these pitfalls and avoid them – in other words, to be skeptics.
It is best to understand the clinical interaction as an investigation, at least in part. When evaluating a new patient, for example, there is a standard format to the “history of present illness,” past medical history, and the exam. But within this format the clinician is engaged in a scientific investigation, of sorts. Right from the beginning, when their patient tells them what problem they are having, they should be generating hypotheses. Most of the history taking will actually be geared toward testing those diagnostic hypotheses.
It has been a stunning triumph of marketing and propaganda that many people believe that treatments that are “natural” are somehow magically safe and effective (an error in logic known as the naturalistic fallacy). There is now widespread belief that herbal remedies are not drugs or chemicals because they are natural. The congressional allies of those who sell such products have even passed laws that embody this fallacy – removing herbal remedies from FDA oversight and regulating them more like food than drugs.
The other major fallacy spread by the “natural remedy” industry is that if a product has been used for a long time (hundreds or thousands of years), then it must also be safe and effective because it has stood the test of time (this fallacy is referred to as the argument from antiquity). This fallacy even has a specific regulatory term to invoke it – GRAS or “generally recognized as safe.” With food and food ingredients the FDA does not require evidence of safety if the ingredient is generally recognized as safe. This might make sense for foods that have been eaten by humans for a long time. Although the logic is still dubious, the approach is at least practical – the FDA could not take upon itself the task of proving that every food eaten by humans has no significant negative health consequences. GRAS is more a recognition of practicality than a guarantee of safety.
The Washington State Department of Health has released a statement stating that they are in the midst of a whooping cough epidemic, which will likely reach its highest levels in decades. So far this year there have been 640 cases, compared to 94 cases over the same time period last year. This is a dramatic increase. Whooping cough is a vaccine preventable disease, and so the resurgence of this infection raises questions about the efficacy of the vaccine program – specifically, to what extent is this increase due to vaccine refusal vs waning efficacy of the vaccine itself?
Whooping cough is caused by the Bordetella pertussis bacterium (a Gram-negative, aerobic coccobacillus, for those who are interested), which produces a toxin that paralyzes respiratory cells and causes inflammation. The result begins like an ordinary upper respiratory infection (a common cold) but then develops into a severe cough which can last for weeks. The name of the disease, whooping cough, comes from the sound made by the sudden inhalation after a sustained cough. The disease can be severe at any age, but is especially pernicious in infants, in whom it can cause apnea, or brief pauses in breathing. Among infants less than 1 year of age, half will need to be hospitalized and 1 in 100 will die.
The pertussis bacterium was first isolated in 1906 by Belgian scientists Jules Bordet and Octave Gengou. In 1939 researchers at the Michigan Department of Public Health demonstrated the efficacy of a vaccine against Bordetella pertussis. The vaccine reduced the incidence of whooping cough from 15.1 to 2.3% and reduced the severity of the illness in those who contracted it. In 1948 the whole cell pertussis vaccine was combined with vaccines for diphtheria and tetanus to make the DTP vaccine.
We frequently deal with fraud and quackery on this blog, because part of our mission is to inform the public about such things, and also because they are great examples for explaining the difference between legitimate and dubious medical claims. It is always our goal not just to give a pronouncement about this or that therapy, but to work through the logic and evidence so that our readers will learn how to analyze claims for themselves, or at least know when to be skeptical.
One skepticism-inducing red flag is any treatment that claims to treat a wide range of ailments, especially if those ailments are known to have different causes and pathophysiologies. Even claiming that one treatment might be effective against all cancer is dubious, because cancer is not one disease, but a category of disease. We are fond of pointing out that there are many types and stages of cancer, and each one requires individualized treatment. As an aside, it is ironic that CAM proponents often simultaneously tout how individualized their treatment approach is, but then claim that one product or treatment can cure all cancer. Meanwhile they criticize the alleged cookie-cutter approach of mainstream medicine, which is actually producing an increasingly individualized (and evidence-based) approach to such things as cancer.
In any case – my immediate response to any article or website claiming to treat most or all cancer is to be highly skeptical, but I reserve final judgment until after I read through the details. What kinds of evidence are being presented to support the claims, and what are the alleged mechanisms of action? Are those making the claims being cautious like a scientist should, or are they being promotional like a used-car salesman?
A recent study claiming a potential treatment for many types of cancer has been making the rounds. The title of the article being circulated is, One Drug to Shrink All Tumors. What made me take immediate interest in this article was that it was not on a dubious website, sensational tabloid, or even mainstream news outlet, but on the news section of the American Association for the Advancement of Science (AAAS) website. This is a report of serious medical research. The title, I suspect, is perhaps a bit more sensational than it otherwise would have been because of a geeky nod to the “one ring to rule them all” Lord of the Rings quote. Regardless of the source and the headline – what is the science here?
In 2009, during the “Obamacare” debate that was dominating the news, Atul Gawande wrote an article in the New Yorker that was widely praised and cited, including by President Obama himself. The article is a thought-provoking discussion of why some communities in the US have much higher health care costs than other regions. I took two main conclusions from the article.
The first is the success of the Mayo model – organizing care as a team approach. The idea here is to pool optimal expertise in the care of each patient. Greater expertise leads to “more thinking and less testing,” as Gawande puts it. I agree with this. It takes expertise to be comfortable not doing a test. Often testing is ordered because a physician does not feel secure in their diagnostic assessment.
The second main conclusion concerned the McAllen model. McAllen is a town in Texas with double the national average Medicare costs per capita. Gawande concluded that these increased costs are likely due to the culture of medical practice in the region, leading to more unnecessary care and procedures. He wrote:
The Medicare payment data provided the most detail. Between 2001 and 2005, critically ill Medicare patients received almost fifty per cent more specialist visits in McAllen than in El Paso, and were two-thirds more likely to see ten or more specialists in a six-month period. In 2005 and 2006, patients in McAllen received twenty per cent more abdominal ultrasounds, thirty per cent more bone-density studies, sixty per cent more stress tests with echocardiography, two hundred per cent more nerve-conduction studies to diagnose carpal-tunnel syndrome, and five hundred and fifty per cent more urine-flow studies to diagnose prostate troubles. They received one-fifth to two-thirds more gallbladder operations, knee replacements, breast biopsies, and bladder scopes. They also received two to three times as many pacemakers, implantable defibrillators, cardiac-bypass operations, carotid endarterectomies, and coronary-artery stents. And Medicare paid for five times as many home-nurse visits. The primary cause of McAllen’s extreme costs was, very simply, the across-the-board overuse of medicine.
Is that, however, a necessary conclusion from that data? The data support the conclusion that McAllen (the highest cost region) uses many more medical procedures than El Paso (the lowest cost region), but does that necessarily equate to “overuse” of medicine? Evidence does not support the conclusion that the population in McAllen is sicker than El Paso, but it is also possible that El Paso simply underdelivers care.
A recent study looking at acupuncture for the prevention of migraine attacks demonstrates all of the problems with acupuncture and acupuncture research that we have touched on over the years at SBM. Migraine is one indication for which there seems to be some support among mainstream practitioners. In fact the American Headache Society recently recommended acupuncture for migraines. Yet the evidence is simply not there to support this recommendation, which, in my opinion, reflects a failure to apply a science-based assessment of the clinical evidence.
The recent study, like many acupuncture studies, was problematic, and was also negative. It showed that acupuncture does not work for migraines, but of course also contains the seeds of denial for those who want to believe in acupuncture. From the abstract:
We performed a multicentre, single-blind randomized controlled trial. In total, 480 patients with migraine were randomly assigned to one of four groups (Shaoyang-specific acupuncture, Shaoyang-nonspecific acupuncture, Yangming-specific acupuncture or sham acupuncture [control]). All groups received 20 treatments, which included electrical stimulation, over a period of four weeks. The primary outcome was the number of days with a migraine experienced during weeks 5-8 after randomization. Our secondary outcomes included the frequency of migraine attack, migraine intensity and migraine-specific quality of life.
Compared with patients in the control group, patients in the acupuncture groups reported fewer days with a migraine during weeks 5-8, however the differences between treatments were not significant (p > 0.05). There was a significant reduction in the number of days with a migraine during weeks 13-16 in all acupuncture groups compared with control (Shaoyang-specific acupuncture v. control: difference -1.06 [95% confidence interval (CI) -1.77 to -0.5], p = 0.003; Shaoyang-nonspecific acupuncture v. control: difference -1.22 [95% CI -1.92 to -0.52], p < 0.001; Yangming-specific acupuncture v. control: difference -0.91 [95% CI -1.61 to -0.21], p = 0.011). We found that there was a significant, but not clinically relevant, benefit for almost all secondary outcomes in the three acupuncture groups compared with the control group. We found no relevant differences between the three acupuncture groups.
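The distinction the abstract draws between “significant” and “clinically relevant” can be sketched numerically. The standard error and the 2-day threshold below are my assumptions for illustration (the study does not report a minimal clinically important difference); only the -1.06-day point estimate comes from the abstract.

```python
# Statistical vs. clinical significance: a ~1-day reduction in migraine
# days can have a 95% CI excluding zero (statistically significant) while
# falling short of an assumed minimal clinically important difference.

def ci95_for_difference(diff, se):
    """95% confidence interval for a difference in means, given its SE."""
    margin = 1.96 * se
    return (diff - margin, diff + margin)

diff = -1.06   # fewer migraine days vs. control (from the abstract)
se = 0.36      # assumed standard error, roughly consistent with the CI
mcid = -2.0    # assumed minimal clinically important difference

low, high = ci95_for_difference(diff, se)
statistically_significant = high < 0    # CI excludes zero
clinically_relevant = diff <= mcid      # effect at least as large as MCID

print(round(low, 2), round(high, 2), statistically_significant,
      clinically_relevant)  # → -1.77 -0.35 True False
```

The point is that with enough patients, almost any nonzero difference becomes statistically significant; whether it matters to patients is a separate question.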
One consistent theme of SBM is that the application of science to medicine is not easy. We are often dealing with a complex set of conflicting information about a complex system that is difficult to predict. That is precisely why we need to take a thorough and rigorous approach to information in order to make reliable decisions.
The same is true when applied to an individual patient. Often we cannot make a single confident diagnosis based upon objective information. We have to be content with a diagnosis that is based partly on probability or on ruling out other possibilities. Sometimes we rely upon a so-called “therapeutic trial” to help confirm a diagnosis. If, for example, it is my clinical impression that a patient is probably having seizures, but I have no objective information to verify that (EEG and MRI scans are normal, which is often the case), I can help confirm the diagnosis by giving the patient an anti-seizure medication to see if that makes the episodes stop, or at least become less frequent. Placebo effects make therapeutic trials problematic, but if you have an objective outcome measure and a fairly dramatic response to treatment, that at least raises your confidence in the diagnosis.
We can apply the same basic principle on the population level. If a public health intervention is addressing the actual cause of one or more diseases, then we should see some objective markers of disease frequency or severity decrease over time. Putting fluoride in the public water supply decreased the incidence of tooth decay. Adding iodine to salt decreased the incidence of goiter. Fortifying milk with vitamin D decreased the incidence of rickets. However, removing thimerosal from the childhood vaccine schedule did not reduce the incidence of autism (or the rate of increase in autism diagnoses). That is because vitamin D deficiency causes rickets, but thimerosal (or the mercury it contains) does not cause autism.
I have previously written about psychomotor patterning – an alleged treatment for developmental delay that was developed in the 1960s. The idea has its roots in the notion that ontogeny recapitulates phylogeny – that as we develop we progress through evolutionary stages. This idea, now largely discredited, was extended to the hypothesis that the neurological development of developmentally delayed children could be enhanced if they were made to progress through evolutionary stages. Children were put through hours a day of passive crawling, for example, in the belief that this would coax the brain into a normal developmental pathway. The treatment was studied extensively in the 1970s, and the studies showed that it did not work.
However, those who developed this treatment, Doman and Delacato, did not want to give up on their claim to fame simply because it didn’t work and the underlying concepts were flawed. For the last 40 years they have continued to offer the Doman-Delacato treatment for all forms of mental retardation, surviving on the fringe, all but forgotten by mainstream medicine (except by those with an interest in pathological science).
I was recently asked to look into the claims for a disorder known as pyroluria, and what I found was very similar to the history of psychomotor patterning. There was some legitimate scientific interest in this alleged condition in the 1960s. Studies in the 1970s, however, discredited the hypothesis and it was discarded as a failed hypothesis. The published literature entirely dries up by the mid 1970s. But the originators of the idea did not give up, and continue to promote the idea of pyroluria to this day.