One thing about blogging once a week or so, compared to my other blogging gig, which is usually close to every day and occasionally more often, is that I really can’t cover everything I want to cover for this blog. Even more so than at my not-so-super-secret other blogging gig, I have to pass on topics that could be fodder for excellent, even awesome posts—or, self-congratulating hyperbole aside, at least reasonably interesting ones to the readers of this blog. When that happens, I can only hope that one of my co-bloggers picks up on it and gives the subject matter the treatment it cries out for. Or, sometimes, such subject matter just has to be dealt with elsewhere by me—or not at all. Even a hypercaffeinated blogger like myself has limits.
Sometimes, however, I actually get a second chance. In other words, I get a chance to revisit a topic that I passed by. Usually, this happens when something new happens that gives me an excuse to revisit the topic. So it was last week, when I was perusing the New York Times and came across an article by an oncology nurse named Theresa Brown. Her article was titled, appropriately enough, Hospitals Aren’t Hotels; why will become apparent in a moment. But first, let’s sample Brown’s article a bit, because it brings up an issue that is very pertinent to science-based medicine:
Today’s guest article, by Ragnvi E. Kjellin, DVM, and Olle Kjellin, MD, PhD, was submitted to a series of veterinary journals, but none of them wanted to publish it. ScienceBasedMedicine.org is pleased to do so.
Animal chiropractic is a relatively new phenomenon that many veterinarians may know too little about. In Sweden, chiropractic was licensed for humans in 1989, but not for animals. Chiropractors claim that their field is scientific, while others consider it to be a form of “alternative medicine” with an implausible and unsubstantiated theoretical foundation and little evidence of efficacy. Chiropractic is not taught in medical or veterinary schools.
Courses in “veterinary chiropractic” are offered by two companies in Germany. In their classes, veterinarians and human chiropractors are purposely mixed. A recent malpractice case in Sweden involved one of their students, a veterinarian who was accused of injuring a horse with chiropractic neck manipulation. That case led us to inquire into the underlying theory, clinical practices, and training of “veterinary chiropractors”.
Human chiropractic was founded in 1895 when D.D. Palmer, a grocer and magnetic healer with no medical training, decided that 95% of all diseases were due to vertebral subluxations that blocked the flow through the spinal nerves to all muscles and organs of the body, including the brain, eyes and ears. Adjusting subluxations supposedly allows the body to heal itself by “innate intelligence.” Over a century later, there is still no evidence that such subluxations or “intelligence” exists.
Mainstream medicine has always been skeptical of chiropractic [1]. Even some chiropractors have criticized the practices of their colleagues [2,3]. Several recent meta-analyses of chiropractic for various ailments [4-6] have concluded that musculoskeletal back and possibly neck pain may benefit from spinal manipulation therapy; but the results are not superior to other treatments, and there is no evidence of benefit for other ailments.
Considerable controversy surrounds the chiropractic field. It is therefore essential that veterinarians understand the facts about chiropractic before they consider practicing it, recommending it, or even condoning it for the animals they treat.
One of the most interesting aspects of working as a community-based pharmacist is the insight you gain into the actual effectiveness of different health interventions. You can see the most elaborate medication regimens developed, and then see what happens when the rubber really hits the road: when patients are expected to manage their own treatment plan. Not only do we get feedback from patients, there’s also a semi-objective measure we can use — the prescription refill history.
The clinical trial, from which we derive much of our evidence on treatments, is very much an idealized environment, and its relationship to the “real world” may be tenuous. Patients in trials are usually highly selected, typically those able to comply with the planned intervention. They may need to be free of any other diseases that could complicate evaluation. Patients who qualify for enrollment enter an environment where active monitoring is the norm and may be far more intense than in normal clinical practice. All of these factors mean that trial results may be meaningful, but not completely generalizable to the patient who may eventually be given the intervention. It’s for this reason we use the term “efficacy” to describe clinical trial results, while “effectiveness” is what we’re more interested in: those real-world effects that are far more relevant, yet more elusive to our decision-making. Efficacy measures a drug’s effect on an endpoint, estimating risk and benefit in a particular setting. Effectiveness adds in the real-world factors: tolerability, adherence to the regimen, and everything else that comes into play when real patients take a drug under less-than-ideal conditions. Consequently, effectiveness is a much more useful predictor of outcome than efficacy. Unfortunately, measurements of real-world effectiveness, such as a “phase 4” or real-world trial, are rarely conducted.
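The refill history mentioned above can be turned into exactly this kind of semi-objective adherence measure. Here is a minimal sketch using the medication possession ratio (MPR), a standard adherence metric; the function name and the refill records are invented for illustration, not taken from any real patient data:

```python
# Hedged sketch: estimating adherence from pharmacy refill history via the
# medication possession ratio (MPR). All data below is hypothetical.
from datetime import date


def medication_possession_ratio(refills, period_start, period_end):
    """MPR = total days' supply dispensed / days in the observation period."""
    days_supplied = sum(days for _fill_date, days in refills)
    period_days = (period_end - period_start).days
    return days_supplied / period_days


# Hypothetical patient: four 30-day fills over a 180-day window.
refills = [
    (date(2024, 1, 1), 30),
    (date(2024, 2, 5), 30),
    (date(2024, 3, 20), 30),
    (date(2024, 5, 10), 30),
]
mpr = medication_possession_ratio(refills, date(2024, 1, 1), date(2024, 6, 29))
print(f"MPR = {mpr:.2f}")  # 120 days' supply over 180 days ≈ 0.67
```

A patient who picked up 120 days’ supply over a 180-day window has an MPR of about 0.67, below the 0.80 cutoff often used in the adherence literature to flag non-adherence — the gap between efficacy in the trial and effectiveness at the pharmacy counter, made visible.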
A recent study looking at acupuncture for the prevention of migraine attacks demonstrates all of the problems with acupuncture and acupuncture research that we have touched on over the years at SBM. Migraine is one indication for which there seems to be some support among mainstream practitioners; in fact, the American Headache Society recently recommended acupuncture for migraines. Yet the evidence is simply not there to support this recommendation, which, in my opinion, reflects a failure to apply a science-based assessment of the clinical evidence.
The recent study, like many acupuncture studies, was problematic, and was also negative. It showed that acupuncture does not work for migraines, but of course also contains the seeds of denial for those who want to believe in acupuncture. From the abstract:
We performed a multicentre, single-blind randomized controlled trial. In total, 480 patients with migraine were randomly assigned to one of four groups (Shaoyang-specific acupuncture, Shaoyang-nonspecific acupuncture, Yangming-specific acupuncture or sham acupuncture [control]). All groups received 20 treatments, which included electrical stimulation, over a period of four weeks. The primary outcome was the number of days with a migraine experienced during weeks 5-8 after randomization. Our secondary outcomes included the frequency of migraine attack, migraine intensity and migraine-specific quality of life.
Compared with patients in the control group, patients in the acupuncture groups reported fewer days with a migraine during weeks 5-8; however, the differences between treatments were not significant (p > 0.05). There was a significant reduction in the number of days with a migraine during weeks 13-16 in all acupuncture groups compared with control (Shaoyang-specific acupuncture v. control: difference -1.06 [95% confidence interval (CI) -1.77 to -0.5], p = 0.003; Shaoyang-nonspecific acupuncture v. control: difference -1.22 [95% CI -1.92 to -0.52], p < 0.001; Yangming-specific acupuncture v. control: difference -0.91 [95% CI -1.61 to -0.21], p = 0.011). We found that there was a significant, but not clinically relevant, benefit for almost all secondary outcomes in the three acupuncture groups compared with the control group. We found no relevant differences between the three acupuncture groups.
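The distinction the abstract draws between “significant” and “clinically relevant” can be made concrete. In this sketch, a difference counts as statistically significant when its 95% confidence interval excludes zero, and as clinically relevant only when its magnitude exceeds a minimal clinically important difference (MCID); the 2-day MCID below is purely hypothetical, chosen for illustration, not a value from the trial:

```python
# Sketch: statistical significance vs. clinical relevance, using the
# between-group differences reported in the abstract (weeks 13-16).
# The MCID threshold is an assumption for illustration only.

HYPOTHETICAL_MCID = 2.0  # fewer migraine days; illustrative threshold, not the trial's

results = {
    # group: (difference in migraine days vs. control, 95% CI lower, 95% CI upper)
    "Shaoyang-specific": (-1.06, -1.77, -0.50),
    "Shaoyang-nonspecific": (-1.22, -1.92, -0.52),
    "Yangming-specific": (-0.91, -1.61, -0.21),
}

for group, (diff, ci_lo, ci_hi) in results.items():
    statistically_significant = ci_hi < 0  # CI lies entirely below zero
    clinically_relevant = abs(diff) >= HYPOTHETICAL_MCID
    print(f"{group}: significant={statistically_significant}, "
          f"relevant={clinically_relevant}")
```

Under this (assumed) threshold, every comparison comes out significant but not clinically relevant — a reduction of about one migraine day per month that a statistical test can detect may still be too small for a patient to notice, which is precisely the point the authors concede.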
I’ve already devoted more time to Protandim than it deserves. I’ve written about it twice on SBM: here and here. But I can’t resist covering a new Protandim study that not only serves as a bad example but also made me laugh.
Protandim is a mixture of 5 herbal supplements intended to upregulate the body’s own production of antioxidants. Its patent application claimed that it was useful to treat or prevent an astounding 126 diseases and medical conditions, from tinnitus to aging, from hemorrhoids to cancer. At the time of my last article, only one human study had been done. It found increases in blood test markers and interpreted them as a surrogate for increased antioxidant activity in the body, but did not even attempt to assess whether those increases corresponded to any measurable clinical benefit, for cancer or for anything else. I begged Protandim supporters not to ask me about it again until there were human clinical studies with meaningful outcomes.
Now there is finally a second human study, although still not one that qualifies as a clinical trial. Curiously, it is not listed on the company’s website. I wonder why? Perhaps because it showed Protandim didn’t work. Oops.
I don’t know what it is about the beginning of a year. I don’t know if it’s confirmation bias or real, but it sure seems that something big happens early every year in the antivaccine world. Consider. As I pointed out back in February 2009, in rapid succession Brian Deer reported that Andrew Wakefield not only had undisclosed conflicts of interest regarding the research he did for his now-infamous 1998 Lancet paper but had falsified data. Then, a couple of weeks later, the Special Masters weighed in, rejecting the claims of autism causation by vaccines made in three test cases about as resoundingly as is imaginable. Then, in February 2010, in rapid succession Andrew Wakefield, the hero of the antivaccine movement, was struck off the British medical register, saw his 1998 Lancet paper retracted by the editors, and was unceremoniously booted from his medical directorship of Thoughtful House, the autism quack clinic he helped to found after he fled the U.K. for the more friendly confines of Texas. Soon after that, the Special Masters weighed in again, rejecting the claims of autism causation by vaccines in the remaining test cases. Then, in January 2011, Brian Deer struck again, publishing more damaging revelations about Wakefield and referring to his work as “Piltdown medicine” in the British journal BMJ.
This year, things were different.
It all seemed so easy
In 2010 an article was published in the New England Journal of Medicine, “Preventing Surgical-Site Infections in Nasal Carriers of Staphylococcus aureus.” Patients were screened for Staphylococcus aureus (including MRSA, methicillin-resistant Staphylococcus aureus), and those who tested positive underwent a five-day perioperative decontamination procedure with chlorhexidine baths and an antibiotic, mupirocin, applied in the nose. The results were striking: before the intervention the infection rate was 7.7%; after, it was 3.4%. That is an impressive drop in surgical infections.
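Those before-and-after rates translate directly into the standard effect measures used to judge whether an intervention is worth implementing. A quick sketch, using only the 7.7% and 3.4% infection rates reported above:

```python
# Absolute risk reduction (ARR) and number needed to treat (NNT),
# computed from the infection rates quoted in the text.
before = 0.077  # infection rate before the decolonization protocol
after = 0.034   # infection rate after

arr = before - after  # absolute risk reduction: 4.3 percentage points
nnt = 1 / arr         # carriers treated per infection prevented

print(f"ARR = {arr:.3f}, NNT = {nnt:.1f}")
```

In other words, decolonizing roughly 23 carriers prevents about one surgical-site infection — exactly the kind of number you need in hand when arguing that the bang is worth the buck.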
One of the orthopedic groups approached us (us being the hospital administration, pharmacy, nursing and infection control, of which I am Chair) to implement the protocol in their patients, citing a similar study in an orthopedic population. Great. It should be an easy enough intervention. I should have known better, of course; long experience has continually demonstrated that what appears to be simple never is.
First was the question as to whether the study was applicable to our patients. Resources were going to be devoted to an intervention, so going forward we had to demonstrate that the bang would be worth the buck. These are financially lean times, with cutbacks and declining reimbursement, so every expenditure of time and money needs to be justified. In the bizarro accounting of health care, not every hospital administration will include money saved in the evaluation of interventions, only the money spent. I work in a hospital system with a remarkably strong commitment to patient safety and quality, so there was little worry on that point.
Earlier this week, a reader of ours wrote to Steve and me with a request:
First off, I just want to say thank you for everything you gentlemen do. I find that your sites are extremely helpful when trying to figure out what level of information is BS, and what is real.
In short, I was wondering if either of you two would be able to refer me to a scientific or pseudo-scientific article where the abstract completely misrepresents the article or the conclusion doesn’t fit the analysis/data. The reason I’m writing is that I’m currently in my third year at [REDACTED], working on my seminar paper so I can graduate. I decided to look at whether there is a reasonable fair use argument for the reproduction of an entire scientific article and in what instances prior precedent would allow it. Inherent in the argument is that a scientific paper can’t be properly excerpted without losing vital information (or that an abstract does not adequately describe the entire paper), so complete reproduction of the article is necessary to properly convey the point.
So…at the risk of being too blatant, I’ll just say that our readers are very informed and scientifically knowledgeable (excepting the odd troll, of course). Can you help another reader out and provide references that fit this reader’s request? I can think of one, but I don’t think it’s as blatant as what he has in mind. Please list your references below. Heck, we might even be able to get a post for SBM out of this if there are some interesting papers that fit the description above.
The Dietary Supplement Health and Education Act of 1994 (DSHEA) has been aptly described here at SBM as a travesty of a mockery of a sham. The supplement industry’s slick marketing, herb adulteration due to lack of pre-market controls, Quack Miranda Warning, and the many supplements for which claims of effectiveness failed to hold up under scientific scrutiny (e.g., antioxidants, collagen, glucosamine and hoodia) have been impaled on the sharp pens of SBM posters as well.
And we’re not the only ones. Investigations of the supplement industry (or, Big Supp) by reputable institutions such as the U.S. Government Accountability Office and the Institute of Medicine have resulted in numerous recommendations to improve dietary supplement safety by, in part, strengthening the FDA’s ability to effectively regulate the industry. Many of these have gone unheeded.
A recent federal law tried to ameliorate this situation by directing the FDA to take specific steps designed to increase supplement safety. Yet the ink of President Obama’s signature was barely dry when a bill was proposed in Congress to gut its provisions. In fact, there are now several bills pending in Congress that would actually weaken the government’s already puny regulatory authority over supplements. Yes, things could get even worse.
One consistent theme of SBM is that the application of science to medicine is not easy. We are often dealing with a complex set of conflicting information about a complex system that is difficult to predict. That is precisely why we need to take a thorough and rigorous approach to information in order to make reliable decisions.
The same is true when applied to an individual patient. Oftentimes we cannot make a single confident diagnosis based upon objective information. We have to be content with a diagnosis that is based partly on probability or on ruling out other possibilities. Sometimes we rely upon a so-called “therapeutic trial” to help confirm a diagnosis. If, for example, it is my clinical impression that a patient is probably having seizures, but I have no objective information to verify that (EEG and MRI scans are normal, which is often the case), I can help confirm the diagnosis by giving the patient an anti-seizure medication to see if that makes the episodes stop, or at least become less frequent. Placebo effects make therapeutic trials problematic, but if you have an objective outcome measure and a fairly dramatic response to treatment, that at least raises your confidence in the diagnosis.
We can apply the same basic principle on the population level. If a public health intervention is addressing the actual cause of one or more diseases, then we should see some objective markers of disease frequency or severity decrease over time. Putting fluoride in the public water supply decreased the incidence of tooth decay. Adding iodine to salt decreased the incidence of goiter. Fortifying milk with vitamin D decreased the incidence of rickets. However, removing thimerosal from the childhood vaccine schedule did not reduce the incidence of autism (or the rate of increase in autism diagnoses). That is because vitamin D deficiency causes rickets, but thimerosal (or the mercury it contains) does not cause autism.