A new study published in PLOS Biology looks at the potential magnitude and effect of publication bias in animal trials. Essentially, the authors conclude that there is a significant file-drawer effect – failure to publish negative studies – with animal studies and this impacts the translation of animal research to human clinical trials.
SBM is greatly concerned with the methodology of medical science. On one level, the methods of individual studies need to be closely analyzed for rigor and bias. But we also go to great pains to dispel the myth that individual studies can tell us much about the practice of medicine.
Reliable conclusions come from interpreting the literature as a whole, and not just individual studies. Further, the whole of the literature is greater than the sum of individual studies – there are patterns and effects in the literature itself that need to be considered.
A panel of bloggers from SBM will be taking part in the Northeast Conference on Science and Skepticism – NECSS 2010, April 17th, beginning at 10:00 AM in New York.
There will be a 70-minute panel discussion moderated by John Snyder and featuring David Gorski, Kimball Atwood, Val Jones, and myself – Steven Novella. The topic of discussion will be the infiltration of pseudoscience into academic medicine.
This will be part of a full day of science featuring other excellent speakers, including James Randi, D. J. Grothe, Steve Mirsky, George Hrab, and Julia Galef. There will also be a live recording of the wildly popular science podcast, The Skeptics’ Guide to the Universe.
Go to www.NECSScon.org to register.
Last week the Wall Street Journal ran a particularly bad article by Melinda Beck about acupuncture. While there was token skepticism (from Edzard Ernst, of course, who is the media’s go-to expert for CAM), the article credulously reported the marketing hype of acupuncture proponents.
Toward the end of the article, Beck admits that “some critics” claim that acupuncture provides nothing more than a placebo effect, but this was followed by the usual canard:
“I don’t see any disconnect between how acupuncture works and how a placebo works,” says radiologist Vitaly Napadow at the Martinos center. “The body knows how to heal itself. That’s what a placebo does, too.”
That is a bold claim, and very common among CAM proponents, especially acupuncturists. As the data increasingly shows that acupuncture (and other implausible treatments) provides no benefit beyond placebo, we hear the special pleading that placebos work also.
But is that true? It turns out there is a literature on the placebo effect itself, and the evidence suggests that placebos generally do not work.
A question that arises often when discussing the optimal role of science in medicine is the precise role of plausibility, or prior probability. This is, in fact, the central concept that separates (for practical if not philosophical reasons) science-based medicine (SBM) from evidence-based medicine (EBM).
The concept featured prominently in the debate between myself and Dr. Katz at the recent Yale symposium that Kimball Atwood recently discussed. Dr. Katz’s treatment of the topic was fairly typical of CAM proponents, and consisted of a number of straw men derived from a false dichotomy, which I will describe in detail below.
I also recently received (I think by coincidence) the following question from an interested SBM reader:
What would Science Based Medicine do if H. pylori was not known, but a study showed that antibiotics given to patients with stomach ulcers eliminated symptoms? I assume that SBM wouldn’t dismiss it outright saying that it couldn’t possibly be helping because antibiotics don’t reduce stomach acid. I assume a SBM approach would do further studies trying to discover why antibiotics work. But, in the meantime, would a SBM practitioner refuse to give antibiotics to patients because he doesn’t have a scientific explanation as to why it works?
This is the exact type of scenario raised by David Katz during our discussion. He claimed that strict adherence to the principles of SBM would deprive patients of effective treatments, simply because we did not understand how they work. This is a pernicious straw man that significantly misconstrues the nature of plausibility and its relationship to the practice of medicine.
One of the basic principles of science-based medicine is that a single study rarely tells us much about any complex topic. Reliable conclusions are derived from an assessment of basic science (i.e., prior probability or plausibility) and a pattern of effects across multiple clinical trials. However, the mainstream media generally report each study as if it is a breakthrough or the definitive answer to the question at hand. If the many e-mails I receive asking me about such studies are representative, the general public takes a similar approach, perhaps due in part to the media coverage.
I generally do not plan to report on each study that comes out as that would be an endless and ultimately pointless exercise. But occasionally focusing on a specific study is educational, especially if that study is garnering a significant amount of media attention. And so I turn my attention this week to a recent study looking at acupuncture in major depression during pregnancy. The study concludes:
The short acupuncture protocol demonstrated symptom reduction and a response rate comparable to those observed in standard depression treatments of similar length and could be a viable treatment option for depression during pregnancy.
The House of Commons Science and Technology Committee (STC) has released a report, Evidence Check 2: Homeopathy, in which they recommend that the NHS stop funding homeopathy. The report is a rare commodity – a thoroughly science-based political document.
The committee went beyond simply stating that homeopathy does not work, and revealed impressive insight into the ethical, practical, and scientific problems caused by NHS support for an implausible and ineffective pseudoscience.
The STC formed in October of 2009, and this is their second report. The goals of the STC itself are a significant step forward:
The purpose of Evidence Check is to examine how the Government uses evidence to formulate and review its policies.
We certainly can use more of that.
This week on Science-Based Medicine I wrote an article about a new study looking at the onset of autism symptoms, showing that most children who will later be diagnosed with autism will show clear signs of autism at 12 months of age, but not 6 months. This is an interesting study that sheds light on the natural course of autism. I also discussed the implications of this study for the claim that autism is caused by vaccines.
Unfortunately, I made a statement that is simply wrong. I wrote:
Many children are diagnosed between the age of 2 and 3, during the height of the childhood vaccine schedule.
First, this was a vague statement – not quantitative, and was sloppily written, giving a different impression from the one I intended. I make these kinds of errors from time to time – that is one of the perils of daily blogging about technical topics, and posting blogs without editorial or peer-review. Most blog readers understand this, and typically I will simply clarify my prose or correct mistakes when they are pointed out.
However, since I often write about topics that interest dedicated ideologues who seek to sow anti-science and confusion, sometimes these errors open the door for the flame warriors. That is what happened in this case.
Understanding the natural history of a disease provides an important framework. It is not only critical for prognosis, but also informs diagnostic and screening strategies, is important for assessing interventions, and provides clues to causation.
There has been much debate about the early course of autism, specifically the earliest age at which autism may be detected. At present, scientific evidence suggests that autism is predominantly genetic, and so researchers expect that there may be early signs of autism even in infancy. Traditionally, however, autism is not diagnosed until age 2-3, when parents bring their children to medical attention, or when signs are detected on routine well-child visits or in day care.
Retrospective studies, largely involving review of home movies, have suggested that autism can be diagnosed as early as 6-12 months, suggesting that parental report is not an adequate screen because subtle signs are hard to detect without rigorous observation.
Surgeon and journalist, Atul Gawande, is getting quite a bit of deserved press and blog attention for his new book, The Checklist Manifesto: How to Get Things Right. The premise of his book is simple – checklists are an effective way to reduce error. But behind that simple message are some powerful ideas with significant implications for the culture of medicine.
One of the biggest ideas is that medicine has culture – a way of doing things and thinking about problems that subconsciously pervades the practice of medicine. This idea is not new to Gawande, but he puts it to powerful practice.
The Humble Checklist
Gawande tells not only the story of the checklist but of his personal experience designing and implementing a surgery checklist as part of a WHO project to reduce morbidity and mortality from surgery. He borrowed the idea from other industries, like aviation, that use checklists to operate complex machinery without forgetting to perform each little, but vitally important, step.
In 1998 Andrew Wakefield and 11 other co-authors published a study with the unremarkable title: Ileal-lymphoid-nodular hyperplasia, non-specific colitis, and pervasive developmental disorder in children. Such a title would hardly grab a science journalist’s attention, but the small study sparked widespread hysteria about a possible connection between the measles-mumps-rubella (MMR) vaccine and autism spectrum disorder (ASD).
The study itself has not stood the test of time. The results could not be replicated by other labs. A decade of subsequent research has sufficiently cleared the MMR vaccine of any connection to ASD. The lab used to search for measles virus in the guts of the study subjects has been shown to have used flawed techniques, resulting in false positives (from the Autism Omnibus testimony, and here is a quick summary). There does not appear to be any association between autism and a GI disorder.
But it’s OK to be wrong in science. There is no expectation that every potential finding will turn out to be true – in fact it is expected that most new findings will eventually be found to be false. That’s the nature of investigating the unknown. No harm, no foul.