Dug the Dog strikes again, as he did three weeks ago. I had a couple of ideas for a post this week, but none of them were particularly timely. Then, over the weekend, I saw a post on the antivaccine crank blog Age of Autism by Dan “Where are the Autistic Amish” Olmsted entitled Weekly Wrap: Another Medical Practice with a Sane Vaccine Schedule – and No Autism. Given the tendency towards a—shall we say?—lack of accuracy in Olmsted’s previous reporting, it’s no surprise that he’d latch on to this study. I’m also seeing it appear around other antivaccine websites. I had gotten wind of it late last week, after a few of my readers sent it to me, but I hadn’t yet decided whether to blog about it. Then it appeared on AoA. Thanks, Dan.
So let’s see how this study is being spun by the antivaccine movement:
When we at Age of Autism talk about ending the epidemic, the “to do” list seems almost overwhelming – funding a vax-unvaxed study, getting mercury out of flu shots, proving the HepB shot is nuts, wresting control of the agenda from pharma, fixing Vaccine Court (this time in the good sense of “fix”), establishing that biomedical treatments help kids recover, and on and on.
But there’s a shortcut to all this, and it goes straight through pediatricians’ offices. The evidence is growing that where a sane alternative to the CDC’s bloated vaccine schedule is offered, and other reasonable changes adopted, autism is either non-existent or so infrequent that it doesn’t constitute an epidemic at all.
The latest example comes from Lynchburg, Va., and the pediatric practice of Dr. Elizabeth Mumper. She noticed a frightening rise in autism in the 1990s. Concerned that vaccines and other medical interventions might be playing a role – concerned in other words that SHE was playing a role — Mumper changed course.
Fewer vaccines. Fewer antibiotics. No Tylenol. Breast-feeding. Probiotics. Good, pesticide free diets.
Since then, hundreds more children have been seen in her practice, Advocates For Children. But no more autism.
Statistics is the essential foundation for science-based medicine. Unfortunately, it’s a confusing subject that invites errors and misunderstandings. We non-statisticians could all benefit from learning more about statistics as well as trying to get a better understanding of just how much we don’t know. Most of us are not going to read a statistics textbook, but the book Dicing with Death: Chance, Risk, and Health by Stephen Senn is an excellent place to start or continue our education. Statistics can be misused to lie with numbers, but when used properly it is the indispensable discipline that allows scientists:
…to translate information into knowledge. It tells us how to evaluate evidence, how to design experiments, how to turn data into decisions, how much credence should be given to whom to what and why, how to reckon chances and when to take them.
Senn covers the whole field of statistics, including Bayesian vs. frequentist approaches, significance tests, life tables, survival analysis, the problematic but still useful meta-analysis, prior probability, likelihood, coefficients of correlation, the generalizability of results, multivariate analysis, ethics, equipoise, and a multitude of other useful topics. He includes biographical notes about the often rather curious statisticians who developed the discipline. And while he includes some mathematics out of necessity, he helpfully stars the more technical sections and chapters so they can be skipped by readers who find mathematics painful. The book is full of examples from real-life medical applications, and it is funny enough to hold the reader’s interest.
While we at SBM frequently target the worst abuses of science in medicine, it’s important to recognize that doing rigorous science is complex and mainstream scientists often fall short of the ideal. In fact, one of the advantages of exploring pseudoscience in medicine is developing a sensitive detector for errors in logic, method, and analysis. Many of the errors we point out in so-called “alternative” medicine also crop up elsewhere in medicine – although usually to a much lesser degree.
It is not uncommon, for example, for a paper to fail to adjust for multiple comparisons – if you compare many variables, you have to take that into account in the statistical analysis; otherwise the probability of finding a chance correlation is increased.
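The inflation is easy to demonstrate with a quick simulation. The sketch below (hypothetical, not from any study discussed here) generates pure noise – p-values drawn uniformly, as they are under a true null hypothesis – and shows how often at least one of 20 comparisons comes up “significant” at p < 0.05, with and without a Bonferroni correction:

```python
# Simulate the multiple-comparisons problem with pure noise.
# Under a true null hypothesis, p-values are uniform on [0, 1],
# so random.random() stands in for a p-value.
import random

random.seed(42)

n_trials = 1000       # simulated "studies"
n_comparisons = 20    # variables compared per study
alpha = 0.05

# Uncorrected: a study "finds something" if ANY of its 20
# null comparisons happens to dip below 0.05.
uncorrected_hits = sum(
    any(random.random() < alpha for _ in range(n_comparisons))
    for _ in range(n_trials)
)

# Bonferroni correction: divide alpha by the number of comparisons.
corrected_hits = sum(
    any(random.random() < alpha / n_comparisons for _ in range(n_comparisons))
    for _ in range(n_trials)
)

print(uncorrected_hits / n_trials)  # ~0.64, i.e. 1 - 0.95**20
print(corrected_hits / n_trials)    # ~0.05, back to the nominal rate
```

Even though there is nothing to find, roughly two-thirds of the simulated studies report a “significant” result when no correction is applied – exactly the trap an unwary paper falls into.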
Just yesterday on NeuroLogica I discussed the misapplication of meta-analysis – in this case to the question of whether CCSVI correlates with multiple sclerosis. This error is very common in the literature: essentially a failure to appreciate the limits of this particular analysis tool.
Another recent example comes from the journal Nature Neuroscience (an article I learned about from Ben Goldacre over at the Bad Science blog). The paper, “Erroneous analyses of interactions in neuroscience: a problem of significance,” investigates the frequency of a subtle but important statistical error in high-profile neuroscience journals.