This is perhaps the first real crack in the wall for the almost-universal use of the null hypothesis significance testing procedure (NHSTP). The journal, Basic and Applied Social Psychology (BASP), has banned the use of NHSTP and related statistical procedures from their journal. They had previously stated that use of these statistical methods was no longer required but could optionally be included. Now they have proceeded to a full ban.
The type of analysis being banned is often called a frequentist analysis, and we have been highly critical in the pages of SBM of overreliance on such methods. This is the iconic p-value, where p < 0.05 is generally considered to be statistically significant.
The process of hypothesis testing and rigorous statistical methods for doing so were worked out in the 1920s. Ronald Fisher developed the statistical methods, while Jerzy Neyman and Egon Pearson developed the process of hypothesis testing. They certainly deserve a great deal of credit for their role in crafting modern scientific procedures and making them far more quantitative and rigorous.
However, the p-value was never meant to be the sole measure of whether or not a particular hypothesis is true. Rather, it was meant only as a measure of whether or not the data should be taken seriously. Further, the p-value is widely misunderstood. The precise definition is:
The p-value is the probability of obtaining an effect equal to or more extreme than the one observed, presuming the null hypothesis of no effect is true.
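That definition can be made concrete with a minimal sketch: a permutation test directly estimates the probability of seeing a difference at least as extreme as the observed one when the null hypothesis of no effect is true. The data below are hypothetical numbers invented purely for illustration.

```python
import random

def permutation_p_value(group_a, group_b, n_permutations=10000, seed=0):
    """Two-sided permutation test: the p-value is the fraction of label
    shufflings whose mean difference is at least as extreme as the
    observed one -- i.e., P(effect >= observed | null is true)."""
    rng = random.Random(seed)
    observed = abs(sum(group_a) / len(group_a) - sum(group_b) / len(group_b))
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    extreme = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)  # shuffling labels simulates the null hypothesis
        diff = abs(sum(pooled[:n_a]) / n_a
                   - sum(pooled[n_a:]) / (len(pooled) - n_a))
        if diff >= observed:
            extreme += 1
    return extreme / n_permutations

# Hypothetical measurements from two groups drawn from the same process,
# so a large p-value is expected here.
a = [5.1, 4.9, 5.3, 5.0, 4.8]
b = [5.2, 5.0, 4.9, 5.1, 4.7]
print(permutation_p_value(a, b))
```

Note what the number does and does not say: it describes how surprising the data are under the null hypothesis, not the probability that any particular hypothesis is true.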
Dug the Dog strikes again, as he did three weeks ago. I had a couple of ideas for a post this week, but none of them were time-sensitive or timely. Then, over the weekend, I saw a post on the antivaccine crank blog Age of Autism by Dan “Where are the Autistic Amish” Olmsted entitled Weekly Wrap: Another Medical Practice with a Sane Vaccine Schedule – and No Autism. Given the tendency towards a—shall we say?—lack of accuracy in Olmsted’s previous reporting, it’s no surprise that he’d latch on to this study. I’m also seeing it appear around other antivaccine websites. I had gotten wind of it late last week, a few of my readers having sent it to me, but I hadn’t yet decided whether to blog about it. Then it appeared on AoA. Thanks, Dan.
So let’s see how this study is being spun by the antivaccine movement:
When we at Age of Autism talk about ending the epidemic, the “to do” list seems almost overwhelming – funding a vax-unvaxed study, getting mercury out of flu shots, proving the HepB shot is nuts, wresting control of the agenda from pharma, fixing Vaccine Court (this time in the good sense of “fix”), establishing that biomedical treatments help kids recover, and on and on.
But there’s a shortcut to all this, and it goes straight through pediatricians’ offices. The evidence is growing that where a sane alternative to the CDC’s bloated vaccine schedule is offered, and other reasonable changes adopted, autism is either non-existent or so infrequent that it doesn’t constitute an epidemic at all.
The latest example comes from Lynchburg, Va., and the pediatric practice of Dr. Elizabeth Mumper. She noticed a frightening rise in autism in the 1990s. Concerned that vaccines and other medical interventions might be playing a role – concerned in other words that SHE was playing a role — Mumper changed course.
Fewer vaccines. Fewer antibiotics. No Tylenol. Breast-feeding. Probiotics. Good, pesticide free diets.
Since then, hundreds more children have been seen in her practice, Advocates For Children. But no more autism.
Statistics is the essential foundation for science-based medicine. Unfortunately, it’s a confusing subject that invites errors and misunderstandings. We non-statisticians could all benefit from learning more about statistics as well as trying to get a better understanding of just how much we don’t know. Most of us are not going to read a statistics textbook, but the book Dicing with Death: Chance, Risk, and Health by Stephen Senn is an excellent place to start or continue our education. Statistics can be misused to lie with numbers, but when used properly it is the indispensable discipline that allows scientists:
…to translate information into knowledge. It tells us how to evaluate evidence, how to design experiments, how to turn data into decisions, how much credence should be given, to whom, to what, and why, how to reckon chances and when to take them.
Senn covers the whole field of statistics, including Bayesian vs. frequentist approaches, significance tests, life tables, survival analysis, the problematic but still useful meta-analysis, prior probability, likelihood, coefficients of correlation, the generalizability of results, multivariate analysis, ethics, equipoise, and a multitude of other useful topics. He includes biographical notes about the often rather curious statisticians who developed the discipline. And while he includes some mathematics out of necessity, he helpfully stars the more technical sections and chapters so they can be skipped by readers who find mathematics painful. The book is full of examples from real-life medical applications, and it is funny enough to hold the reader’s interest.
While we frequently on SBM target the worst abuses of science in medicine, it’s important to recognize that doing rigorous science is complex, and mainstream scientists often fall short of the ideal. In fact, one of the advantages of exploring pseudoscience in medicine is developing a sensitive detector for errors in logic, method, and analysis. Many of the errors we point out in so-called “alternative” medicine also crop up elsewhere in medicine – although usually to a much lesser degree.
It is not uncommon, for example, for a paper to fail to adjust for multiple comparisons – if you compare many variables, you have to take that into account when doing the statistical analysis; otherwise, the probability of a chance correlation increases.
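The arithmetic behind this is simple and worth seeing directly. Under a significance threshold of 0.05, the chance of at least one false positive across m independent comparisons is 1 − (1 − 0.05)^m, which grows quickly; the Bonferroni correction (testing each comparison at 0.05/m) is one common, if conservative, fix. A quick sketch:

```python
# With m independent tests of true null hypotheses at alpha = 0.05,
# the familywise error rate (chance of >= 1 false positive) is
# 1 - (1 - alpha)**m. The Bonferroni correction tests each
# comparison at the stricter threshold alpha / m.
alpha = 0.05
for m in (1, 5, 20):
    familywise = 1 - (1 - alpha) ** m
    bonferroni_threshold = alpha / m
    print(f"m={m:2d}  P(>=1 false positive)={familywise:.3f}  "
          f"per-test threshold={bonferroni_threshold:.4f}")
```

With 20 comparisons the chance of at least one spurious “significant” result is about 64 percent, which is why unadjusted fishing expeditions through many variables so reliably turn up chance correlations.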
I discussed just yesterday on NeuroLogica the misapplication of meta-analysis – in this case to the question of whether or not CCSVI correlates with multiple sclerosis. I find this very common in the literature, essentially a failure to appreciate the limits of this particular analysis tool.
Another recent example comes from the journal Nature Neuroscience (an article I learned about from Ben Goldacre over at the Bad Science blog). The paper, “Erroneous analyses of interactions in neuroscience: a problem of significance,” investigates the frequency of a subtle but important statistical error in high-profile neuroscience journals.
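The error in question is concluding that two effects differ because one is statistically significant and the other is not, instead of directly testing the difference between them. A minimal sketch with made-up effect estimates and standard errors (all numbers hypothetical, using a simple normal approximation with the 1.96 cutoff) shows how the two questions can come apart:

```python
import math

# Hypothetical effect estimates and standard errors for two conditions.
effect_a, se_a = 0.50, 0.24   # z ~ 2.08 -> p < 0.05 ("significant")
effect_b, se_b = 0.30, 0.24   # z ~ 1.25 -> p > 0.05 ("not significant")

z_a = effect_a / se_a
z_b = effect_b / se_b

# The valid comparison tests the *difference* between the effects,
# whose standard error combines both uncertainties:
se_diff = math.sqrt(se_a ** 2 + se_b ** 2)
z_diff = (effect_a - effect_b) / se_diff

print(z_a > 1.96, z_b > 1.96, z_diff > 1.96)  # True, False, False
```

Condition A clears the significance threshold and condition B does not, yet the difference between A and B is nowhere near significant, so the data provide no warrant for claiming the two conditions differ.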