Statistics is hard, often counterintuitive, and burdened with esoteric mathematical equations. Statistics classes can be boring and demanding; students might be tempted to call the subject “Sadistics.” Good statistics are essential to good research; unfortunately, many scientists and even some statisticians are doing statistics wrong. Statistician Alex Reinhart has written a helpful book, Statistics Done Wrong: The Woefully Complete Guide, that every researcher and everyone who reads research would benefit from reading. The book contains a few graphs but is blissfully equation-free. It doesn’t teach how to calculate anything; it explains blunders in recent research and how to avoid them.
Inadequate education and self-deception
Most of us have little or no formal education in statistics and have picked up some knowledge in a haphazard fashion as we went along. Reinhart offers some discouraging facts. He says a doctor who takes one introductory statistics course would only be able to understand about a fifth of the articles in The New England Journal of Medicine. On a test of statistical methods commonly used in medicine, medical residents averaged less than 50% correct, medical school faculty averaged less than 75% correct, and even the experts who designed the study goofed: one question offered only a choice of four incorrect definitions.
There are plenty of examples of people deliberately lying with statistics, but that’s not what this book is about. It is about researchers who have fooled themselves by making errors they didn’t realize they were making. Reinhart cites Hanlon’s razor: “never attribute to malice that which is adequately explained by incompetence.” He says even conclusions based on properly done statistics can’t always be trusted, because it is trivially easy to “torture the data until it confesses.”
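Part of why data-torture is so easy is simple arithmetic. Here is a quick sketch (my own illustration, not from Reinhart’s book): if a researcher tests many independent hypotheses, each at the conventional 5% significance level, the chance of at least one spurious “significant” finding grows rapidly with the number of tests.

```python
# Illustrative only, not from the book: the probability of at least one
# false positive when testing k independent hypotheses, each with a 5%
# chance of a spurious "significant" result under the null.

def prob_at_least_one_false_positive(k, alpha=0.05):
    # Chance that *none* of k independent tests is a false positive is
    # (1 - alpha)**k, so the complement is the chance of at least one.
    return 1 - (1 - alpha) ** k

for k in (1, 5, 20, 100):
    print(k, round(prob_at_least_one_false_positive(k), 3))
```

With 20 tests the chance of at least one spurious hit is already about 64%, which is why unplanned multiple comparisons so easily produce “findings” out of noise.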
Most scientists I know get a chuckle out of the Journal of Irreproducible Results (JIR), a humor journal that often parodies scientific papers. Back in the day, we used to chuckle at articles like “An Eye for an Eye for an Arm and a Leg: Applied Dysfunctional Measurement” and “A Double Blind Efficacy Trial of Placebos, Extra Strength Placebos and Generic Placebos.” Unfortunately, these days, reporting on science is giving the impression that the JIR is a little too close to the truth, at least when it comes to reproducibility, so much so that the issue even has its own name and Wikipedia entry: Replication (or reproducibility) crisis. It’s a topic I had been meaning to write about again for a while. Fortunately, a recent survey published in Nature under the somewhat clickbaity title “1,500 scientists lift the lid on reproducibility” finally prodded me to look into this question again. Before I get to the survey itself, though, I can’t help but do my usual pontificating to provide a bit of background.
The greatest strength of science is that it is self-critical. Scientists are not only critical of specific claims and the evidence for those claims, but they are critical of the process of science itself. That criticism is constructive – it is designed to make the process better, more efficient, and more reliable.
One aspect of the process of science that has received intense criticism in the last few years is an over-reliance on P-values, a specific statistical method for analyzing data. This may seem like a wonky technical point, but it actually cuts to the heart of science-based medicine. In a way the P-value is the focal point of much of what we advocate for at SBM.
Recently the American Statistical Association (ASA) put out a position paper in which they specifically warn against misuse of the P-value. This is the first time in their 177 years of existence they have felt the need to put out such a position paper. The reason for this unprecedented act was their feeling that abuse of the P-value is taking the practice of science off course, and a much-needed course correction is overdue.
Increasingly, people are accessing health care information online in order to make decisions about their own health. A 2010 Pew poll found that 80% of internet users have looked online for health care information. This presents a huge potential benefit, but also a significant risk.
Daniel Levitin talks about the need for public information literacy, something we also discuss frequently here on SBM. If you are accessing the internet to inform your health care decisions, then you need to know how to determine the legitimacy and trustworthiness of the websites you are visiting. There is a big difference between NaturalNews (a crank site full of misinformation and conspiracy theories) and Nature News (an outlet for one of the most prestigious science journals in the world).
Even when you can discriminate between good and bad health information websites, the challenge remains to properly interpret the scientific information to which you now have access.
October is National Chiropractic Health Month (NCHM) and chiropractors can’t resist the opportunity to overstate, obfuscate, and prevaricate in celebration.
They do this in the face of some unfortunate (for them) statistics revealed by a recent Gallup poll. The poll was paid for by Palmer College of Chiropractic as part of an effort to increase the chiropractic share of the health care pie. (There is also a secondary analysis of the poll in the Journal of Manipulative and Physiological Therapeutics.) We’ll get to those stats in a few minutes.
But first, in celebration of NCHM, the American Chiropractic Association (ACA) has produced a set of six graphics chiropractors can download and display. Four of them fudge on the facts. Let’s take a look at these graphics, compare them to the evidence cited in support of their claims, and see where the ACA went astray. (The ACA also hosted a twitter chat yesterday with the hashtag #PainFreeNation.)
The study cited as evidence for this graphic actually compared manual thrust manipulation (MTM) and mechanical-assisted manipulation (MAM) to each other, as well as comparing manipulation to usual medical care (UMC). MAM, such as the Activator Method, is the second most common manipulation technique used by American chiropractors, is increasing in popularity among them, and is touted as a safe and effective alternative to MTM. Yet this study found that MTM was more effective (at 4 weeks) than MAM, and that MAM had no advantage over UMC. But you don’t see that in this graphic.
This is perhaps the first real crack in the wall for the almost-universal use of the null hypothesis significance testing procedure (NHSTP). The journal Basic and Applied Social Psychology (BASP) has banned the use of NHSTP and related statistical procedures from its pages. The editors had previously stated that use of these statistical methods was no longer required but could optionally be included. Now they have proceeded to a full ban.
The type of analysis being banned is often called a frequentist analysis, and we have been highly critical in the pages of SBM of overreliance on such methods. This includes the iconic p-value, where p < 0.05 is generally considered to be statistically significant.
The process of hypothesis testing and rigorous statistical methods for doing so were worked out in the 1920s. Ronald Fisher developed the statistical methods, while Jerzy Neyman and Egon Pearson developed the process of hypothesis testing. They certainly deserve a great deal of credit for their role in crafting modern scientific procedures and making them far more quantitative and rigorous.
However, the p-value was never meant to be the sole measure of whether or not a particular hypothesis is true. Rather it was meant only as a measure of whether or not the data should be taken seriously. Further, the p-value is widely misunderstood. The precise definition is:
The p-value is the probability of obtaining an effect equal to or more extreme than the one observed, presuming the null hypothesis of no effect is true.
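This definition can be made concrete with a small simulation (my own illustration, with made-up numbers; nothing here comes from the articles discussed). A permutation test computes a p-value directly from the definition: if the null hypothesis of "no effect" is true, group labels are interchangeable, so we shuffle the labels many times and count how often a difference at least as extreme as the observed one arises by chance alone.

```python
import random

# A hypothetical two-sample permutation test. Under the null hypothesis
# of no effect, the group labels carry no information, so shuffled
# relabelings of the pooled data show what "chance alone" produces.

def permutation_p_value(a, b, n_perm=10000, seed=0):
    rng = random.Random(seed)
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = list(a) + list(b)
    extreme = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        pa, pb = pooled[:len(a)], pooled[len(a):]
        diff = abs(sum(pa) / len(pa) - sum(pb) / len(pb))
        # Count permutations at least as extreme as the observed difference.
        if diff >= observed:
            extreme += 1
    return extreme / n_perm

# Invented example data: two small groups of measurements.
treatment = [5.1, 4.9, 6.2, 5.8, 5.5]
control = [4.8, 5.0, 4.7, 5.2, 4.9]
print(permutation_p_value(treatment, control))
```

Note what the result does and does not say: it is the probability of data this extreme given no effect, not the probability that the hypothesis is true given the data. Conflating the two is exactly the misunderstanding the ASA warns about.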
Ed: Doctors say he’s got a 50/50 chance at living.
Frank: Well, there’s only a 10% chance of that.
There are several motivations for choosing a topic about which to write. One is to educate others about a topic about which I am expert. Another motivation is amusement; some posts I write solely for the glee I experience in deconstructing a particular piece of nonsense. Another motivation, and the one behind this entry, is to educate me.
I hope that the process of writing this entry will help me to better understand a topic with which I have always had difficulties: statistics. I took, and promptly dropped, statistics four times in college. Once they got past the bell-shaped curve derived from flipping a coin, I just could not wrap my head around the concepts presented. I think the odds are against me, but I am going to attempt, and likely fail, to discuss some aspects of statistics that I want to understand better. Or, as is more likely, learn them for the umpteenth time, only to forget or confuse them again in the future.
Suffer the lab rats
Elsevier has announced that they are retracting the infamous Seralini study which claimed to show that GMO corn causes cancer in laboratory rats. The retraction comes one year after the paper was published, and seems to be a response to the avalanche of criticism the study has faced. This retraction is to the anti-GMO world what the retraction of the infamous Wakefield Lancet paper was to the anti-vaccine world.
The Seralini paper was published in November 2012 in Food and Chemical Toxicology. It was immediately embraced by anti-GMO activists, and continues to be often cited as evidence that GMO foods are unhealthy. It was also immediately skewered by skeptics and more objective scientists as a fatally flawed study.
The study looked at male and female rats of the Sprague-Dawley strain – a strain with a known high baseline incidence of tumors. These rats were fed regular corn mixed with various percentages of GMO corn: zero (the control groups), 11, 22, and 33%. Another group was fed GMO corn plus glyphosate (Roundup) in their water, and a third was given just glyphosate. The authors concluded:
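Critics pointed in particular to the combination of small groups and a strain with a high spontaneous tumor rate. A toy simulation (all numbers here are illustrative, not taken from the paper) shows why that combination is treacherous: groups of rats with identical underlying risk still differ noticeably in tumor counts by chance alone, inviting spurious "treatment effects."

```python
import random

# Illustrative numbers only, not from the Seralini paper: simulate small
# groups of rats that all share the same spontaneous tumor rate, and see
# how much the per-group tumor counts vary purely by chance.

def simulate_groups(n_groups=10, rats_per_group=10, tumor_rate=0.7, seed=1):
    rng = random.Random(seed)
    counts = []
    for _ in range(n_groups):
        # Each rat independently develops a tumor with the same probability.
        counts.append(sum(rng.random() < tumor_rate for _ in range(rats_per_group)))
    return counts

counts = simulate_groups()
print(counts)                      # tumor counts per identical-risk group
print(max(counts) - min(counts))   # spread produced by chance alone
```

When every group has the same risk yet the counts still scatter, picking out the highest- and lowest-count groups and attributing the gap to the diet is exactly the kind of self-deception a properly powered design is meant to prevent.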
Dug the Dog strikes again, as he did three weeks ago. I had a couple of ideas for a post this week, but none of them were time-sensitive or timely. Then, over the weekend, I saw a post on the antivaccine crank blog Age of Autism by Dan “Where are the Autistic Amish” Olmsted entitled Weekly Wrap: Another Medical Practice with a Sane Vaccine Schedule – and No Autism. Given the tendency towards a—shall we say?—lack of accuracy in Olmsted’s previous reporting, it’s no surprise that he’d latch on to this study. I’m also seeing it appear around other antivaccine websites. I had gotten wind of it late last week, a few of my readers having sent it to me, but I hadn’t yet decided whether to blog about it. Then it appeared on AoA. Thanks, Dan.
So let’s see how this study is being spun by the antivaccine movement:
When we at Age of Autism talk about ending the epidemic, the “to do” list seems almost overwhelming – funding a vax-unvaxed study, getting mercury out of flu shots, proving the HepB shot is nuts, wresting control of the agenda from pharma, fixing Vaccine Court (this time in the good sense of “fix”), establishing that biomedical treatments help kids recover, and on and on.
But there’s a shortcut to all this, and it goes straight through pediatricians’ offices. The evidence is growing that where a sane alternative to the CDC’s bloated vaccine schedule is offered, and other reasonable changes adopted, autism is either non-existent or so infrequent that it doesn’t constitute an epidemic at all.
The latest example comes from Lynchburg, Va., and the pediatric practice of Dr. Elizabeth Mumper. She noticed a frightening rise in autism in the 1990s. Concerned that vaccines and other medical interventions might be playing a role – concerned in other words that SHE was playing a role — Mumper changed course.
Fewer vaccines. Fewer antibiotics. No Tylenol. Breast-feeding. Probiotics. Good, pesticide free diets.
Since then, hundreds more children have been seen in her practice, Advocates For Children. But no more autism.
Statistics is the essential foundation for science-based medicine. Unfortunately, it’s a confusing subject that invites errors and misunderstandings. We non-statisticians could all benefit from learning more about statistics as well as trying to get a better understanding of just how much we don’t know. Most of us are not going to read a statistics textbook, but the book Dicing with Death: Chance, Risk, and Health by Stephen Senn is an excellent place to start or continue our education. Statistics can be misused to lie with numbers, but when used properly it is the indispensable discipline that allows scientists:
…to translate information into knowledge. It tells us how to evaluate evidence, how to design experiments, how to turn data into decisions, how much credence should be given to whom, to what, and why, how to reckon chances and when to take them.
Senn covers the whole field of statistics, including Bayesian vs. frequentist approaches, significance tests, life tables, survival analysis, the problematic but still useful meta-analysis, prior probability, likelihood, coefficients of correlation, the generalizability of results, multivariate analysis, ethics, equipoise, and a multitude of other useful topics. He includes biographical notes about the often rather curious statisticians who developed the discipline. And while he includes some mathematics out of necessity, he helpfully stars the more technical sections and chapters so they can be skipped by readers who find mathematics painful. The book is full of examples from real-life medical applications, and it is funny enough to hold the reader’s interest.