I have a love-hate relationship with epidemiology.
On the one hand, I love how epidemiology can look for correlations in huge sample sizes, sample sizes far larger than any that we could ever have access to in clinical trials, randomized or otherwise. I love the ability of epidemiology to generate hypotheses that can be tested in the laboratory and then later in clinical trials. Also, let’s not forget that epidemiology is sometimes the only tool available to us that can answer some questions. Such questions generally involve hypotheses that can’t be tested in a randomized clinical trial because of ethical or other concerns. A good example of this is the question of whether vaccines cause autism. For obvious ethical reasons, it’s not permissible to perform a randomized clinical trial in which one group of children is vaccinated and one is not, and then neurodevelopmental outcomes, such as autism and autism spectrum disorders, are tracked in the two groups. The ethical concern with such a study, of course, is the potential harm that would be likely to come to the unvaccinated control group, children who would be left unprotected against common and potentially deadly communicable diseases.
On the other hand, epidemiology is one of the messiest of sciences, and epidemiological studies are among the most difficult in all of science to perform truly rigorously. The number of factors that can confound is truly amazing, and as a result, it’s very, very easy for an epidemiological study to detect apparent correlations that are either spurious or appear much stronger than the “true” correlation. There can be confounding factors beneath confounding factors wrapped in more confounding factors, the relationships among which are not always apparent. Not infrequently, a condition can appear to be correlated with, for instance, an environmental factor, but in reality that environmental factor and the condition both correlate with a third, unknown confounder. Worse, epidemiologists know that correlation does not necessarily equal causation, but the general public, for the most part, does not. That’s why, when anti-vaccine activists, for instance, point to a rising autism prevalence and then note that autism prevalence started rising around the same time the vaccine schedule was expanded, to the average layperson the argument sounds compelling. As a result, the design of an epidemiological study is paramount in order to account for or minimize such factors. That’s why I’ve always said I could never be an epidemiologist. Even though I was very good at math in college, the statistics still made my brain hurt, and I don’t have the patience for the messiness of trying to account for all the possible confounding factors.
However, for all their strengths and flaws, epidemiological studies are an integral part of science-based medicine. They are used to identify predisposing factors for diseases and conditions, environmental contributors to disease, and adverse reactions to drugs, among many other useful pieces of data. That’s why, from time to time, I like to examine epidemiological studies, particularly if they’re epidemiological studies that are getting a lot of press.
The use and abuse of autism epidemiology studies
For instance, studies like this one, described in a story in the Los Angeles Times on Friday entitled “Proximity to freeways increases autism risk, study finds: More research is needed, but the report suggests air pollution could be a factor”: