
Last week, I wrote one of my usual ridiculously detailed posts analyzing a recent study (Price et al) that, if science and reason ruled, would be the last nail in the coffin of the hypothesis connecting autism with the mercury-containing preservative thimerosal, which used to be in many childhood vaccines but was phased out beginning in 1999, disappearing from infant vaccines (except for the flu vaccine) by early 2002. Of course, for at least the last five years, the thimerosal-autism hypothesis has been a notion whose coffin already had so many nails pounded into it that Price et al probably had a hard time finding even a tiny area of virgin wood into which to pound even the tiny nail of a study published in a journal with an impact factor of one, much less the spike that their study in Pediatrics represented.

Unfortunately, as we know, in the anti-vaccine movement unreason rules, and, not unexpectedly, this study has changed little in the debate, the fortuitously ironic happenstance of its being released the day before the publication of Mark Blaxill and Dan Olmsted's anti-mercury screed The Age of Autism notwithstanding. Among the anti-vaccine organizations attacking the study, one is Generation Rescue and another is SafeMinds. SafeMinds, as you may recall, is the organization headed up by Sallie Bernard. As you may also recall, Bernard was originally on the external consulting committee that participated in the design of Price et al and, before it, Thompson et al, the two of which ultimately made up a one-two punch against the mercury-autism hypothesis. When she saw that the results of Thompson et al were going against her idea and that no link between thimerosal-containing vaccines and neurodevelopmental disorders was showing up in the preliminary analyses, she resigned from the committee and started attacking Thompson et al. What surprised me was that she wasn't ready with a criticism of Price et al when it was released.

Of pharma shills and ignorance of epidemiological study design

Cue my e-mail, wherein various crank organizations happily send me blogging material. What was sitting in it on Thursday morning was a missive from SafeMinds that linked to what SafeMinds apparently thought to be a rebuttal of Price et al. Then, right on the website of what I consider to be currently the ultimate crank anti-vaccine propaganda blog, I saw the headline "SafeMinds Response to Thimerosal and Autism Pediatrics Study." When I read it, I realized that I had a "teachable moment," because SafeMinds' response to the study demonstrated such a misunderstanding of basic epidemiological study design that I thought I could use it to teach our readers a bit about epidemiological studies.

But first I can’t resist pointing out that Bernard apparently decided to lead with a favorite technique of the anti-vaccine movement (not to mention much of the alternative medicine movement), namely the pharma shill gambit, which was perfectly encapsulated in the very first paragraph:

This study was funded by CDC and conducted by several parties with an interest in protecting vaccine use: CDC staff involved in vaccine research and promotion; Abt Associates, a contract research organization whose largest clients include vaccine manufacturers and the CDC’s National Immunization Program; America’s Health Insurance Plans, the trade group for the health insurance industry; and three HMOs which receive substantial funding from vaccine manufacturers to conduct vaccine licensing research.

If you can’t attack the design, execution, and conclusions effectively, then attack the funding source. I wondered why Bernard decided to lead with the pharma shill gambit. Then I read the rest of the critique, and I wondered no more. The very nature of the criticisms Bernard makes tells me that she has no understanding of what a case-control study is. As I look at her complaints, I will, hopefully, be able to show you what a case-control study is, at least its basics. Once you understand that, you’ll understand a frequent technique of the anti-vaccine movement used to attack scientific studies: Criticizing them for something they are not.

By way of background, in case you don't remember or didn't read my missive last week, the hopelessly arrogant surgeon within me suggests that it would be a good idea to go back and read it before proceeding further. Now, let's look at two of SafeMinds' most revealing criticisms (after the pharma shill gambit, of course).

Revealing complaint #1:

The study sample did not allow an examination of an exposed versus an unexposed group, or even a high versus a low exposed group, but rather the study mostly examined the effect of timing of exposure on autism rates. There were virtually no subjects who were unvaccinated and few who were truly less vaccinated; rather, the low exposed group was mostly just late relative to the higher exposed group, ie, those vaccinating on time.

This criticism reveals a shocking lack of understanding of just what a case-control study is. (Well, maybe not so shocking, given the source.) Ms. Bernard is, in essence, criticizing a case-control study for not being a different kind of study. Specifically, she is criticizing this case-control study because it did not compare unvaccinated versus vaccinated children. That’s not how case-control studies work.

Here's how case-control studies do work. A case-control study is designed to compare a group of subjects who have a condition to a group who do not (in this case, autism and autism spectrum disorders, or ASDs). The idea is to select a group of subjects with the condition and then pick a group of subjects without the condition in such a way that the two groups are as alike as possible with respect to as many potential confounders as possible. From a practical standpoint, what investigators do is take a study population and identify cases. They then take a random subset of cases if they can't examine all of them, which is usually the situation. Next, they look at the rest of the population, randomly select potential controls, and pick subjects from that pool, matching them to the cases on as many parameters relevant to the condition as possible, so that the control group (those without the condition being studied) resembles the case group (those with the condition) in all ways other than the condition under study.
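To make the mechanics concrete, here is a minimal sketch of 1:1 matching in Python. The data, the matching variables, and the 1:1 ratio are all invented for illustration; this is not Price et al's actual sampling procedure:

```python
# A minimal sketch of 1:1 case-control matching on toy data.
# The variables (birth_year, hmo, has_asd) and the 1:1 ratio are
# hypothetical illustrations, not Price et al's actual procedure.
import random

random.seed(42)

# Toy study population: each child has a birth year, an HMO, and an outcome.
population = [
    {"id": i,
     "birth_year": random.choice([1994, 1995, 1996, 1997]),
     "hmo": random.choice(["A", "B", "C"]),
     "has_asd": random.random() < 0.01}
    for i in range(20_000)
]

cases = [p for p in population if p["has_asd"]]
pool = [p for p in population if not p["has_asd"]]

# For each case, draw one control matched on birth year and HMO,
# so the two groups are alike on those potential confounders.
controls, used = [], set()
for case in cases:
    candidates = [p for p in pool
                  if p["birth_year"] == case["birth_year"]
                  and p["hmo"] == case["hmo"]
                  and p["id"] not in used]
    match = random.choice(candidates)
    used.add(match["id"])
    controls.append(match)

print(f"{len(cases)} cases matched to {len(controls)} controls")
```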

The next step is then to look for differences between the groups. For a study of this sort, the specific difference being sought is exposure to a substance thought to be causative for the condition being studied. If the cases, for instance, are found to have had a higher exposure to the substance under study, then the conclusion of the case-control study is that exposure to the substance in question is associated with the condition under study and therefore might cause or contribute to it. If exposure to the substance under study is lower in the case group than in the controls, then the conclusion would be that higher exposure is negatively correlated with the condition in question. Such a result may indicate that the substance is actually protective against the condition being studied. Finally, if exposure to the substance is found to have been the same between the groups, then the conclusion is that the substance probably has no relationship to the condition under study.
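In the simplest (unmatched) setting, that comparison reduces to an odds ratio computed from a 2×2 table of exposure by case status. A minimal sketch with invented counts (Price et al's matched analysis was considerably more sophisticated):

```python
# A minimal sketch of the comparison step: an odds ratio from a 2x2 table.
# All counts below are invented for illustration; none are from Price et al.

def odds_ratio(exposed_cases, unexposed_cases, exposed_controls, unexposed_controls):
    """Odds of exposure among cases divided by odds of exposure among controls."""
    return (exposed_cases / unexposed_cases) / (exposed_controls / unexposed_controls)

print(odds_ratio(60, 40, 45, 55))  # ~1.83: exposure associated with the condition
print(odds_ratio(40, 60, 55, 45))  # ~0.55: negative correlation, possibly protective
print(odds_ratio(50, 50, 50, 50))  # 1.0: no association
```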

That's exactly what this study found: that there was no difference in thimerosal exposure between cases and controls. More specifically, it found that there was a somewhat higher exposure to thimerosal among controls, hence the barely statistically significant hazard ratio suggesting a protective effect. Because the investigators did not have a biological mechanism that could explain such an effect, they looked at potential sources of bias and could find no obvious ones. Now, in light of the knowledge I've just imparted, take a look at a couple of comments. First, there is Jake Crosby:

“Ms. Bernard is, in essence, criticizing a case-control study for not being a different kind of study.”

That’s absurd, even in a case-control study you should be able to do basic comparisons of exposed and unexposed across the two groups.

Uh, no, Jake. Usually not. That's not how case-control studies are usually designed or statistically powered, because case-control studies are all about comparing cases with controls, not comparing the exposed with the unexposed.

Then there’s this complaint in the comments on fellow SBM blogger Scott Gavura’s post about this study on Science-Based Pharmacy:

If controls have identical exposure to cases a case control study will tell you nothing.

Do you accept that. It is fundamental to being able to make any comparison between cases and controls.

This comment was from someone going by the ‘nym of childhealthsafety, who kept repeating the same complaint even after several other commenters tried to explain to him what a case-control study is, as commenter Adam does here:

As I have tried very patiently to explain to you before, if controls have identical exposure to cases, it tells you that the exposure is not associated with the outcome.

It may not be the message you want to hear since it doesn’t fit in with your anti-vax prejudices, but that doesn’t mean that it’s in any way invalid.

Sallie Bernard's complaint is only a more subtly done version of childhealthsafety's and Jake Crosby's. All are founded on a mistaken idea of what a case-control study is, and their arguments make me suspect that they do not know the difference between a case-control study and a cohort study. A simple way of looking at the difference is that a case-control study compares subjects with a condition to those without it and tries to identify factors that differ between them (in the case of Price et al, thimerosal exposure), while a cohort study starts with subjects who do not have the condition being studied, divides them into groups based on putative risk factors for that condition, and then asks whether more subjects with those putative risk factors go on to develop the condition. A key feature is that none of the subjects initially have the condition being studied. Both designs have strengths and weaknesses. One strength of a cohort design is that it can be undertaken prospectively, although a prospective cohort study is no longer possible for thimerosal exposure as a risk factor for autism: thimerosal was removed from childhood vaccines by early 2002, longer ago than the interval between birth and the first symptoms of ASDs, which usually appear by around age 3. In any case, Bernard, Jake, and childhealthsafety are basically criticizing a case-control study for not being a cohort study, as much as Jake tries to deny it.
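If a short sketch helps, the directional difference between the two designs looks like this in Python. Everything here is invented toy data with no exposure-outcome association built in, so both estimates should hover near 1:

```python
# A minimal sketch contrasting the two designs on the same toy population.
# All data are invented, with no exposure-outcome association built in.
import random

random.seed(0)
population = []
for _ in range(100_000):
    exposed = random.random() < 0.5
    condition = random.random() < 0.01  # independent of exposure
    population.append({"exposed": exposed, "condition": condition})

# Cohort design: start from exposure (no one has the condition yet),
# follow forward, and compare incidence between exposed and unexposed.
exposed = [p for p in population if p["exposed"]]
unexposed = [p for p in population if not p["exposed"]]
risk_exp = sum(p["condition"] for p in exposed) / len(exposed)
risk_unexp = sum(p["condition"] for p in unexposed) / len(unexposed)
print(f"Cohort relative risk: {risk_exp / risk_unexp:.2f}")

# Case-control design: start from outcome (cases vs. controls) and
# look backward at how exposure differs between the two groups.
cases = [p for p in population if p["condition"]]
controls = random.sample([p for p in population if not p["condition"]], len(cases))
exp_cases = sum(p["exposed"] for p in cases)
exp_controls = sum(p["exposed"] for p in controls)
odds = (exp_cases / (len(cases) - exp_cases)) / \
       (exp_controls / (len(controls) - exp_controls))
print(f"Case-control odds ratio: {odds:.2f}")
```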

But it’s worse than that. Check out Bernard’s next criticism:

Another validity problem is the effect on exposure variation of stratifying/matching by birth year and HMO. No reason was provided for matching based on year of birth since the long follow up period allowed sufficient time for all cases to be diagnosed. The matching requirements lead to two statistical problems.

I'm no epidemiologist (and if there are any epidemiologists out there interested in writing for SBM, please contact me), but even I know that matching by birth year is such an utterly accepted and conventional method of matching for case-control studies, particularly those studying children, that Bernard's complaint is, to say the least, perverse. Matching by birth year minimizes variation between cases and controls that might be due to being raised in different years, going to the same schools in different years, or having different exposures related to being born in different years. In other words, this is a non-complaint, and Bernard's reason for it simply adds to the revelation of her complete lack of understanding of what a case-control study is:

Each of the three HMOs would buy in bulk the same vaccines for all its patients and the promotion of a new vaccine would tend to be uniform across an HMO, so that within an HMO, exposure variability is lessened. Additionally, the recommended vaccines, the formulations offered by manufacturers, and the uptake rate of new vaccines varied by year, so that within a given year, exposure variability is further reduced. The effect is that children in a given year in a given HMO would tend to receive the same vaccines.

She writes this as though it would be a bad thing for the study to minimize as many differences as possible between the two groups in a case-control study! After all, the study variable is thimerosal, not vaccines. If you want to concentrate on thimerosal, then naturally you'd want to eliminate as many of the other variables as possible, including the number of vaccines each child received. Again, the hypothesis is that it is thimerosal in the vaccines that causes autism, not that vaccines cause autism (although certainly Sallie Bernard also believes that and seems unable to keep from confusing the two hypotheses in her mind). Matching by birth year is one way to help accomplish that. She also constructs a rather bizarre "what if" scenario:

The variables of time and place (HMO) are correlated with the exposure variable. Statistically, the correlation would reduce the effect of the exposure variable, as the two matching variables compete with the exposure variable to explain differences in the autism outcome. For example, say for simplicity that HMO A used vaccines in 1994 which exposed all enrolled infants up to 6 months of age with 75 mcg of mercury; the rate of ASD for 1994 births in HMO A was found to be 1 in 150. In 1995, HMO A used vaccines which exposed all enrolled infants up to 6 months of age to 150 mcg of mercury; the rate of ASD for these children rises to 1 in 100. By stratifying by year for this HMO, those children born in 1994, whether or not they had an ASD, would show identical exposures. Those with an ASD born in 1995 in HMO A would also have the same exposures as those born in 1995 in HMO A without an ASD. The association between the increased exposure and the increase in ASD can only be detected by removing the birth year variable, which otherwise masks the effect of exposure on outcomes.

Once again, Bernard misunderstands the concept of a case-control study. In a case-control study, investigators divide subjects up into cases and controls and then look for differences between them. Besides her misunderstanding of how case-control studies are done, this is one of those criticisms that sounds superficially plausible, if it weren't for all the other variables also tested in the various multivariate models used in this study: birth weight, household income, maternal education, marital status, maternal and paternal age, birth order, breast feeding duration, and child birth conditions (including Apgar score and indicators for birth asphyxia, respiratory distress, and hyperbilirubinemia); maternal tobacco use, alcohol use, fish consumption, exposure to non-vaccine mercury sources, lead, illegal drugs, valproic acid, folic acid, and viral infections during pregnancy; and child anemia, encephalitis, lead exposure, and pica.
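What matching on birth year actually buys you is protection against confounding by time: if both exposure levels and baseline diagnosis rates drift across birth years, an unstratified comparison manufactures an association out of nothing. Here's a toy simulation in Python making that point; all numbers are invented, and this is a deliberately simplified stand-in for the study's actual matched analysis:

```python
# A toy simulation of confounding by birth year, with invented numbers.
# Exposure prevalence and baseline risk both shift by year, but exposure
# has NO true effect on the outcome in this simulation.
import random

random.seed(1)
strata = {1994: {"p_exposed": 0.3, "baseline_risk": 0.01},
          1995: {"p_exposed": 0.7, "baseline_risk": 0.03}}

children = []
for year, params in strata.items():
    for _ in range(50_000):
        exposed = random.random() < params["p_exposed"]
        outcome = random.random() < params["baseline_risk"]  # independent of exposure
        children.append((year, exposed, outcome))

def counts(rows):
    """(exposed cases, unexposed cases, exposed controls, unexposed controls)."""
    a = sum(1 for _, e, o in rows if e and o)
    b = sum(1 for _, e, o in rows if not e and o)
    c = sum(1 for _, e, o in rows if e and not o)
    d = sum(1 for _, e, o in rows if not e and not o)
    return a, b, c, d

# Crude analysis, ignoring birth year: a spurious association appears.
a, b, c, d = counts(children)
print(f"Crude OR: {(a * d) / (b * c):.2f}")  # noticeably above 1

# Mantel-Haenszel odds ratio, stratified by birth year: close to the truth.
num = den = 0.0
for year in strata:
    a, b, c, d = counts([r for r in children if r[0] == year])
    n = a + b + c + d
    num += a * d / n
    den += b * c / n
print(f"Mantel-Haenszel OR: {num / den:.2f}")  # ~1.0
```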

Moreover, in multi-institutional studies, or studies conducted at more than one site, it is considered mandatory to compare the characteristics of the subjects enrolled at each site in order to make sure that they are comparable. Add to that all the other subject characteristics examined, and Bernard's complaint becomes just another smokescreen, especially since other results not reported in the Pediatrics paper suggest that autism prevalence was stable during the six years covered. In fact, if you look at the technical report, you'll find that the authors checked the influence of HMO:

Were overall results driven by results from one particular HMO? To address this question we fit models separately to the data from the two largest HMOs and compared the results to the overall results. The exposure estimates from each of the two large HMOs are similar in direction and magnitude to the overall results. However, they were seldom statistically significant due to the smaller sample sizes obtained when modeling separately by HMO. We conclude that the overall results were not primarily driven by the results in one particular HMO.

They also controlled for study area:

Controlling the geographic area within the HMO coverage could increase the comparability of the cases, as well as make the data collection more concentrated and therefore less expensive. During creation of the sampling frame, children that were known to live more than 60 miles from an assessment clinic were excluded from the sampling frame.

Finally, they did several statistical tests to determine if the results were driven primarily by one subgroup:

In order to assess whether the results were sensitive to the influence of one or a few highly influential observations within a single matching stratum, we tried re-fitting the analysis model for the AD outcome to sequential subsets of data where, in each subset, all data from a single stratum were omitted. For example, if one or a few highly influential observations were in Stratum “2”, then results from a model where the data were omitted from that stratum would be very different from the results when the data from the stratum are included.
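The logic of that check is simple enough to sketch in a few lines of Python. Here, `fit_model` is a generic placeholder for whatever model is being refit; this is not the actual analysis code from the technical report:

```python
# A minimal sketch of a leave-one-stratum-out sensitivity check.
# `fit_model` is a generic placeholder for whatever model is refit;
# this is not the actual analysis code from the technical report.

def leave_one_out_estimates(data_by_stratum, fit_model):
    """Refit the model once per stratum, omitting that stratum's data."""
    estimates = {}
    for left_out in data_by_stratum:
        subset = [row
                  for stratum, rows in data_by_stratum.items()
                  if stratum != left_out
                  for row in rows]
        estimates[left_out] = fit_model(subset)
    return estimates

# Toy demo: the "model" is just a mean, and stratum s3 is highly influential,
# which shows up as a very different estimate when s3 is left out.
data = {"s1": [1.0, 1.2], "s2": [0.9, 1.1], "s3": [5.0, 5.2]}
print(leave_one_out_estimates(data, lambda rows: sum(rows) / len(rows)))
```

If no single stratum is driving the result, every leave-one-out estimate should sit close to the estimate from the full data set, which is what the authors report finding.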

It is in general a good idea at least to peruse the entire technical report before making complaints like this. True, the reports run to several hundred pages, but they contain an incredible amount of detail about the design of the study and how the data were analyzed, far more than could fit into any paper. It's rare to have a study for which so much information regarding the nitty-gritty of its design and analysis is made publicly available. Another complaint is that there was a low response rate, which is the same complaint Bernard made about Thompson et al three years ago. The answer is the same now as it was three years ago, and it is also included in excruciating detail in the technical report.

If you can’t find a deficiency in a study, make one up!

Case-control studies are, by their very nature, retrospective studies. That means they can have confounding factors resulting in false positives or false negatives. However, because it is unethical to do a randomized, double-blind clinical trial to address the question of whether thimerosal exposure from thimerosal-containing vaccines causes autism, science-based medicine has to make do with the highest quality retrospective data that can be obtained. Price et al is simply the latest study to use retrospective data to ask whether thimerosal is safe, and it has come up with the answer that thimerosal-containing vaccines did not cause an "autism epidemic." Moreover, as Bernard notices, Price et al is not the only study in which thimerosal exposure was associated with a decreased risk of ASD, which brings her back to the apparent protective effect of thimerosal.

Amazingly (well, not so amazingly), she doesn't note that the authors acknowledged and discussed this result. She then constructs a scenario designed to "demonstrate" that shifts in participation in key groups in such a study can change the results. No kidding. Here's the problem. Although Bernard does show that differential participation rates among controls, depending on whether they were late vaccinators or not, could shift the exposure distribution in the control group, this is yet another smokescreen. For one thing, she envisions identical participation rates between late and on-time vaccinators in the ASD group, while in the non-ASD group she envisions 40% participation of on-time vaccinators and only 15% participation of late vaccinators. This is, to say the least, a highly artificial and unlikely construct, but that's what it took for her to make the numbers work. To justify these numbers, she cited a paper in which the response rate for subjects with no thimerosal exposure was 48% and for those with "full exposure" it was 65%. That is not a nearly three-fold difference.
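It's worth seeing just how much heavy lifting those assumed participation rates do. A quick back-of-the-envelope calculation (the 80% on-time share is invented purely for illustration; the participation rates are the ones discussed above):

```python
# Back-of-the-envelope arithmetic for the differential-participation scenario.
# The 80% on-time share is invented for illustration; the participation rates
# are the ones discussed above (Bernard's 40%/15% vs. the cited 65%/48%).

def on_time_share_of_controls(p_on_time, rate_on_time, rate_late):
    """Fraction of recruited controls who vaccinated on time,
    given differential participation rates."""
    recruited_on_time = p_on_time * rate_on_time
    recruited_late = (1 - p_on_time) * rate_late
    return recruited_on_time / (recruited_on_time + recruited_late)

print(on_time_share_of_controls(0.80, 0.40, 0.15))  # Bernard's scenario: ~0.91
print(on_time_share_of_controls(0.80, 0.65, 0.48))  # cited paper's rates: ~0.84
print(on_time_share_of_controls(0.80, 0.40, 0.40))  # equal participation: 0.80
```

Even taking the cited paper's response rates at face value, the resulting skew in the control group's make-up is far smaller than the one Bernard needed to assume.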

In other words, Bernard had to make up a highly artificial hypothetical situation in which she came up with differences far beyond what is justified in order to make the numbers in her scenario work. Nowhere does she show that there’s any reason to suspect such a huge difference in response rates. Certainly, I could find no indication that would lead me to suspect such huge reporting differences. If that’s the best she could come up with, Price et al is a better study than I thought the first time around.

As desperate as it is, though, Bernard's exercise in "what if" numerical prestidigitation, designed to "prove" that, if the phase of the moon is right and huge percentages of parents in different groups declined to participate in just the right way to cook the numbers, a harmful effect due to thimerosal could be turned into a protective effect, is nothing compared to what childhealthsafety does in his criticism of the study:

It is known that children with autistic conditions have difficulty excreting mercury [some references below]. The mercury accumulates in their body tissues including the brain, unlike their non autistic counterparts. Mercury is highly neurotoxic — in parts per billion. Only infinitesimally tiny amounts can do significant damage to a developing infant brain.

Despite this being known and documented, the authors of this Pediatrics paper simply measured how much mercury went into all the children but not what did or did not come out. No information was obtained about how much mercury the autistic children accumulated in their brains compared to the amounts excreted by the non autistic comparison group of children. End result — another piece of hyped junk science.

The cases of autistic children were not matched with a comparable group of non autistic “control” children to enable a proper comparison to be made. Yet the study was supposedly a “case-control” study. For the cases to be matched to controls it would be necessary to check the controls retained mercury in the same manner as the autistic cases.

Those who have been involved in the vaccine wars as long as I have will immediately recognize this as the Legend of the "Poor Excretors." There is, of course, no evidence for the concept that children with autism and ASDs have any more difficulty excreting mercury than anyone else. It's not for nothing that this canard has been referred to as the myth of the "poor excretor." Of course, that hasn't stopped the pseudoscientists from trying again and again to show that there is somehow a huge difference between autistic children and neurotypical children in how they handle mercury. As far as science has been able to tell, there isn't. Given that there is no good scientific justification to match controls with cases on the basis of mercury excretion, Price et al didn't do it. Besides, to match children on such a basis would require that all the cases and controls had been tested for various measures of mercury excretion. Given that such tests are relegated to the realm of DAN! doctors and their pseudoscience, it would be highly unlikely that such data would be available anyway. This is, after all, a retrospective study. I am, however, honored that childhealthsafety would decide to name me in his other attack on the study.

Conclusion?

As much as I would like to think that Price et al would be the last word, given the number of large, well-designed studies that preceded it and found no association between thimerosal in vaccines and autism, I doubt that it will be. This hypothesis has been thoroughly studied and found to be wanting. Indeed, beginning three years ago, the weight of the scientific evidence against the mercury-thimerosal-autism hypothesis was such that even Generation Rescue had moved from claiming that autism is a "misdiagnosis for mercury poisoning" to the vaguer (and much more difficult to falsify) hypothesis of "too many [vaccines] too soon" as a cause of autism. Yet the mercury hypothesis is the zombie that won't die. Having made my Zombieland reference to the "double-tap" in my post last week, I can only conclude that this is one zombie that can survive even a double-tap.
