Over the last couple of months, I’ve noticed something about the anti-vaccine movement. Specifically, I’ve noticed that the mavens of pseudoscience who make up the movement seem to have turned their sights with a vengeance on the hepatitis B vaccine. The reason for this new tactic, I believe, is fairly obvious. The fact that the hepatitis B vaccine is administered shortly after birth seems somehow to enrage the anti-vaccine movement more than just about any other vaccine. Moreover, given that, aside from maternal-child transmission when the mother is infected, hepatitis B is usually contracted only through bloodborne contact (the sharing of needles, the administration of contaminated blood) or sexual activity, it’s very easy for anti-vaccinationists to make a superficially plausible-sounding argument that it’s not a necessary vaccine, even though there are reasonable rationales for giving it to infants. The image of sticking a needle into a newborn infant trumps that, though, at least for the anti-vaccine movement. Another possibility, suggested by Steve Novella just yesterday, is that, with the collapse under an overwhelming pile of evidence of the idea that thimerosal, the mercury-containing preservative used in childhood vaccines until 2001, caused an “epidemic” of autism, and with the failure of the “too many too soon” slogan to convince anyone who is not already an anti-vaccinationist, the movement needed a new bogeyman to blame for autism. The hepatitis B vaccine, which was added to the pediatric vaccination schedule in the 1990s, around the right time to confuse correlation with causation when it comes to the increase in autism diagnoses (just like thimerosal), was a perfect next target, given that it’s administered shortly after birth.
Indeed, just the other day, the anti-vaccine crank groups the National Vaccine Information Center (NVIC), Talk About Curing Autism (TACA), and the anti-vaccine crank blog Age of Autism posted a call for the elimination of hepatitis B vaccination for newborns:
Washington, DC – National Vaccine Information Center and Talk About Curing Autism are calling on President Obama to order the immediate suspension of the Centers for Disease Control and Prevention recommendation of the birth dose of the Hepatitis B vaccine after two recent studies linking the Hepatitis B vaccine to functional brain damage in U.S. male newborns and infant primates. In a related development today, the United States Department of Health and Human Services, including the Health Resources and Services Administration and Centers for Disease Control and Prevention, announced that 1 in every 91 children are now diagnosed with an autism spectrum disorder as reported in the November 2009 issue of Pediatrics. Previous data released by the CDC indicated a prevalence of 1 in every 150 children affected by the disorder.
Note how AoA not-so-subtly juxtaposed the latest information about autism prevalence with its call to eliminate the birth dose of the hepatitis B vaccine. Very clever. By doing so, it linked the two in readers’ minds, as if one had something to do with the other. There’s no good scientific evidence that the hepatitis B vaccine has anything to do with the “autism epidemic.” Meanwhile, David Kirby is up to his usual nonsense, and the resident anti-vaccine propagandist at CBS News, Sharyl Attkisson, who has been known to feed Age of Autism information on at least one occasion in the past, served up this credulous, uncritical interview with Andrew Wakefield:
The quantity of misinformation in that single six-minute video is far beyond the scope of this article. Were I to start dissecting it, I would not have time to do what this article is intended to do: deal with the study Wakefield is hawking. That’s why I leave the dissection of this pièce de résistance of disingenuousness and misinformation as an exercise for SBM readers–after reading the rest of this post, of course. Trust me, it will help you.
At the heart of this latest propaganda onslaught by the anti-vaccine movement are two studies, one a retrospective study in humans and the other a study in monkeys, both of which the anti-vaccine movement is promoting as slam-dunk evidence that the hepatitis B vaccine is causing all sorts of horrific problems. Taking both of them on in one post is too much, even for my logorrheic tendencies. So I’ll deal first with Wakefield’s monkey study and then, either later this week or sometime next week, hopefully discuss the human study.
The reason I start with the monkey study is that the anti-vaccinationists are so confident of it that they’ve placed the accepted manuscript on the Thoughtful House website (PDF). The study has also been posted on the anti-vaccine blog Age of Autism (PDF). That means that you can read it for yourself. (Thanks, Age of Autism and Thoughtful House! You made my work much easier!)
Wakefield’s study is entitled Delayed Acquisition of Neonatal Reflexes in Newborn Primates Receiving a Thimerosal-Containing Hepatitis B Vaccine: Influence of Gestational Age and Birth Weight. It was performed and written by a cast of characters that we’ve met before, including the crank who launched the MMR scare in the U.K. through shoddy science, his being in the pocket of trial lawyers, and possibly even scientific fraud. Then there’s also Laura Hewitson. We’ve seen her before as the author of a couple of abstracts that I dissected in gory detail last year. Suffice it to say that not only was the science shoddy, but massive conflicts of interest were not disclosed. The only difference this time around is that the conflicts of interest were disclosed.
In fact, let’s look at the conflicts of interest first, using the statement straight from the manuscript:
Prior to 2005, CS and AJW acted as paid experts in MMR-related litigation on behalf of the plaintiff. LH has a child who is a petitioner in the National Vaccine Injury Compensation Program. For this reason, LH was not involved in any data collection or statistical analyses to preclude the possibility of a perceived conflict of interest.
It’s hard for me not to retort that LH (Laura Hewitson) is the first author of the paper! She designed and coordinated the study, as the manuscript itself states here:
LH and AJW designed the study but were not involved in data collection and statistical analysis. LH was also responsible for coordinating all aspects of the study.
One wonders how a researcher can be “responsible for coordinating all aspects of the study” if that researcher is not involved in data collection and analysis, one does. Hewitson is also the corresponding author, which presumably means that she wrote the manuscript, or most of the manuscript. Contrary to Blaxill’s hilariously disingenuous claim, there is not just the appearance of a conflict of interest; there is a conflict of interest, a massive conflict of interest. The same is the case with Andrew Wakefield, who not only was in the pocket of trial lawyers when he did the work that led to his infamous 1998 Lancet paper that started the MMR scare in the U.K. but was recently revealed to have almost certainly falsified data. Wakefield, of all people, would have a lot to gain if there were any work that supported his belief that vaccines cause autism. In fact, if I were to use the same criteria that Age of Autism did when it automatically labeled any study with any pharmaceutical company underwriting whatsoever as hopelessly biased in its Fourteen Studies website, I could stop right here and dismiss Wakefield’s current monkey study as so hopelessly the result of a conflict of interest that I don’t even need to analyze it.
Fortunately, unlike the anti-vaccine propagandists responsible for Generation Rescue, the Age of Autism, and Fourteen Studies, that’s not how I roll. I did, however, have a hearty laugh at AoA’s attempt to justify the blatant conflict of interest thusly:
One likely tactic of critics of the study will include attempts to nullify the evidence based on the alleged bias of those involved. For one, the study is privately funded and acknowledges some well known autism advocates as financial contributors. These include the Johnson family (Jane Johnson is co-author of Changing the Course of Autism, a member of the Board of Directors of Thoughtful House and Director of Defeat Autism Now!), SafeMinds, the Autism Research Institute and Elizabeth Birt. Although all of these groups make clear their research interest is vaccine safety, they are frequently attacked for being “anti-vaccine”, an epithet that will almost certainly be hurled again here.
If the shoe fits…
After all, Thoughtful House is the place where Andrew Wakefield plies his pseudoscience on autistic children; Defeat Autism Now! is a cesspit of autism pseudoscience and quackery largely based on the discredited idea that vaccines cause autism, as are the Autism Research Institute and SafeMinds.
Even more amusing was this:
The most aggressive attacks, however, will likely be reserved for the study authors. The basis of these attacks is best anticipated by the following conflict of interest disclosure in the published paper. “Prior to 2005, [Carol Stott] and [Andrew Wakefield] acted as paid experts in MMR-related litigation on behalf of the plaintiff. [Laura Hewitson] has a child who is a petitioner in the National Vaccine Injury Compensation Program. For this reason, [Hewitson] was not involved in any data collection or statistical analyses to preclude the possibility of a perceived conflict of interest.”
Isn’t the hypocrisy breathtaking? To me, it’s truly astounding! The anti-vaccine movement in general and AoA in particular go out of their way to attack any investigator who does a vaccine study that fails to find a link between vaccines and autism or other neurodevelopmental outcomes. Inevitably, they use any hint of research funding, past or present, by pharmaceutical companies to paint the investigators as hopelessly biased. They relentlessly attack, for example, Paul Offit as the Dark Lord of Vaccination–Satan Incarnate with syringes!–because he invented an effective vaccine and made money selling it to a pharmaceutical company (along with his university, it should be added). In Fourteen Studies, AoA and Generation Rescue slimed every investigator who received a penny of money from a pharmaceutical company. In fact, Generation Rescue defined conflicts of interest this way:
We considered a scientist employed by a vaccine maker or a study sponsored by a vaccine maker to have the highest degree of conflict, with a public health organization (like the CDC) to be the second-worst.
Based on some of the things that Generation Rescue said about those “Fourteen Studies,” it also appeared to define the mere fact of being funded through grants from the CDC, NIH, American Academy of Pediatrics, or even the Canadian Institutes for Health Research as a hopeless conflict of interest. Scientists who have had NIH grants (such as myself) or grants from any of these other organizations know just how ridiculous it is to consider that particular funding source a horrific conflict of interest. The bottom line is obvious. It’s a conflict of interest only if Generation Rescue says it is. Accepting funding from a pharmaceutical company? It’s a conflict of interest. No doubt about it. Real researchers would define it so. But there’s more to a conflict of interest than just where one’s research funding comes from or what companies one might work for. Not to Mark Blaxill. To Mark Blaxill and the Age of Autism, being one of the complainants in the Autism Omnibus and being funded by organizations whose purpose is to demonstrate that vaccines somehow cause autism and other neurodevelopmental problems pose no problem as conflicts of interest. None at all. Nor does Andrew Wakefield’s history of being in the pocket of trial lawyers at the time he did the “research” (and I do use the term loosely) that led to his infamous 1998 Lancet paper.
In fact, it’s not only a non-issue, it’s a sign of virtue to have accepted funds from anti-vaccine groups like SafeMinds. That’s because anti-vaccine advocates like Blaxill see themselves on the side of the angels and think that they could never, ever have their objectivity affected by the funding source, having an autistic child, or being part of any legal action seeking compensation for “vaccine injury,” which, by the way, would definitely be helped by apparent scientific evidence showing that vaccines can cause autism or other neurodevelopmental disorders. One notes that one of the things the previous monkey study published as an abstract by Hewitson and Wakefield got dinged for in the blogosphere was that no conflicts of interest were reported. Poor Mark seems really peeved at the criticism Hewitson justly received for that little ethical lapse last year.
On to the study. The first thing I always try to figure out whenever reading any study is a simple question: What is the hypothesis being tested? A good study explicitly states its hypotheses in no uncertain terms. Not this one. This is the closest I could find to the study hypothesis:
Here we examine, in a prospective, controlled, observer-blinded study, the development of neonatal reflexes in infant rhesus macaques after a single dose of Th-containing HB vaccine given within 24 hours of birth, following the US childhood immunization schedule (1991-1999). The rhesus macaque is used in preclinical vaccine neurotoxicity testing and displays complex early neurobehavioral and developmental processes that are well characterized (reviewed by ).
Of course, to anyone who’s been involved in dealing with the anti-vaccine movement, one thing that’s very clear is that the subtext behind this is the unsinkable rubber duck of a belief among the anti-vaccine movement that, somehow, someway, either vaccines or mercury in vaccines causes autism. An inconvenient fact is that there has been no thimerosal in early childhood vaccines other than the flu vaccine since late 2001, but that doesn’t stop the anti-vaccine movement. I suspect that the reviewers of this article were probably blissfully ignorant of this context and concentrated solely on the methodology. Had they known, no doubt they would have asked some uncomfortable questions in their reviews. Of course, they would have no way of knowing that this study is in fact more of a propaganda tool than anything else. One thing that needs to be emphasized is that there really is no good primate model of autism, at least not that I’m aware of. That’s why Hewitson and Wakefield resorted to looking at infant reflexes, even though it’s not even clear whether these reflexes are in the least bit relevant to humans. Several readers have informed me that the primitive reflexes studied by Wakefield and Hewitson in this study are present at birth in humans.
Another question needs to be asked: Why did the investigators look at a thimerosal-containing hepatitis B vaccine? There’s no thimerosal in the hepatitis B vaccine anymore; there hasn’t been since 2001. In fact, if you read the methods section of the paper, you’ll see that Hewitson et al. added thimerosal to Recombivax HB (Merck) in order to recreate that thimerosal feeling from the 1990s. Why on earth would they do something like that? Especially since the authors state in the conclusion that the study design “was not able to determine whether it was the vaccine per se, the exposure to thimerosal, or a combination of both, that caused these effects”? I will suggest a possible reason before the end of this discussion.
Before I get to the effects Wakefield and Hewitson supposedly observed, let’s just consider something else. When I read this study, there was something that set my skeptical antennae twitching fiercely. Remember the abstracts I discussed last year? Let’s take a trip down memory lane and read what I wrote back then:
What first leaps to mind in looking at the study is that there are 13 monkeys in the “vaccine” group and only three in the control group. No explanation is given for why there are such unequal numbers. Similarly, there is no mention of how the monkeys were assigned to one group or the other (randomization, anyone?), whether the experimenters were blinded to experimental group and which shots were vaccine or placebo, whether the monkeys were weight- and age-matched, or any of a number of other controls that careful researchers would do. Right off the bat, from the small numbers (particularly with only three monkeys in the control group), I can say that the study almost certainly doesn’t have the statistical power to find much of anything with confidence.
Now, let’s look at how many monkeys are in this study: thirteen receiving the hepatitis B vaccine plus added thimerosal. Doesn’t that seem rather–shall we say?–coincidental, convenient, even? There were also three animals receiving no injection and four receiving a saline placebo. Sound familiar? It should. In the studies described in last year’s abstracts, there were likewise three controls receiving no injection and four receiving a saline placebo. Why do I bring this up? Remember, the abstract from last year described the monkeys as undergoing the “entire vaccination schedule” (actually, a version of the entire U.S. vaccination schedule, with the vaccination doses moved closer together to try to make the times the monkeys received various vaccines supposedly equivalent to the same physiological and developmental age at which humans receive the same vaccines in the vaccination schedule). The inevitable consequence of this, of course, is that the monkeys received a lot of vaccines in a much shorter time period than human babies do. Remember how, as a result, the results of these abstracts were portrayed as showing that vaccinated monkeys “exhibited autism-like symptoms”:
The first research project to examine effects of the total vaccine load received by children in the 1990s has found autism-like signs and symptoms in infant monkeys vaccinated the same way. The study’s principal investigator, Laura Hewitson from the University of Pittsburgh, reports developmental delays, behavior problems and brain changes in macaque monkeys that mimic “certain neurological abnormalities of autism.”
The findings are being reported Friday and Saturday at a major international autism conference in London.
Although couched in scientific language, Hewitson’s findings are explosive. They suggest, for the first time, that our closest animal cousins develop characteristics of autism when subjected to the same immunizations – such as the MMR shot — and vaccine formulations – such as the mercury preservative thimerosal — that American children received when autism diagnoses exploded in the 1990s.
Is it just me, or does this latest study strike you as being merely a subset of a study that’s already done? Given the similarity between the study described in the manuscript, in which the hepatitis B vaccine was even spiked with thimerosal in order to mimic the vaccine schedule of the 1990s (because the hepatitis B vaccine no longer contains thimerosal), and the previously reported abstract, in which the entire vaccine schedule of the 1990s was supposedly mimicked, it does make me wonder. Could it be that the results being reported derived from observations made on the same monkeys used to generate the IMFAR results? In other words, could it be that the investigators gave the monkeys the hepatitis B vaccine after birth, tested their various reflexes early in their lives (only for the first 14 days), and then continued with their “simulated” vaccination schedule in order to produce the rest of the observations reported last year? Inquiring minds want to know! After all, the current study only goes out two weeks; it would be easy to continue the rest of the simulated vaccination schedule after that and then make measurements on the same monkeys.
Indeed, one wonders if, stung by the criticisms of inadequate controls, the investigators added additional controls and kept the same group of 13 monkeys as the “vaccinated group.” Maybe they didn’t, but the similarity between the numbers of monkeys used in the studies described in the IMFAR abstracts last year and the numbers of monkeys used in this study sure do raise an eyebrow, don’t they? So does this part of the methods section:
Animals were allocated to either the vaccinated (exposed) or saline/no injection (unexposed) groups on a semi-random basis in order to complete peer groups for later social testing such that each peer group contained animals from either the unexposed or exposed study groups. Once a new peer group was started, new animals were assigned to this group until it consisted of 3 or 4 infants, the ages of which were less than 4 weeks apart from their peers.
My first thought was: What’s with this “semi-random basis” stuff? Why not a random basis? Being a little bit “not random” is like being a little bit pregnant, if you know what I mean. In other words, when investigators start adding nonrandom selection to a protocol, it’s not random anymore. That much should be obvious. And when the selection of animals is no longer random, that calls the whole study into question. It sounds to me as though Hewitson and Wakefield designed the experiment (or let the experiment unfold) so that all members of a given peer group received the same treatment; i.e., they all got Th-HepB (HepB vaccine with thimerosal added), they all got saline, or they all got no injection. If so, that’s certainly consistent with my speculation that some animals were added at the end of the experiment. If my interpretation is correct (i.e., that more animals were added later as controls), it strikes me as odd. Why on earth would Hewitson and Wakefield choose that design? Why not include at least one saline or no-injection animal in each peer group? With the apparent design of this experiment, there’s no way to discriminate between possible vaccine-related effects and uncontrolled time-related confounders, given that some monkeys under this design must have been analyzed in a noncontemporaneous fashion.
The questions of why there are two control groups and why the randomization scheme was such that each member of a peer group got the same treatment become especially suspicious to me because, in their analyses, Hewitson and Wakefield pool the four monkeys receiving the saline control with the three receiving no injection for purposes of calculating means. Could it be that the investigators simply added a few monkeys after the experiment had already been started (or even after the original 16 monkeys had already undergone the entire “vaccination schedule”)? Again, inquiring minds want to know! Could it be that, in order to beef up their apparent statistical power to detect differences in these various reflexes, some additional monkeys had to be added? Or was this done in response to reviewers’ concerns? If that’s the case, then when were these additional monkeys studied? How long after the original group? Mark Blaxill brags that the person who measured the monkeys’ reflexes was trained by an expert until her results had a high concordance with those of experts, but if there were a several-month delay between when she measured the first group of monkeys and when she measured the additional controls, it’s not too hard to imagine that she got better and thus more able to detect subtle differences in the reflexes. If the conditions under which the monkeys were raised changed over time, the same sort of time-dependent confounders could be at work here. I’d really like to see when each monkey was born and what the time to criterion was for each monkey. In other words, I’d like to see at least some of the raw data.
The similarities between the designs of the studies described in the IMFAR abstracts last year and this study sure make me wonder if perhaps Hewitson and Wakefield are “minimizing” the use of animals. Of course, minimizing the use of animals, particularly primates, in research is normally a good thing, but if that’s what they’re doing, why not report the entire study? After all, in the video above, Wakefield admits that this study is part of an ongoing study of the “vaccine schedule.” However, if you go back to look at the IMFAR abstracts that I discussed last year, you’ll see that it was stated that the monkeys were killed between 12-15 months and their tissues examined at necropsy. In other words, Hewitson and Wakefield were done with those animals over a year ago! Given that, why not just report the whole study instead of this little piece of it, which must have been done at least two or three years ago? Are they planning on having data from this study come out in little dribs and drabs? In other words, are they planning on publishing several papers, each consisting of what we call an “MPU” or “minimal publishable unit,” derived from parts of the same study?
Whatever the case, in this particular MPU, what did Hewitson and Wakefield find? Not much, actually, the triumphant crowing of Mark Blaxill at AoA notwithstanding. Basically, Hewitson and Wakefield reported that three of thirteen infant reflexes were delayed in their appearance. Specifically, the root reflex was delayed by one day, the suck reflex by nearly two days, and the snout reflex also by nearly two days. Because they mixed thimerosal into the hepatitis B vaccine and didn’t have a control group receiving thimerosal-free hepatitis B vaccine, Hewitson and Wakefield couldn’t even hazard a guess as to whether the effects observed, even if significant, were due to the vaccine, the thimerosal, or both.
There also appeared to be a confounding factor in gestational age (GA), in that monkeys with lower GA took longer to reach criterion. For example, the authors state:
In general, as GA increased animals reached criterion earlier whereas animals of lower GA were relatively delayed. This effect was only significant when exposure was taken into account.
I really have to wonder whether, in a larger group of completely unvaccinated monkeys, the correlation between the delay in appearance of these reflexes and decreased gestational age would reach significance, no hepatitis B vaccination necessary. The authors try to spin their results as suggesting that lower-GA monkeys are more susceptible to whatever effect it is they think they’re seeing due to Th-HepB, but their arguments are not very convincing–about as convincing as their data, actually, as in not very. In addition, given such small numbers, I always wonder about the validity of carrying out any sort of multivariate analysis. Another point to consider: in this paper, these reflexes are ranked from 0 (absent) to 3 (the highest possible score), and the time to criterion was defined as the time to reach the highest possible score. Again, given the small numbers and the correlation between gestational age and reaching these milestones, I really have to question whether the results in this study, despite being apparently statistically significant if you pool the two control groups, are really behaviorally or biologically significant. If there’s one thing my mentors always taught me, it’s that statistically significant doesn’t necessarily mean significant. This is particularly true since it is not reported whether these delays are prolonged or whether the baby monkeys recover. Finally, there is the question of whether the authors bothered to correct for multiple comparisons. Whenever a large number of comparisons are made, the odds of seeing a “positive” or a correlation by random chance alone increase, and the larger the number of comparisons, the larger the chance that any “hit” observed is a false positive. When multiple comparisons are made, a statistical adjustment, such as a Bonferroni correction, needs to be made to account for them.
In other words, if the authors didn’t correct for multiple comparisons, it’s quite possible–likely, even–that their observed “positives” are in fact false positives.
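To make the multiple-comparisons point concrete, here’s a minimal back-of-the-envelope sketch. The count of thirteen reflexes comes from the paper; everything else (one test per reflex at a conventional 0.05 threshold, all nulls true) is my illustrative assumption, not the authors’ actual analysis:

```python
# Illustrative sketch, NOT the authors' analysis: if 13 reflexes are each
# tested at alpha = 0.05 and every null hypothesis is actually true, the
# chance of at least one spurious "hit" is nearly a coin flip.
n_tests = 13   # one statistical test per reflex examined
alpha = 0.05

family_wise_error = 1 - (1 - alpha) ** n_tests
print(f"Chance of >= 1 false positive across 13 tests: {family_wise_error:.2f}")  # ~0.49

# A Bonferroni correction restores the family-wise error rate by shrinking
# the per-test significance threshold:
bonferroni_alpha = alpha / n_tests
print(f"Bonferroni-corrected per-test alpha: {bonferroni_alpha:.4f}")  # ~0.0038
```

In other words, with thirteen uncorrected tests, three nominal “hits” is roughly what chance alone could be expected to produce.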
Personally, I’m not all that impressed. One reason is that, even if the study shows what the authors claim it shows, so what? Wakefield and Hewitson haven’t shown evidence of long-lasting neurological impact, and they certainly haven’t shown any evidence that the hepatitis B vaccine causes autism, even though you know that’s the subtext of what they are arguing. Moreover, the numbers are really small. I look at monkey studies in much the same way that I look at clinical trials. If a study is worth doing prospectively, it’s worth doing with enough subjects at the outset to provide sufficient power and thus a high likelihood that the question being asked will be answered. If an investigator can’t provide enough subjects, then he shouldn’t do the study.
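To illustrate just how underpowered numbers like these tend to be, here’s a rough power sketch using a normal approximation. The group sizes (13 exposed vs. 7 pooled controls) are from the study; the one-day true delay and the assumed between-animal standard deviation of two days are my hypothetical inputs, not numbers from the paper:

```python
from statistics import NormalDist

def approx_power(delta, sd, n1, n2, alpha=0.05):
    """Approximate power of a two-sided, two-sample comparison of means
    (normal approximation; assumes a common SD in both groups)."""
    se = sd * (1 / n1 + 1 / n2) ** 0.5            # SE of the difference in means
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided critical value
    return 1 - NormalDist().cdf(z_crit - delta / se)

# Hypothetical 1-day true delay, assumed SD of 2 days, n = 13 vs. 7:
print(f"Approximate power: {approx_power(1.0, 2.0, 13, 7):.2f}")  # ~0.19
```

Under those assumptions, the study would detect a real one-day delay less than one time in five; most “significant” findings from such a design are therefore suspect as noise or as inflated estimates of any true effect.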
Another reason I have a problem with this study is that no statistical justification is given for pooling the no-injection group with the saline placebo group. Whenever I see pooling of groups like this, I become very suspicious of a post hoc combining of data, which is always dicey. Indeed, a good rule of thumb is that it’s usually at least a little bit questionable to combine groups like this for purposes of statistical analysis unless the pooling was part of the study design from the beginning, in which case it is still somewhat dicey but not as bad. Presumably the reason two control groups were used was to determine whether simply the pain of a saline injection might have had any effect on the time to criterion for these neurodevelopmental parameters. That’s a scientifically legitimate reason to have two control groups (although one certainly does wonder why they didn’t have a control group receiving thimerosal-free hepatitis B vaccine). But, again, it really makes me wonder whether the investigators pooled the data post hoc. The only reason to do such a post hoc pooling is to convert three groups to two and to add statistical power to the control group. You can be quite confident that, had there been a statistically significant difference between the “vaccinated” group and the saline placebo group and between the “vaccinated” group and the uninjected group, Hewitson and Wakefield would not have pooled the data. In fact, I almost guarantee it. After all, why do something that will lead to scientists questioning the validity of your study’s statistical analysis if you don’t have to? Had there in fact been statistically significant differences between the “vaccinated” group and each of the control groups separately, you can be quite certain that the results would have been reported–shall we say?–unmassaged by the pooling of the two control groups.
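As a rough illustration of why post hoc pooling is tempting, note how combining the two control groups shrinks the standard error of the estimated group difference, which can nudge a borderline comparison across the significance threshold. The group sizes are from the study; the two-day standard deviation is an assumption of mine purely for illustration:

```python
def se_diff(sd, n1, n2):
    """Standard error of a difference between two group means,
    assuming a common between-animal SD."""
    return sd * (1 / n1 + 1 / n2) ** 0.5

sd = 2.0  # assumed between-animal SD in days; illustrative only
print(f"13 exposed vs. 3 no-injection controls: SE = {se_diff(sd, 13, 3):.2f}")  # 1.28
print(f"13 exposed vs. 4 saline controls:       SE = {se_diff(sd, 13, 4):.2f}")  # 1.14
print(f"13 exposed vs. 7 pooled controls:       SE = {se_diff(sd, 13, 7):.2f}")  # 0.94
```

A smaller standard error means a larger test statistic for the same observed difference, which is exactly why pooling decided after the fact, rather than prespecified, is so suspect.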
It’s also rather instructive to look at the original IMFAR abstract, which reported:
Kaplan-Meier survival analyses revealed significant differences between exposed and unexposed animals, with delayed acquisition of root, suck, clasp hand, and clasp foot reflexes. Interaction models examined possible relationships between time-to-acquisition of reflexes, exposure, [3C]DPN binding, and volume. Statistically significant interactions between exposure and time-to-acquisition of reflex on overall levels of binding at T1 and T2 were observed for all 18 reflexes. For all but one (snout), this involved a mean increase in time-to-acquisition of the reflex for exposed animals.
It’s interesting to note that they looked at 18 reflexes then but only reported 13 now. Why did they drop five between then and now? There were more “significant” differences in time to criterion in the “old” study described in the IMFAR abstract, and only two of the reflexes appeared to be consistent between the two studies. Again, I have to ask: Is the experiment reported in this paper a true repeat of the studies in the IMFAR abstracts, or is it simply an “extended” version of the prior study? I think you know which one I suspect. In fact, Wakefield all but admitted it in the interview in the video above.
I also think that this study was a horrible waste of primates, and I can’t imagine what the University of Pittsburgh’s IACUC was thinking when it approved this study. Maybe it’s because, as Mark Blaxill was so happy to inform us, the University of Pittsburgh primate facility is relatively new, and Pitt’s IACUC was not experienced in evaluating primate protocols at the time these experiments were being proposed.
Finally, on a different note, I wonder about the ostensible justification for this study:
Since Th[imerosal]-containing vaccines, including the neonatal HB vaccine, continue to be used routinely in developing countries, continued safety testing is important, particularly for premature and low-birth-weight neonates.
If the authors are so concerned with vaccine reactions and autism in developing countries, then why on earth did they try to mimic the U.S. vaccination schedule? Why did they use the monovalent hepatitis B vaccine, when few countries other than the U.S. do? Most developing countries use a tetravalent, pentavalent, or hexavalent vaccine containing multiple other antigens, such as diphtheria, tetanus, pertussis, IPV, and Hib, in addition to HepB. The hepatitis B vaccine, if given at all, usually isn’t given until at least six weeks of age as part of existing vaccine programs. So, when you come right down to it, this study isn’t even studying what it claims to be looking at or following the rationale that its authors claim as the reason for the study! If it were, it would not be following the U.S. vaccination schedule. In reality, it looks very much as though this study was custom-designed to sow doubt and fear about the birth dose of the hepatitis B vaccine in the United States. That, and it’s almost certainly going to be used as ammunition for legal action and lawsuits. Just wait.
There may also be another objective here. I note that anti-vaccine groups like TACA funded this study, which certainly cost at least $100,000 to do, most likely considerably more than that. Anti-vaccine groups would not have invested so much money if they didn’t expect a payoff. Here’s what I think might be going on. Like all good denialists, anti-vaccine groups and their toady scientists (like Wakefield) want material to sow doubt about the science they deny, in this case the safety and efficacy of childhood vaccines. Small preliminary studies in general have a fairly high likelihood of producing false “positive” results (i.e., showing a correlation where a larger, better-designed study would find none); so funding such studies is likely to produce at least some apparent “hits,” such as this study by Hewitson and Wakefield. Because such studies are small and preliminary, they can’t really settle anything, and the anti-vaccine movement knows it. So anti-vaccine groups like TACA and Generation Rescue will use the results of these small studies as justification for claiming that there is doubt over whether vaccines are safe and, most importantly, that more money is needed to do more and bigger studies. They’ll then get such larger studies funded through the NIH or through the efforts of anti-vaccine sympathizers like Representative Dan Burton. In the meantime, they’ll point to the very existence of such NIH-funded studies as further “evidence” that there is still a scientific controversy over whether vaccines cause autism and milk them for all they’re worth until the larger studies come back negative, as they almost always do.
It’s a very hard strategy to counter, and, unfortunately, it just might work.