Articles

Terrible Anti-Vaccine Study, Terrible Reporting

One of my goals in writing for this blog is to educate the general public about how to evaluate a scientific study, specifically medical studies. New studies are being reported in the press all the time, and the analysis provided by your average journalist leaves much to be desired. Generally, they fail to put the study into context, often get the bottom line incorrect, and then some headline writer puts a sensationalistic bow on top.

In addition to mediocre science journalism, we also face dedicated ideological groups who go out of their way to spin, distort, and mutilate the scientific literature all in one direction. The anti-vaccine community is a shining example of this – they can dismiss any study whose conclusions they do not like, while promoting any horrible worthless study as long as it casts suspicion on vaccines.

Yesterday on Age of Autism (the propaganda blog for Generation Rescue) Mark Blaxill gave us another example of this, presenting a terrible pilot study as if we could draw any conclusions from it. The study is yet another publication apparently squeezed out of the same data set that Laura Hewitson has been milking for several years now - a study involving macaque infants and vaccinations. In this study Hewitson claims a significant difference in brain maturation between vaccinated and unvaccinated macaque infants, by MRI and PET analysis. Blaxill presents the study without noting any of its crippling limitations, and the commenters predictably gush.

The first (and really only) thing you need to know about this study is that it involves 9 vaccinated monkeys and 2 controls. That’s right – just 2 controls. The fact that Hewitson bothers to do statistical analysis on such a small set of subjects is laughable. Let’s keep in mind that most pilot studies turn out to be wrong – they are called pilot studies because they are intended to point the way to further research, not to serve as a basis for any conclusions. Serious researchers recognize that pilot studies are shots in the dark – and that goes even for good pilot studies, which this is not.

If the outcome were something hard and dramatic – like survival vs. death – then 2 subjects would be a reasonable pilot study. But in this case Hewitson is doing a somewhat tricky measurement of brain volume changes over time and binding of opioid ligands in the amygdala. It is also worth noting that there were originally 4 controls, but one was eliminated due to improper protocol. We never learn what happened to the third monkey; we are just told there is data on two controls. This kind of missing data, especially when the overall numbers are so pathetically low, is very concerning.

She is also making multiple analyses (another red flag by itself), which means she can compare multiple variables looking for any difference. Then she invokes the sharpshooter fallacy and declares any change she does find to be clinically meaningful. So while there is no difference in brain volume or amygdala volume between exposed and unexposed monkeys, she finds differences in the change over time. We don’t know if still other variables were looked at and not reported – this is another weakness of pilot studies and why follow-up studies replicating the specific effects reported are necessary before any conclusions can be drawn.
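The multiple-comparisons problem here is not subtle, and the arithmetic is easy to check. Under the conventional 0.05 significance threshold, the chance of at least one spurious “significant” finding grows quickly with the number of independent comparisons – a quick sketch (the test counts below are illustrative, not taken from the paper):

```python
# Chance of at least one false positive among k independent significance
# tests, each run at the conventional alpha = 0.05 threshold.
alpha = 0.05

for k in (1, 5, 10, 20):
    p_any = 1 - (1 - alpha) ** k
    print(f"{k:2d} tests -> P(at least one spurious hit) = {p_any:.2f}")
# -> 0.05, 0.23, 0.40, 0.64
```

With 20 comparisons the odds are nearly two to one that something crosses the threshold by chance alone – which is exactly why unreported analyses and a lack of pre-specified outcomes are red flags.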

As further evidence of looking for any difference and then declaring it the outcome of interest, we can look back to Hewitson’s 2008 reporting of her monkey data, in which she wrote:

“Compared with unexposed animals, exposed animals showed attenuation of amygdala growth and differences in the amygdala binding of [11C]diprenorphine.”

But in the current study she finds increased amygdala growth in exposed monkeys:

Not surprisingly, given the different maturational trajectories in exposed vs. unexposed animals, (unexposed decreasing and exposed increasing) there was a statistically significant interaction between exposure and time on total amygdala volume (Wald χ2=10.93; P=0.001). However, there were no significant main effects on total amygdala volume of either exposure (Wald χ2=0.75; P=0.39) or time (Wald χ2=1.14; P=0.29).

So which is it? Reading the results of the current study, especially in light of previous publications, gives the overall impression of a random scatter of data with incredible cherry picking in order to make the argument that there are any meaningful results at all.

Taken by itself, this is a worthless study. The number of subjects is too small to do any meaningful analysis. The results are all over the place, and not even consistent with prior publications by the same authors. The analysis is also far-fetched. Hewitson argues that both thimerosal-containing vaccine and MMR (which does not contain thimerosal) contribute to the alleged brain changes she is reporting. While the word “autism” does not appear in her report, Blaxill is concluding in his reporting that these brain changes are the same as those found in autism (an absurd conclusion given how non-specific these changes are, even if real, which cannot be concluded from this study). The anti-vaccine agenda is now clear – they get to have their cake and eat it too. They can now argue that an interaction between thimerosal and MMR causes or contributes to autism, through completely independent mechanisms, apparently.

To put this study further into context, this research is being conducted by the Thoughtful House Center for Children – Andrew Wakefield’s home after he was essentially kicked out of the UK and subsequently struck off. Wakefield’s name, however, does not appear anywhere on the current study, although he was listed as final author on previous publications from the same research. Apparently his name has become too toxic for the Thoughtful House.

The current study also appears in an obscure journal, Acta Neurobiologiae Experimentalis – which dedicated an entire issue to publishing dubious research on autism. The same issue includes two articles by the father-and-son Geier team – other vaccine and autism researchers who are off in their own world and whose research cannot be replicated.

Conclusion

This current study, as well as the entire macaque research program by Hewitson, is a good example of terrible research. The subject numbers are far too small for any meaningful statistics, and the outcomes being followed are numerous and tricky, with a random scatter of results that is not even consistent between different publications of the same research.

What we have is far worse than ideological reporting and spinning of the scientific research – apparently we have the ideological conduct of research in the first place. This is similar to the research program of Benveniste on homeopathy.

In general it is a good rule to be suspicious of research that seems to be unique to one researcher or research team and is out of step with the broader research community. Unfortunately, such research contaminates the literature and is easily exploited to confuse the media and the public who often do not distinguish crank research from legitimate science.

___________________

Crossposted on NeuroLogicaBlog

Others reporting on this study:

Respectful Insolence – Orac also points out that Hewitson failed to disclose her COI – that she has a child with autism who is part of the Autism Omnibus suit.

LB/RB – Author Sullivan also points out that amygdala size should increase in macaques, so it is especially odd that the non-exposed monkeys’ amygdalas shrank. That makes no sense, and is likely due to the quirkiness of having only two controls. So the authors’ conclusions are entirely based upon a weird result in their tiny control group – i.e. this is completely bunk science.

Posted in: Science and Medicine, Vaccines


25 thoughts on “Terrible Anti-Vaccine Study, Terrible Reporting”

  1. Matt says:

    This really bothers me. These data weren’t worth taking the time to write up, much less publish.

    Missing from the paper is any mention of Hewitson’s conflict of interest. She has a claim before the vaccine court. Perhaps the journal doesn’t require her to note this COI.

    But it is imperative that someone with such a major COI perform excellent research. Nonsense such as this only feeds the impression that this team is willing to stretch results to fit a predetermined conclusion.

  2. sheldon101 says:

    The ethical failure of the journal.
    ———

    What strikes me is that the journal that published the paper failed ethically.

    Quite clearly, Wakefield was involved in this paper as much as he was involved in the 2009 Neurotoxicology paper that was withdrawn because of his involvement.

    So he needed to be named as an author on this paper. The journal only had two ethical choices. One, not publish the paper because of Wakefield’s involvement as an author.

    Two, publish the paper with Wakefield as a named author. The only plausible reason for not doing so is that they knew it would be wrong to publish a paper with Wakefield named as an author, especially after the GMC finding of January 2010.

    Instead, they played it sneaky and left his name off.

    Shame.

    BTW, the journal states
    “Acta Neurobiologiae Experimentalis is a fully peer-reviewed quarterly with an international board of editors that publishes manuscripts submitted by authors from all over the world. It is indexed by the Institute of Scientific Information, Philadelphia USA (ISI 2009 impact factor 1.337 and Citation Halflife 6.7 years). ANE is subscribed by Libraries and individuals from over 20 countries. Its circulation varies from 300 (regular issues) to 950 (conference issues).”

    How good (or bad) is a 1.337 rating?

  3. Scott says:

    @sheldon:

    It’s clear to US that Wakefield was heavily involved. It may not have been clear to the journal (beyond the “special thanks” he got); not everybody’s familiar with the personalities involved.

    The entire issue being “Antivax’s Greatest Hits” makes this less likely, but there’s a dearth of hard evidence to support your charge.

  4. Todd W. says:

    @sheldon101

    The journal’s Instructions for Authors (PDF link) does not have anything explicit about conflicts of interest. The closest they come is the following:

    A statement describing the source of financial support may then be indicated, if appropriate.

    The journal apparently makes no effort to require authors to disclose conflicts of a nature like Hewitson’s, even though such an interest plays very strongly into assessing the likely validity of the study results.

  5. Todd W. says:

    Whoops. That should’ve been @Matt.

    As to Wakefield, has anyone thought to perhaps forward to ANE the prior monkey studies that have Wakefield’s name attached and point out that he was, quite likely, heavily involved in the study and should have been listed as an author?

  6. BKsea says:

    Scott: “not everybody’s familiar with the personalities involved.”

    This was published in a special issue on autism, presumably by a guest editor who specializes in autism research. I find it hard to believe such a person would be unfamiliar with Wakefield and the implications of publishing this study.

    Also, in my experience, papers for special issues are often reviewed by others submitting to the same special issue. Perhaps the Geiers or some of the other “esteemed” authors passed this for publication.

    I feel sorry for the publishers of this journal as whatever reputation it had will likely go the same direction as Medical Hypotheses.

  7. Matt says:

    BKsea,

    according to the journal “The idea for this topic came from Professor Dorota Majewska”, who apparently is a signatory to the “We support Andy Wakefield” online petition.

    Her work focuses on autism–
    http://www.ipin.edu.pl/autism/index.html

    She hosted a conference on vaccines and autism in 2008,
    http://www.ipin.edu.pl/autism/page08.html

    The Geiers and DeSoto were presenters. They are also authors in this special issue of the journal.

    She holds the EU Marie Curie Chair. Sorry if I feel the esteemed Madame Curie’s name is not well applied.

  8. aeauooo says:

    I found myself snickering as I read your article, broke out in a belly-laugh when I read, “this research is being conducted by the Thoughtful House Center for Children,” and was momentarily stunned when I read, “The same issue includes two articles by the father and son Geier team – other vaccine and autism researchers who are off in their own world and whose research cannot be replicated.”

    I’ve been putting together a PowerPoint presentation on the 2009 H1N1 pandemic and realized that I had a hardcopy of a study on my desk by M.R. & D.A. Geier (2003) on influenza vaccination and Guillain-Barré syndrome.

    I had already decided not to use the Geier et al. study in my presentation, not because I didn’t like the results (I hadn’t read it), but because I had already cited a number of more recent studies.

    The authors of the IOM report, “Immunization safety review: influenza vaccines and neurological complications” wrote, “Using VAERS data from 1990-1999, Geier and colleagues (2003) compared the occurrence of GBS in those who received the influenza vaccines and those who received the tetanus-diphtheria adult vaccine (Td). Numerous flaws in the study methods (unknown if vaccination status was validated, if participants received both vaccines, and if GBS was diagnosed by a neurologist) and limitations in VAERS data affect the validity and precision of the risk estimates calculated and the ability of the study’s findings to contribute to causality” (I hadn’t read the IOM report in its entirety either).

  9. Jann Bellamy says:

    The article states: “Special thanks to Dr. Andrew
    Wakefield for assistance with study design and for critical
    review of this manuscript.”

    Some questions:
    If one “assists with study design” shouldn’t that make him an author automatically? Is there a choice in journal article writing as to whether one does or does not claim author status, no matter what the level of participation?
    Do OB/GYNs often do research in neurology? Isn’t there some level of competency in the field under study required to get your hands on monkeys for research?

    I think I figured out what happened to the third monkey. Its mother figured out these guys were idiots and pulled her baby from the study.

  10. sheldon101 says:

    I sent an email to the contact address for the journal.

    It wasn’t very nice. But it does give them an opportunity to give a response regarding Wakefield’s role.

    It isn’t hidden.

    “Acknowledgments
    We thank Drs. Saverio Capuano and Mario Rodriguez
    for veterinary assistance; and Dr. David Atwood, Carrie
    Redinger, Dave McFarland, Amanda Dettmer, Steven
    Kendro, Nicole DeBlasio, Melanie O’Malley and Megan
    Rufle for technical support. Special thanks to Dr. Andrew
    Wakefield for assistance with study design and for critical
    review of this manuscript; and to Troy and Charlie Ball
    and Robert Sawyer.”

  11. Matt says:

    “a critical review of this manuscript”

    Really?!?

  12. Wholly Father says:

    Some thoughts after perusing the manuscript.

    They used some mighty fancy statistics for an embarrassingly underpowered study. (If you can’t dazzle them with brilliance, dazzle them with statistics.) This really smells of post-hoc data dredging and anomaly hunting. They describe in great detail tests for normality of the data. How can one even think about normality when the control group has only 2 subjects and testing was done at only 2 time points? Any statisticians out there who can give an expert review of the methodology used in this paper?

    Some data are conspicuously missing. What was the overall stature (size, weight) of the monkeys at baseline, time 1 and time 2? Ditto for brain imaging studies at baseline. Perhaps all the differences were epiphenomena due to differences in overall stature and growth rate between the two groups.

    Look at Table IIa. Sometimes T2 was subtracted from T1 and sometimes T1 was subtracted from T2. This is probably just bad proofreading, but it does make one wonder.

  13. Dr Benway says:

    Poor sad monkeys who died for no reason.

    I’m ok with killing critters so long as the species is in no danger and some higher purpose will be served.

    Oh this reminds me of another issue I never hear discussed (sorry for the OT):

    What’s up with catch-and-release fishing? If you’re not gonna eat the fish, what are you doing? We know fish feel pain –they will self-administer opiates when injured. So how is torturing them for no reason any sort of respectable sport?

    Eat the damn fish and be grateful for the protein.

    Sometimes I throw a few wheat puffs to the hand-sized bluegills at the end of our dock. So now whenever I’m down there, a crowd of about 20 googly-eyed faces collect near my feet. It’s pretty funny, actually.

    If I shift my position so I’m looking toward the other side of the dock, the crowd migrates around. They do seem to have a primitive theory of mind.

    Fish are cool.

    Monkeys are even more amazing.

    :(

  14. TsuDhoNimh says:

    Did they bungle and kill the third control monkey?

    And how many more publications can they dredge out of this pathetic data set?

  15. Draal says:

    @sheldon101
    An impact factor of 1.337 is pitiful. It means no one really cites it. I’d have to look up the formula but it’s roughly the number of citations of that journal’s articles per year divided by the number of articles the journal published per year. From personal experience, an impact factor of 5 is a fairly competitive journal. Science and Nature have impact factors >25.
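    For what it’s worth, the standard two-year impact factor is defined as the citations a journal receives in a given year to items it published in the previous two years, divided by the number of citable items it published in those two years. A sketch with made-up counts (the numbers below are hypothetical, chosen only to land near ANE’s reported 1.337):

```python
# Two-year impact factor: citations in year Y to articles from years Y-1 and
# Y-2, divided by the number of citable items published in Y-1 and Y-2.
# Both counts below are hypothetical, for illustration only.
citations_to_prior_two_years = 268
citable_items_prior_two_years = 200

impact_factor = citations_to_prior_two_years / citable_items_prior_two_years
print(round(impact_factor, 3))  # -> 1.34
```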

    The acknowledgments section is a place to thank those who helped in some meaningful way but not enough to contribute to the actual research to warrant an authorship. A coffee break with a colleague where ideas are exchanged does not equal a name on a paper. Should a manufacturer’s instruction manual be named if a particular assay was performed according to the instructions? It’s up to the discretion of the principal investigator who gets their name on the paper and the order in which they appear. The first name is the person who contributed the most work.

  16. Draal says:

    I think it’s time a post was written that demystifies the “peer-reviewed” process. First and foremost, it’s a human endeavor; COIs, personality conflicts and luck are just some of the contributing factors to the whole process. For example, a researcher submits an article to a journal. One of the editors reads it and decides if it warrants being peer-reviewed. The editor has the authority to reject the paper outright, usually on the basis that the paper’s subject matter does not match the subject focus of the journal. But perhaps the researcher pissed off a guy who the editor knows; then the editor has an opportunity for retribution. I know this from firsthand experience. Or perhaps the editor is buddies with the researcher and the article is accepted for peer review based on that. The researcher then has the opportunity to suggest who will be the peer reviewers. He can even request that certain people NOT review the paper; research is competitive and a competitor will certainly scour the paper for any flaws to justify rejecting the paper. Even after the peer-review process, the editor still has discretion over whether an article is printed or not. The peer review process is supposed to guide the editor’s decision.

    Journals do not just pop out of thin air. They are started by a guy or group of guys who feel that an area of study is underrepresented. They have a lot of control over what gets published and what doesn’t. If the founding editor has a student that is now a researcher at so-and-so university, there’s definitely an opportunity for favoritism to publish the former student’s research. If blatantly bad articles repeatedly appear in a journal, it speaks volumes about the standards and motivations of the journal.

  17. research says:

    @ draal

    A researcher can request who will peer review his/her submission? I was unaware of that.

    I am new to research (a grad student) and what I have witnessed thus far is that reviewers are chosen by the journal (because they are experts in that area), but if your research contradicts the reviewer’s own findings, that is motivation for them to find reason to reject your submission. Peer review, imo, is extremely biased.

  18. hat_eater says:

    As to the person who proposed the topic of the issue, prof. Majewska gained some notoriety here in Poland in 2008 when she returned from the USA after spending over 20 years there, but since the initial splash she seems to have been dropped by the media. Amazingly, the journalists apparently saw through her titles.
    Oh, and read about the MC Chair here:
    http://ec.europa.eu/research/fp6/mariecurie-actions/pdf/faq_exc_en.pdf
    It seems once a uni secures the contract, they can hire anyone, as long as she can dazzle the committee with her publication list.
    “Q: Does the Commission carry out pre-checks, on a one-to-one basis, regarding a candidates’ level of excellence or eligibility in general?
    A: No, it is up to a candidate to present her/his research experience in a way that will satisfy the external evaluators that he/she complies with the minimum requirements.”

  19. Draal on peer review: “If blatantly bad articles repeatedly appears in a journal, it speaks volumes about the standards and motivations of the journal.”

    Yes, which is one of the reasons some journals have more prestige than others.

    What is your new and improved method that will enable journals to maintain consistently high standards without the input of editors or scientists?

  20. Draal says:

    “A researcher can request who will peer review his/her submission? I was unaware of that.”

    Because there are so many research fields and subfields, and sub-subfields, editors may not know who is at the forefront on every topic. The researcher submitting the article should know who’s who in their research. Therefore, by providing a list of potential peer-reviewers, it helps the editor out. It’s up to the editor to vet the potential peer reviewers, though, and the editor has no obligation to choose any of the suggested peer-reviewers.

    “What is your new and improved method that will enable journals to maintain consistently high standards without the input of editors or scientists?”

    My point was that the peer-review process is not a perfect endeavor; I was getting the impression that on this blog, some commenters do not fully understand the process and possibly view it with rose-tinted glasses. I agree with Dr. Novella’s take on the peer-review process as expressed on an SGU podcast: it’s not perfect but, on the whole, it works and self-corrects.

    Not all journals are created equal – there are great journals and there are bad journals. Researchers generally strive to publish in well-respected journals. In part, future funding is dependent on the quality of prior publications by a researcher. The journal’s Impact Factor is one such metric by which a journal is rated.

    I disagree with daedeus’ view that the I.F. should not be used and individual reviewers should not rely on IF to assess the quality of the researcher’s prior publications; that the reviewer should assess the quality of individual articles for himself. In an ideal world, sure, sounds great. But in practice, it’s not practical. Reviewers are in charge of reading dozens upon dozens of grant proposals for any given grant review session. A reviewer can attend multiple review panels a year. They have their own research to deal with and have their own grants to write. It’s just not practical to read every article published by a researcher for every grant application. The IF gives a quick and dirty way to assess the quality of the researcher’s prior work. The reviewer can still look up individual articles but don’t expect him to read everything. Plus, it’s only one criterion that a grant proposal is graded on. Also be aware that the reviewer is only minimally compensated for his attendance at a review session; usually, travel expenses, lodging and ~$50/day.

  21. Draal says:

    @ research
    “but if your research contradicts the reviewer’s own findings, that is motivation for them to find reason to reject your submission. Peer reviewed, imo is extremely biased.”

    Happens all the time. That’s why you can request that certain individuals are not chosen to review your proposal. Generally, the journal editor will respect the researcher’s request.

  22. Draal says:

    ” “a critical review of this manuscript”

    Really?!?”

    I interpret “a critical review” as someone who reads over an article slated for submission and proof edits it and such. It’s a really good idea to have someone else proof edit it to catch errors, suggest improvements, etc. It’s common practice to acknowledge those who help with the proofreading. I’m not saying Andrew Wakefield did a good job at critically reviewing but I don’t think it’s warranted to automatically assume that his involvement was more than as a proofreader. In context, it’s a possibility he had more of a role but I wouldn’t say that was exactly what happened without further evidence.

  23. wannabeer says:

    Consider me a member of the general populace who feels better educated from reading this blog. I stumbled across it when doing a little background research on a product being shilled by someone to whom my company is leasing space. Duly schooled, I’ve since become sort of addicted to this site and much more skeptical of media reporting relating to health and medical matters. Hell, I’m more skeptical of everything, walking around the house demanding proof of this and that. Well, not really.

    (Incidentally, I have casually met Dr. Offit and had just read his book “Autism’s False Prophets” before finding this site. I was thoroughly impressed with that work, and thought it a happy coincidence when it turned out this site similarly skewered the quacks).

    Anyway, good on you, keep it up and thanks.

  24. ama says:

    Hi, Steven, hi, folks,

    this is to inform you about a new approach to bring together people across national and language barriers.

    A new Net-wide Blog Archive, the “Antares Ring”, has been started at

    http://kinder-und-jugendaerzte.at

    The reason is that blogs like Science-based Medicine and sites like
    kwakzalverij.nl are too isolated. The RSS news only flash up for a short time, then vanish.

    The idea of the Antares Ring is TO KEEP some abstract lines or the beginning of articles in a separate blog, which does only this. So, when someone is looking for something new, he can see the incoming news like with a ticker – but in contrast to a ticker, old entries do not vanish.

    Posting is done by email. The details you will get from Aribert Deckers (http://www.Journalist.is), the person behind the Antares Ring.

    Regards,
    ama

  25. joseph449008 says:

    “So while there is no difference in brain volume or amygdala volume between exposed and unexposed monkeys, she finds differences in the change over time.”

    My understanding was there was no statistically significant difference in change over time either.

    The trend was significant for one group, and not the other. This, however, doesn’t mean that there was a statistically significant difference between the group trends.

    In other words, you can have two time series, one with a significant trend, and the other without, and still have no significant difference between the trends. An example would be trivial to produce.
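    This point is easy to demonstrate with toy numbers (everything below is invented purely for illustration). Fit an ordinary least-squares slope to each of two short series, test each slope against zero, then test the difference between the slopes: the low-noise series shows a clearly “significant” trend and the noisy one does not, yet the two slopes are statistically indistinguishable from each other.

```python
import math

def slope_and_se(x, y):
    """Ordinary least-squares slope and its standard error."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
    # residual variance with n - 2 degrees of freedom
    ss_res = sum((yi - (my + b * (xi - mx))) ** 2 for xi, yi in zip(x, y))
    return b, math.sqrt(ss_res / (n - 2) / sxx)

x = list(range(8))                                           # 8 time points
y_tight = [0.05, 0.45, 1.05, 1.45, 2.05, 2.45, 3.05, 3.45]   # low noise
y_noisy = [2, 0, 3, 1, 4, 2, 5, 3]                           # high noise, similar slope

b1, se1 = slope_and_se(x, y_tight)
b2, se2 = slope_and_se(x, y_noisy)

t1 = b1 / se1                                    # trend test, tight series
t2 = b2 / se2                                    # trend test, noisy series
t_diff = (b1 - b2) / math.sqrt(se1**2 + se2**2)  # test of the slope DIFFERENCE

crit = 2.447  # two-sided 5% critical t-value for df = 6
print(abs(t1) > crit, abs(t2) > crit, abs(t_diff) > crit)  # -> True False False
```

    So one series has a “significant” trend and the other does not, yet the difference between the trends is nowhere near significance (the difference test strictly has more degrees of freedom than df = 6, but the statistic is so small it misses any reasonable threshold) – the pattern the paper leans on proves nothing by itself.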

Comments are closed.