Articles

More On Fourteen Studies

Recently my co-blogger David Gorski wrote an excellent analysis of the latest propaganda effort from the anti-vaccine crowd – a website that attempts to deconstruct the fourteen studies most often cited to argue for a lack of association between vaccines and autism. As David pointed out, there are many more than 14 studies that demonstrate this, and no credible studies showing that there is any correlation. David covered some of the 14 studies in question, and today I will discuss one more.

On that anti-vaccine propaganda site J.B. Handley begins his introduction with this logical fallacy:

Of all the remarkable frauds that will one day surround the autism epidemic, perhaps one of the most galling is the simple statement that the “science has spoken” and “vaccines don’t cause autism.” Anytime a public health official or other talking head states this, you can be assured that one of two things is true: they have never read the studies they are talking about, or they are lying through their teeth.

Of course this is a false dichotomy, or forced choice. I personally know many people, David and myself included, who have both read all the studies and are telling the truth when we say they do not support a link between autism and vaccines. It seems to be inconceivable to Mr. Handley that an informed professional could honestly disagree with his opinions – such is the nature of fanaticism.

It is also remarkable that Handley himself quotes many professional, expert, and advisory bodies who have also read the studies and concluded that they overwhelmingly support the conclusion of a lack of correlation between vaccines and autism – including the Centers for Disease Control, the American Academy of Pediatrics, the American Medical Association, the Institute of Medicine, and the March of Dimes. Handley casually and self-servingly assumes that all of the professionals in these organizations are either incompetent or lying.

And keep in mind what it would mean to lie on this issue – Handley believes that many doctors who have chosen the career path of public health are deliberately condemning millions of children to autism simply to avoid admitting past error, because they cannot face the horrible truth, or to receive their Big Pharma kickbacks. It’s no wonder their rhetoric often becomes hysterical – they really believe this is going on. For some reason it is easier for them to believe this astounding, horrible claim than to even consider the possibility that perhaps they have misinterpreted the science and that trained experts who have dedicated their lives to understanding the science may know better. This is what we call the “arrogance of ignorance.”

I wish to add that there are also many scientist and physician bloggers who have taken the time to analyze the data and who agree with the consensus opinion of no link. We have no dog in this hunt. David and I, for example, do not prescribe vaccines in our practices, we do not work for pharmaceutical companies, and we are not involved in litigation – we have none of the conflicts of interest typically cited to discredit otherwise valid studies or opinions. Our only personal stake in this issue, as science bloggers, is our reputations, which are based upon honest and transparent analysis. We have nothing to gain and everything to lose if we are dishonest or sloppy on this issue.

You also cannot legitimately argue, as many often attempt to, that we are just protecting the status quo or doing this as a favor to our colleagues. We have taken up the task of criticizing our colleagues and the status quo whenever we feel it is appropriate. We are in the business of ruffling feathers. Our only stake is in defending something we firmly believe in – science-based medicine.

But the anti-vaccine fanatics simply assume we must be hiding some conflict of interest, or that we are simply incapable of seeing the Truth. That is the paranoid behavior of a cult.

With David’s post for background on the methods used, I will add to his analysis one more of the studies in question.

Madsen 2003 Danish Study

Madsen et al evaluated autism rates in Denmark from 1971 to 2000. From 1961 to 1970 children received 400 micrograms of thimerosal. From 1971 to 1992 they received 250 micrograms of thimerosal. After 1992 all thimerosal was removed from childhood vaccines in Denmark. The study identified 962 children with autism over this period. They found that from 1970 to 1990 there was no change in the incidence of autism. After 1990 autism rates began to increase, which was attributed to expanding diagnosis and surveillance. These numbers generally match the experience in other Western nations.

The authors conclude that there was no association in their study between thimerosal dose and autism rates. This is the same as the experience so far in the US – thimerosal was removed by 2002, and yet autism rates continued to rise without a blip.

The “fourteen studies” site gives this study a score of 1 on their rigged scale. Their main criticism is that in 1994 outpatient records were used in addition to inpatient records to assess autism incidence. By itself this is a legitimate criticism, but it does not invalidate the study as they suggest. This is a potential weakness of retrospective studies – researchers are somewhat dependent on the consistency of the methods used over the years of the study. The authors of this study were completely up front about the changing methods over time and their potential impact on the data.

But anti-vaccine critics miss a couple of very important points. First, if thimerosal were a significant contributor to autism rates then we would expect (as with all toxins) a dose response effect. In 1970 the dose of thimerosal in the Danish vaccine schedule was reduced from 400 to 250 micrograms. This did not result in a decrease in autism rates 3-7 years later as one would predict from the thimerosal hypothesis. Autism rates were stable during this time, and there are no concerns about altering methods of diagnosis or counting during this time.
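To make the dose-response logic concrete, here is a minimal sketch in Python. Only the 400 and 250 microgram doses come from the study; the baseline incidence and attributable fraction are hypothetical numbers chosen purely for illustration. Under a simple linear dose-response, if thimerosal accounted for some fraction of autism cases, the 1970 dose cut should have produced a proportional, visible drop:

```python
# Hypothetical illustration of what a linear dose-response would predict.
# The baseline and attributable fraction are illustrative assumptions,
# NOT data from the Madsen study; only the doses (400 and 250 micrograms)
# come from the Danish schedule described above.

def predicted_incidence(baseline, attributable_fraction, dose, reference_dose):
    """Incidence if a fraction of baseline cases scales linearly with dose."""
    background = baseline * (1 - attributable_fraction)  # cases unrelated to dose
    dose_linked = baseline * attributable_fraction * (dose / reference_dose)
    return background + dose_linked

baseline = 10.0  # hypothetical cases per 100,000 at the 400 microgram schedule

# If thimerosal accounted for, say, 40% of cases, cutting the dose from
# 400 to 250 micrograms (a 37.5% reduction) should cut incidence by 15%:
before = predicted_incidence(baseline, 0.40, dose=400, reference_dose=400)
after = predicted_incidence(baseline, 0.40, dose=250, reference_dose=400)
drop = (before - after) / before  # 0.40 * 0.375 = 0.15
```

With these illustrative assumptions, a 37.5% dose reduction yields a 15% drop in incidence a few years later. Nothing of the sort appears in the Danish data, which is the point.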

Second – Madsen and his co-authors were well aware of the effects of altering counting methods in their study. Therefore they did the following:

In additional analyses we examined data using inpatients only. This was done to elucidate the contribution of the outpatient registration to the change in incidence. The same trend with an increase in the incidence rates from 1990 until the end of the study period was seen.

So they did a reasonable assessment of the effect of adding outpatient to inpatient records by looking at the inpatient data alone, and they found the same trend. This completely invalidates the criticism of this study by the anti-vaccine crowd, which is premised on the assumption that the increasing rates of autism after 1990 were due to the addition of outpatient records.

The bottom line is that this study shows no correlation between changing doses of thimerosal and autism rates. It does reveal an increase in autism rates beginning in the early 1990s resulting from expanded diagnosis and surveillance. It is interesting that the anti-vaccine critics use that very fact to argue that this study is not valid, yet elsewhere they deny that increasing autism rates are due to these factors, because their claim is that the increase in autism rates was due to vaccines. They therefore directly contradict themselves.

In addition to showing a lack of correlation between thimerosal and autism, this study supports the conclusion that the rise in autism rates in the 1990s and beyond is due to changes in the definition of autism and efforts to make the diagnosis in the population. That is the common element between Denmark and the US. Exposure to thimerosal and the vaccine schedule differed between these two countries, and yet autism rates were similar. Thimerosal and vaccines are not the common element in the rise of autism diagnoses – definition and surveillance are. So these data become much more powerful evidence against a link between autism and vaccines when considered in the context of US data.

The “fourteen studies” website declares:

Where is the truth? Like everything else in life, the devil is in the details.

It is indeed.

Posted in: Vaccines


62 thoughts on “More On Fourteen Studies”

  1. Joey says:

    First time commenter. The demolition of antivaccine propaganda by the authors on this blog is invaluable to busy primary care physicians. I reference this particular study almost daily when I am confronted with genuinely concerned but imperfectly informed parents. Although I have read the entire study, it is extremely useful to have evidence-based talking points to counter the flood of misinformation out there.

    In the wiki-editorial spirit of perfecting this helpful post:
    “It’s no wonder their rhetoric often become hysterical…” and “There main criticism is that in 1994 outpatient records were used…”

    Thanks again and keep up the outstanding work.

  2. pec says:

    The increase in autism resulted at least partly from increases in diagnosis in recent decades. Over the same period, the amount of mercury decreased, and then it was removed entirely. So this might lead you to think that mercury in vaccines has no relationship whatsoever to autism.

    Or, if you bother to think a little harder, you might realize that nothing can be determined from this correlational data. Because diagnosis rates increased, it’s impossible to know if the real incidence rate of autism increased or decreased over the period.

    So it looks like you have an emotional bias against the anti-vaccine movement, even if you have no financial interest.

    I am NOT saying I think vaccines have caused some cases of autism. I have no idea. My mind is open, not shut, and the studies you mentioned do not settle the controversy.

    I am a scientific skeptic, not a political advocate for the medical industry (or any other industry).

  3. pec – you didn’t address any of the points I made in the post. Try actually reading it.

  4. SF Mom and Scientist says:

    It is incredible how far these people will go to “prove” that vaccines cause autism. I was talking to someone who was saying that thimerosal caused autism. When I explained that thimerosal had not been in childhood vaccines for several years, and there was no drop in autism rates, he said that doctors were now giving pregnant women vaccines with thimerosal so that the autism rate could continue to rise, so they could cover up the fact that thimerosal causes autism. Unbelievable.

  5. trrll says:

    Or, if you bother to think a little harder, you might realize that nothing can be determined from this correlational data. Because diagnosis rates increased, it’s impossible to know if the real incidence rate of autism increased or decreased over the period.

    I’m curious about your hypothesis, pec. It seems that even if there were two causes for the apparent increase in autism incidence–mercury and increased diagnosis, reduction of one of these influences (mercury exposure) should result in some sort of change in the trend of autism incidence. Why would this not be the case? Are you proposing that diagnosis of autism coincidentally increased just at the same time as thimerosal was reduced, and by just enough to mask the decrease in real incidence of autism? Is there any evidence that diagnosis standards for autism showed an abrupt shift around this time?

    And how about the similar studies in Denmark and California that also found no impact of thimerosal reduction on autism rates? Were those decreases also masked by increases in autism diagnosis – at different times, in different countries? Does this genuinely seem plausible to you?

  6. pec says:

    ” if there were two causes for the apparent increase in autism incidence–mercury and increased diagnosis, reduction of one of these influences (mercury exposure) should result in some sort of change in the trend of autism incidence.”

    You would have to know how much of the increase resulted from increased diagnosis, and then subtract that to determine if the autism rate actually stayed the same when mercury was reduced and then removed.

    But you can’t know how much of the increase resulted from increased diagnosis, so the comparison is worthless.

    The blog author’s point was that autism rates did not decrease when mercury was removed, so, he says, mercury must not have contributed to autism.

    But maybe the reason autism rates did not decrease was that the rate of diagnosis increased, canceling out a possible decrease due to the removal of mercury.

  7. joseph449008 says:

    pec: “Because diagnosis rates increased, it’s impossible to know if the real incidence rate of autism increased or decreased over the period.”

    Even though the diagnosis rates are increasing gradually, sudden removal of a major environmental cause should produce a “blip” in the trend, unless it’s too small a factor to be detected. Is that obvious or not?

    This is what the Cal DDS 3-5 caseload series looks like:

    http://www.autismstreet.org/images/blog/2006oct/cdds2006Q3lg.JPG

    The current Cal DDS 3-5 prevalence is about 40 in 10,000. This is not low, if you consider the eligibility requirements of Cal DDS.

  8. pec says:

    “In 1970 the dose of thimerosal in the Danish vaccine schedule was reduced from 400 to 250 micrograms. This did not result in a decrease in autism rates 3-7 years later as one would predict from the thimerosal hypothesis.”

    If the thimerosal hypothesis were true, there would not necessarily be a decrease in autism rates after the dose was reduced. This is because other variables, such as increasing rates of diagnosis or other sources of toxins, have not been ruled out.

  9. No – that wasn’t my point. Read again. They are:

    - The reduction from 400 to 250 micrograms in 1970 did not result in a decrease in autism. Rates were flat during this time, so no apparent confounding trend.

    - The study shows a lack of correlation, which is true, regardless of why you think that is. But to expand on this a bit, autism rates should have dipped 3-7 years after the removal of thimerosal, which (if thimerosal were a major contributor to autism) should have been detected even on the background of rising diagnostic rates. There was no such dip.

    - These data are strongest when taken in context with US data, which show the same trend at the same time despite significant differences in thimerosal exposure. Meaning – during the 90s autism rates increased similarly in the US and Denmark, despite the fact that thimerosal use was increasing in the US during that time and had recently been stopped in Denmark. Therefore thimerosal is NOT the common element between these two countries, but autism definition and surveillance are.

    The notion that this study is worthless is nonsense, and ignores the specific points I made above.
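    The masking objection can also be tested with a toy model. Everything below is hypothetical (the incidence, ascertainment rates, and dates are illustrative only); the sketch simply shows the shape the observed curve would take if a real step drop in incidence were superimposed on steadily rising diagnosis rates:

```python
# Toy simulation of the "masking" objection: could rising diagnosis rates
# hide a real drop in incidence after thimerosal removal? All numbers here
# are hypothetical, chosen purely to illustrate the shape of the curves.

def observed_rate(year, removal_year, true_drop):
    """Observed diagnoses = improving ascertainment applied to true incidence."""
    true_incidence = 10.0  # hypothetical true cases per 10,000
    if year >= removal_year:
        true_incidence *= (1 - true_drop)  # step drop if thimerosal mattered
    ascertainment = 0.2 + 0.05 * (year - 1985)  # steadily improving diagnosis
    return true_incidence * min(ascertainment, 1.0)

years = range(1986, 2001)
with_effect = [observed_rate(y, removal_year=1995, true_drop=0.4) for y in years]
without_effect = [observed_rate(y, removal_year=1995, true_drop=0.0) for y in years]

# Year-over-year changes: a real 40% drop shows up as a one-year decline
# (a visible kink), even though both curves rise overall.
deltas = [b - a for a, b in zip(with_effect, with_effect[1:])]
```

    Even with diagnosis rates rising throughout, the step drop produces a one-year decline in the otherwise rising observed curve. For the drop to be truly invisible, the diagnosis increase would have to accelerate at exactly the removal date, by exactly the offsetting amount.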

  10. joseph449008 says:

    pec: “But maybe the reason autism rates did not decrease was that the rate of diagnosis increased, canceling out a possible decrease due to the removal of mercury.”

    For that to have been the case, a really sudden and precise increase in diagnoses coinciding with the thimerosal removal period (2000-2002) would need to have occurred.

    While unlikely, your hypothesis is testable. I’ll give you a couple hints:

    (1) Whenever there’s an increase in the recognition of autism in California, you will see that the proportion of autistics with mental retardation drops. Is there an unusual drop in the proportion in the relevant time frame?

    (2) If there was a sudden increase in case-finding and recognition of autism in the period, it probably would’ve been observed in all age cohorts, not only for young children.

  11. pec says:

    Inject baby rats with various doses of thimerosal and check for neurological damage.

Pec, I don’t mean to be harsh, but you’re actually closed-minded, not open-minded as you profess.

    Science is based on open-mindedness. Every writer and contributor here, and others (like myself), have repeatedly asked for data that might, even in the slightest manner, support a link between vaccines and autism. What we get instead is usually appeals to emotion, conspiracies, and, apparently, close-mindedness.

    And as for your baby rat and thimerosal experiment, what’s the hypothesis? What’s the methodology? What will the results tell us? Can neurological damage be assessed in rats in a manner that has relevance to humans? I could go on, but those are the basic points.

  13. pec says:

    A rat study would not be easy to design or interpret. But your correlation studies are full of problems. If we are interested in whether thimerosal can cause neurological damage — and it seems to me we are — then it makes sense to do some kind of experimental research. You have to start somewhere. First look for any signs of neurological damage after a relatively large dose, then keep decreasing the dose. Notice what kinds of neurological damage result.

    I’m sure something like this must have already been done, but the vaccines-can’t-ever-hurt-anyone advocates don’t seem to know or care.

  14. Harriet Hall says:

    Animal studies have been done. I discussed them at http://www.sciencebasedmedicine.org/?p=178. They were either negative or showed effects that were inconsistent between studies and not compatible with the symptoms of autism.

  15. Deetee says:

    @ pec:
    I am unaware of any “vaccines-can’t-ever-hurt-anyone” advocates here. We do frequently say there is no evidence vaccines cause autism, however. If you can’t tell the difference between these 2 quite different standpoints, you are more foolish than I thought you were.

  16. pec says:

    ” They were either negative or showed effects that were inconsistent between studies and not compatible with the symptoms of autism.”

    One was negative, another was positive depending on genetics, and the hamster study was positive. No, these three experiments do not prove that thimerosal causes autism in humans, but they certainly are suggestive that it might cause neurological damage in some humans.

    More research should have been done, instead of concluding thimerosal does not cause autism based on ambiguous correlational data.

    Common sense alone should tell us that injecting mercury into infants would not be a good idea. No, the anti-vaccine people have not proven their case and many of them go too far. But calling them irrational unscientific idiots is not justified.

  17. Scott says:

    The problem with common sense is that it’s very commonly wrong. “Common sense alone should tell us…” is not a good argument, particularly when the actual facts tell us the opposite.

  18. pec says:

    [ “Common sense alone should tell us…” is not a good argument, particularly when the actual facts tell us the opposite.]

    We should not ignore common sense, even though it is not always right. In this case, the “facts” are ambiguous and we have some very good reasons to think injecting neurotoxins into infants is unwise.

    Don’t assume common sense is always wrong!

  19. Scott says:

    I don’t assume common sense is always wrong; I simply go with the facts (which are not at all ambiguous here) when they conflict.

  20. Pliny-the-in-Between says:

    To be honest, the anti-vaccination movement has largely been outside of my area of pursuit so I am regrettably not fluent in the argument. Would it be possible for you to point me toward a few review articles that define the specifics?

  21. Brian Egan says:

    Did anyone else get a “WTF?” notion when you looked at the scales of 0-10 (giving a total range of 0-40), then saw NEGATIVE numbers as ratings next to the articles? This is like saying, “On a scale of 1 to 10, I give you a -8!!! OHHHHHH.” Which is cool if you’re trying to diss someone in a rap battle, but seems fairly nonsensical in an attempting-to-be-scientific evaluation.

    I then just looked at their rating broken down for one study, “Neuropsychological Performance 10 Years After Immunization in Infancy With Thimerosal-Containing Vaccines”

    They broke it down like so:

    “Asked the Right Question: -1

    Ability to Generalize: -1

    Conflict of Interest: 0

    Post-Publication Criticism: 0

    Total Score: -2 (negative number for such extreme fraud)”

    How could you get a -1 for “Asked the right question?” If you didn’t ask the right question – if you asked the absolutely wrong question – that would be 0, right?

    Ah, but wait! It seems the reason they gave negative numbers wasn’t for asking the wrong question or ability to generalize, but rather for “such extreme fraud.” Apparently the question they asked was not only wrong, but fraudulent (which is a weird definition of fraudulent).

    I dunno, I just found this baffling.

  22. Khym Chanur says:

    When I explained that thimerosal had not been in childhood vaccines for several years, and there was no drop in autism rates, he said that doctors were now giving pregnant women vaccines with thimerosal so that the autism rate could continue to rise, so they could cover up the fact that thimerosal causes autism.

    Was it John Best? I hope it was John Best, since it’s scary to think that more than one person could believe something like that.

  23. daedalus2u says:

    The major problem with looking for neurological damage in rats or mice after thimerosal injection, and then attributing autism to similar damage, is that autism is not associated with any known or observed type of neurological damage.

    A major physical symptom of autism, one that is highly and reproducibly observed is a larger brain with more numerous neurons. How does any type of neurological “damage” cause a larger brain with more numerous neurons?

  24. durvit says:

    If Madsen is mentioned then the ‘NEJM suppression of the Suissa letter’ cannot be far behind. Anthony Cox has an account of Epiwonk’s take on the Madsen study and Suissa letter:

    There’s nothing wrong with the Madsen paper that I can see. It’s easy for me to figure out what Suissa did. In Figure 2 of the Madsen paper he divided 263/1,647,504 by 53/482,360 to get an unadjusted relative risk of 1.45 (or autism “45% more likely.”) In other words, a relative risk unadjusted for the confounding effect of age. Suissa then goes on to argue that it’s “somewhat implausible for the adjusted rate ratio to fall below 1, unless the risk profile by age in the unvaccinated group is vastly different than in the vaccinated (effect-modification).” Well, the reason for adjusting for age in the first place was because the risk distribution of unvaccinated children is much younger than that of vaccinated children — this is confounding, not “effect modification.” The rest of Suissa’s argument has the same problem, except it’s compounded by (1) his misunderstanding that you can’t calculate “rates per 100,000 per year” from the Madsen study — Madsen calculated rates per person time, which is what the Young-Geier study should have done. (2) in the Madsen study the n of autism cases vaccinated >20 months age is just too few to quibble about: 30 out of 316.
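    The unadjusted relative risk Epiwonk describes is easy to verify from the counts quoted above. This is just the crude calculation; it says nothing about the age-adjusted result, which is the whole point of the confounding argument:

```python
# Reproducing the unadjusted relative risk from the Figure 2 counts quoted
# above: 263 cases in 1,647,504 person-years (vaccinated) versus 53 cases
# in 482,360 person-years (unvaccinated).
vaccinated_rate = 263 / 1_647_504
unvaccinated_rate = 53 / 482_360
relative_risk = vaccinated_rate / unvaccinated_rate
# relative_risk is about 1.45 -- but only before adjusting for age. The
# unvaccinated group skews much younger, so age confounds this comparison,
# which is why the adjusted estimate differs.
```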

  25. Heather says:

    Pediatric Vaccines Influence Primate Behavior, and Amygdala Growth and Opioid Ligand Binding
    Friday, May 16, 2008: 5:30 PM
    Avize-Morangis (Novotel London West)
    L. Hewitson , Obstetrics, Gynecology and Reproductive Sciences, University of Pittsburgh, Pittsburgh, PA
    B. Lopresti , Radiology, University of Pittsburgh, Pittsburgh, PA
    C. Stott , Thoughtful House Center for Children, Austin, TX
    J. Tomko , Pittsburgh Development Center, University of Pittsburgh, Pittsburgh, PA
    L. Houser , Pittsburgh Development Center, University of Pittsburgh, Pittsburgh, PA
    E. Klein , Division of Laboratory Animal Resources, University of Pittsburgh, Pittsburgh, PA
    C. Castro , Obstetrics, Gynecology and Reproductive Sciences, University of Pittsburgh, Pittsburgh, PA
    G. Sackett , Psychology, Washington National Primate Research Center, Seattle, WA
    S. Gupta , Medicine, Pathology & Laboratory Medicine, University of California – Irvine, Irvine, CA
    D. Atwood , Chemistry, University of Kentucky, Lexington, KY
    L. Blue , Chemistry, University of Kentucky, Lexington, KY
    E. R. White , Chemistry, University of Kentucky, Lexington, KY
    A. Wakefield , Thoughtful House Center for Children, Austin, TX

    Background: Macaques are commonly used in pre-clinical vaccine safety testing, but the combined childhood vaccine regimen, rather than individual vaccines, has not been studied. Childhood vaccines are a possible causal factor in autism, and abnormal behaviors and anomalous amygdala growth are potentially inter-related features of this condition.

    Objectives: The objective of this study was to compare early infant cognition and behavior with amygdala size and opioid binding in rhesus macaques receiving the recommended childhood vaccines (1994-1999), the majority of which contained the bactericidal preservative ethylmercurithiosalicylic acid (thimerosal).

    Methods: Macaques were administered the recommended infant vaccines, adjusted for age and thimerosal dose (exposed; N=13), or saline (unexposed; N=3). Primate development, cognition and social behavior were assessed for both vaccinated and unvaccinated infants using standardized tests developed at the Washington National Primate Research Center. Amygdala growth and binding were measured serially by MRI and by the binding of the non-selective opioid antagonist [11C]diprenorphine, measured by PET, respectively, before (T1) and after (T2) the administration of the measles-mumps-rubella vaccine (MMR).

    Results: Compared with unexposed animals, significant neurodevelopmental deficits were evident for exposed animals in survival reflexes, tests of color discrimination and reversal, and learning sets. Differences in behaviors were observed between exposed and unexposed animals and within the exposed group before and after MMR vaccination. Compared with unexposed animals, exposed animals showed attenuation of amygdala growth and differences in the amygdala binding of [11C]diprenorphine. Interaction models identified significant associations between specific aberrant social and non-social behaviors, isotope binding, and vaccine exposure.

    Conclusions: This animal model, which examines for the first time, behavioral, functional, and neuromorphometric consequences of the childhood vaccine regimen, mimics certain neurological abnormalities of autism. The findings raise important safety issues while providing a potential model for examining aspects of causation and disease pathogenesis in acquired disorders of behavior and development.

    Pediatric Vaccines Influence Primate Behavior, and Brain Stem Volume and Opioid Ligand Binding
    Saturday, May 17, 2008
    Champagne Terrace/Bordeaux (Novotel London West)
    A. Wakefield , Thoughtful House Center for Children, Austin, TX
    C. Stott , Thoughtful House Center for Children, Austin, TX
    B. Lopresti , Radiology, University of Pittsburgh, Pittsburgh, PA
    J. Tomko , Pittsburgh Development Center, University of Pittsburgh, Pittsburgh, PA
    L. Houser , Pittsburgh Development Center, University of Pittsburgh, Pittsburgh, PA
    G. Sackett , Psychology, Washington National Primate Research Center, Seattle, WA
    L. Hewitson , Obstetrics, Gynecology and Reproductive Sciences, University of Pittsburgh, Pittsburgh, PA
    Background:
    Abnormal brainstem structure and function have been reported in children with autism. Opioid receptors play key roles in neuro-ontogeny, are present in brainstem nuclei, and may influence aspects of autism. Childhood vaccines are a possible causal factor in autism and while primates are used in pre-clinical vaccine safety testing, the recommended infant regimen (1994-1999) has not been tested.

    Objectives:

    The objective of this study was to compare brain stem volume and opioid binding in rhesus infants receiving the recommended infant vaccine regimen.

    Methods:

    Rhesus macaques were administered vaccines adjusted for age and thimerosal dose (exposed; N=13), or placebo (unexposed; N=3) from birth onwards. Brainstem volume was measured by quantitative MRI, and binding of the non-selective opioid antagonist [11C]diprenorphine (DPN) was measured by PET, at 2 (T1) and 4 (T2) months of age. Neonatal reflexes and sensorimotor responses were measured in standardized tests for 30 days.

    Results:

    Kaplan-Meier survival analyses revealed significant differences between exposed and unexposed animals, with delayed acquisition of root, suck, clasp hand, and clasp foot reflexes. Interaction models examined possible relationships between time-to-acquisition of reflexes, exposure, [11C]DPN binding, and volume. Statistically significant interactions between exposure and time-to–acquisition of reflex on overall levels of binding at T1 and T2 were observed for all 18 reflexes. For all but one (snout), this involved a mean increase in time-to-acquisition of the reflex for exposed animals. In each model there was also a significant interaction between exposure and MRI volume on overall binding.

    Conclusions:

    This animal model examines the neurological consequences of the childhood vaccine regimen. Functional and neuromorphometric brainstem anomalies were evident in vaccinated animals that may be relevant to some aspects of autism. The findings raise important safety issues while providing a potential animal model for examining aspects of causation and disease pathogenesis in acquired neurodevelopmental disorders.

    Microarray Analysis of GI Tissue in a Macaque Model of the Effects of Infant Vaccination
    Saturday, May 17, 2008
    Champagne Terrace/Bordeaux (Novotel London West)
    S. J. Walker , Institute for Regenerative Medicine, Wake Forest University Health Sciences, Winston-Salem, NC
    E. K. Lobenhofer , Cogenics, a Division of Clinical Data
    E. Klein , Division of Laboratory Animal Resources, University of Pittsburgh, Pittsburgh, PA
    A. Wakefield , Thoughtful House Center for Children, Austin, TX
    L. Hewitson , Obstetrics, Gynecology and Reproductive Sciences, University of Pittsburgh, Pittsburgh, PA
    Background: There has been considerable debate regarding the question of an interaction between childhood vaccinations and adverse sequelae in the gastrointestinal tract, immune system, and central nervous system of some recipients. These systems, either singly or in combination, appear to be adversely affected in many ASD children. Although pre-clinical tests of individual vaccines routinely find the risk/benefit ratio to be low, previously there has not been a study to examine the effects of the comprehensive vaccination regime currently in use for infants.
    Objectives: This study was designed to evaluate potential alterations in normal growth and development resulting from the vaccine regimen that was in use from 1994-1999. Specifically, this portion of the study was to compare the gene expression profiles obtained from gastrointestinal tissue from vaccinated and unvaccinated infants.

    Methods: Infant male macaques were vaccinated (or given saline placebo) using the human vaccination schedule. Dosages and times of administration were adjusted for differences between macaques and humans. Biopsy tissue was collected from the animals at three time points: (1) 10 weeks [pre-MMR1], (2) 14 weeks [post-MMR1] and, (3) 12-15 months [at necropsy]. Whole genome microarray analysis was performed on RNA extracted from the GI tissue from 7 vaccinated and 2 unvaccinated animals at each of these 3 time points (27 samples total).

    Results: Histopathological examination revealed that vaccinated animals exhibited progressively severe chronic active inflammation, whereas unexposed animals did not. Gene expression comparisons between the groups (vaccinated versus unvaccinated) revealed only 120 genes differentially expressed (fc >1.5; log ratio p<0.001) at 10 weeks, whereas there were 450 genes differentially expressed at 14 weeks, and 324 differentially expressed genes between the 2 groups at necropsy.

    Conclusions: We have found many significant differences in the GI tissue gene expression profiles between vaccinated and unvaccinated animals. These differences will be presented and discussed.

  26. Prometheus says:

    “pec” states:

    Common sense alone should tell us that injecting mercury into infants would not be a good idea.

    That may be so – perhaps that’s why thimerosal was removed from children’s vaccines over eight years ago. Nevertheless, that doesn’t address the issue of whether mercury – in any form – can cause autism.

    The fact that mercury causes neurological damage – predictable neurological damage – in a sufficient dose, does not mean that it causes autism.

    Despite a number of essays on the matter, mercury poisoning has several features which are not seen in autism, such as tremor. Tremor is such a common and early feature of mercury poisoning that its presence is used as a screen for low-level mercury exposure.

    Rather than focusing on the fact that this or that specific study of thimerosal or mercury or vaccines hasn’t been done yet, how about looking at the data that suggested the vaccine-autism connection in the first place?

    It was the rise in administrative autism diagnoses in the USDE and Cal DDS data that suggested to some people that thimerosal (and, later, vaccines in general) had to be responsible. Well, those data currently suggest otherwise.

    The administrative autism diagnoses keep going up and up, but thimerosal exposure has gone down to below the 1950s level. And the number of vaccines isn’t going up nearly as fast as (or in concert with) the administrative autism prevalence.

    Maybe it would be better to stop looking down the dead-end alley of vaccines and start looking at other, more promising possibilities.

    Of course, the real scientists have already done that. It’s only the people who don’t understand that there never was good data suggesting a connection who can’t let go. They’ve fallen in love with the “vaccines cause autism” hypothesis and can’t bear to admit that it’s dead.

    Sorry, folks. I’ve had too many of my own hypotheses die to get all emotional about it. It’s part of science – if you can’t let go of a dead hypothesis, you can’t do science. And if you can’t do science, you ought to get out of the way of people who can.

    Prometheus

  27. pec says:

    Yes Prometheus I agree, they should not stay fixated on the vaccine hypothesis and ignore other possibilities. But I am glad they got rid of thimerosal, and I would not assume it has nothing to do with autism, based on the limited available data.

    “How does any type of neurological “damage” cause a larger brain with more numerous neurons?”

    A larger brain with more neurons is not necessarily a better brain. An important aspect of the maturation process involves the pruning of neurons. We had more neurons at age 2 than at any time after, and we weren’t very smart then.

  28. Eric Jackson says:

    Daedalus, there’s an excessive proliferation of neurons in the cerebral cortex, but lower areas, the brainstem, and the amygdala show decreased numbers of cells, and a wide range of other areas show severe disruption of brain architecture.

    These two reviews are rather good, though they’re really at the limits of my ability to understand:

    Pardo CA, Eberhart CG. The neurobiology of autism. Brain Pathol. 2007;17(4):434-47.
    Casanova MF. The neuropathology of autism. Brain Pathol. 2007;17(4):422-33.

    Oh look, JPANDS has shown up again (in reference to the Suissa bit above). And once again failed in basic statistics.

  29. daedalus2u says:

    I agree there are changes in neuroanatomy. Whether those are “damage” depends on what the definition of “damage” is. This is not a trivial issue or me trying to be cute.

    In most of the literature whenever there is a difference from what is “typical”, that is imputed to be “pathological” and hence may be due to “damage”. Savant abilities are deemed to be “pathological” and hence may be due to “damage”, even when those abilities are superior. How “damage” can cause superior abilities has not been suggested, and seems to go against “common sense”.

    If those labels are going to be used, they need to be precisely defined. So far, they have not been precisely defined.

  30. trrll says:

    You would have to know how much of the increase resulted from increased diagnosis, and then subtract that to determine if the autism rate actually stayed the same when mercury was reduced and then removed.

    So your argument here is that any increase in risk due to thimerosal is so small that it is negligible compared to changes in diagnosis standards? Or perhaps even nonexistent? I take it, then, that you agree the increase in autism incidence does not constitute even suggestive evidence that thimerosal plays an appreciable causal role in autism?

    But you can’t know how much of the increase resulted from increased diagnosis, so the comparison is worthless.

    Why does one need to know exactly how much of the increase resulted from increased diagnosis? Given the lack of any effect of decreases in thimerosal on the increased incidence of autism, isn’t it the inescapable conclusion that all, or almost all, of the increase is attributable to increased diagnosis or to some other cause unrelated to thimerosal?

    But maybe the reason autism rates did not decrease was that the rate of diagnosis increased, canceling out a possible decrease due to the removal of mercury

    So let me repeat the question that I asked before: Is it your hypothesis that, purely by coincidence, rates of diagnosis increased at different times in Sweden, Denmark, and California, and that the magnitude and timing of these separate increases in diagnosis was coincidentally just sufficient to mask the reduction due to decreased use of thimerosal?

  31. David Gorski says:

    Heather,

    I’m way ahead of you. I’ve already done a detailed discussion of those two studies (if you can call them that):

    http://www.sciencebasedmedicine.org/?p=100

  32. pec says:

    You tore into those monkey experiments, but found nothing at all wrong with the correlational studies that supposedly rule out any connection between vaccines and autism. It’s always possible that researchers cheated or that their results occurred by chance. You can say that about any research no matter where it’s published — maybe the researchers cheated, maybe their results were due to chance.

    The fact is we don’t know. Maybe the monkey experiments did find something of interest. Or maybe not. But the correlational studies can’t be taken at face value either.

    And it’s always amusing when you complain that small studies have too little power to find a reliable effect, when you’re criticizing a small study that claimed positive results. You don’t realize that a reliable effect with low variance doesn’t need a large N? Anyway, low power increases the chance of type 2 errors, not of type 1 errors.

  33. trrll says:

    You tore into those monkey experiments, but found nothing at all wrong with the correlational studies that supposedly rule out any connection between vaccines and autism. It’s always possible that researchers cheated or that their results occurred by chance. You can say that about any research no matter where it’s published — maybe the researchers cheated, maybe their results were due to chance.

    Yes, this is why it is important for experiments to be repeated. In the population studies of effects of thimerosal on autism incidence, we have three peer-reviewed independent studies, by independent groups of researchers, carried out on large populations in different countries at different times, all with the same result.

    In the case of the monkey studies, we have three very small studies, all carried out by a single research group, measuring physiological parameters not clearly related to autism, and still, nearly a year after the poster presentation, without a peer-reviewed publication.

    So can you see why any real scientist would regard the population studies as pretty much conclusive, and the monkey studies as preliminary at best?

    And it’s always amusing when you complain that small studies have too little power to find a reliable effect, when you’re criticizing a small study that claimed positive results. You don’t realize that a reliable effect with low variance doesn’t need a large N? Anyway, low power increases the chance of type 2 errors, not of type 1 errors.

    Small sample size increases the likelihood of both type 1 and type 2 errors. I don’t think that any scientist familiar with animal experimentation would be convinced by a study with only 3 animals in the control group. Which may have something to do with why the monkey study has not yet achieved a peer-reviewed publication.
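
    trrll's objection to a three-animal control group can be made concrete with a quick power simulation. This is an editorial illustration, not part of the original thread; the group sizes (3 controls vs. 7 treated) are loosely modeled on the monkey study, and a one-standard-deviation shift stands in for a "large" biological effect:

```python
import random
import statistics

random.seed(1)

def pooled_t(a, b):
    """Two-sample pooled-variance t statistic."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * statistics.variance(a) +
           (nb - 1) * statistics.variance(b)) / (na + nb - 2)
    return (statistics.mean(a) - statistics.mean(b)) / (sp2 * (1 / na + 1 / nb)) ** 0.5

def power(n_ctrl, n_treat, effect_sd, trials=20000):
    """Fraction of simulated experiments reaching p < 0.05 (two-sided).

    The critical value 2.306 is Student's t for df = 3 + 7 - 2 = 8,
    so this helper is hardwired to the 3-vs-7 design used below.
    """
    crit = 2.306
    hits = 0
    for _ in range(trials):
        ctrl = [random.gauss(0.0, 1.0) for _ in range(n_ctrl)]
        treat = [random.gauss(effect_sd, 1.0) for _ in range(n_treat)]
        if abs(pooled_t(ctrl, treat)) > crit:
            hits += 1
    return hits / trials

# Even a full 1-SD true effect is detected well under half the time.
p = power(n_ctrl=3, n_treat=7, effect_sd=1.0)
print(f"power to detect a 1-SD effect with n = 3 vs 7: {p:.2f}")
```

    With so little power, a null result from such a design is nearly uninformative, and (as trrll notes below) the "significant" results it does produce are unreliable.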

  34. Eric Jackson says:

    A professor I have a great deal of respect for once mentioned that if N is less than 50, it’s not a study, it’s an idea. While the guiding principles of animal research are to use as few animals as possible, that doesn’t mean using -less- than the number necessary. Primate research is expensive and legally encumbered, yet they didn’t even examine the data from nearly half their sample animals.

    David Gorski’s account is spot on. My experience and training with animal models is quite minimal, but even I can tell these monkey studies are flat out ridiculous, ignorant of standard practices, and just plain sloppily done. If this had been done in mice it would’ve been shocking that such poor quality research even got approval. To see it done in primates makes me wonder who approved such research.

    Daedalus:
    “In most of the literature whenever there is a difference from what is “typical”, that is imputed to be “pathological” and hence may be due to “damage”. Savant abilities are deemed to be “pathological” and hence may be due to “damage”, even when those abilities are superior. How “damage” can cause superior abilities has not been suggested, and seems to go against “common sense”.”

    I’m not quite sure I’m following your statement here. Damage to DNA, for instance, could produce a useful mutation, but the overwhelming majority of the time it’s extremely negative – or at least coupled with some large negatives. It’s a counter example from a somewhat different area, but I think it holds up well enough in this case.

  35. David Gorski says:

    It seems to be inconceivable to Mr. Handley that an informed professional could honestly disagree with his opinions – such is the nature of fanaticism.

    The nature of fanaticism is also to attack the person before the argument of people with whom they disagree, which J.B. Handley has done to me on multiple occasions, as have other cranks. He’s made a concerted effort to poison my “Google reputation,” and a couple of months ago either he, someone from his organization, or someone who read one of his attacks on me tried to get me in trouble with my bosses.

  36. trrll says:

    A professor I have a great deal of respect for once mentioned that if N is less than 50, it’s not a study, it’s an idea.

    This strikes me as a bit extreme. There is no single number. It really depends upon the kind of study and the experimental design. I’ve seen quite a bit of research done with groups of 8-12 rats or mice, and the results have held up well. Particularly when within-subjects designs are possible and the effect size is large, a big group may not be necessary. Still, I can’t imagine any situation in which a control group of 3 animals would be considered acceptable.

  37. joseph449008 says:

    I’m not quite sure I’m following your statement here. Damage to DNA, for instance, could produce a useful mutation, but the overwhelming majority of the time it’s extremely negative – or at least coupled with some large negatives. It’s a counter example from a somewhat different area, but I think it holds up well enough in this case.

    The philosophical argument daedalus2u was alluding to is elaborated in some detail in this paper.

  38. isles says:

    Pliny, here are a couple of backgrounders:

    Vaccines and autism: a tale of shifting hypotheses
    http://www.journals.uchicago.edu/doi/pdf/10.1086/596476?cookieSet=1

    A taxonomy of reasoning flaws in the anti-vaccine movement
    http://www.ncbi.nlm.nih.gov/pubmed/17292515

    Others that may be relevant are listed at http://www.immunize.org/journalarticles/comm_talk.asp.

  39. Eric Jackson says:

    trrll: The n>50 bit was specifically meant for human drug trials, which I should have mentioned. As you can see, I do my best thinking at midnight after 10 hours of tedious lectures.

    But the basic gist of it is being careful about drawing overwhelming conclusions from very small studies.

    I too have seen animal research done very well with relatively small numbers of rodents. However, in a rodent model you have control of diet, social environment, and even a genetically uniform population in most laboratory mouse lines. This allows more precision than you would ever get in a human trial. Likewise, rodents are a far better characterized experimental system than humans, and inexpressibly better than monkeys for almost everything.

    I just find it extremely distasteful to take such a shoddily designed experiment as this to a primate model. I’ll grant that I’m in a University of California school, and even relatively minor sloppiness with animal experiments has been treated with extreme prejudice.

  40. trrll says:

    The n>50 bit was specifically meant for human drug trials. Which I should have mentioned.

    I agree; for human drug trials, 50 is a very small number.

    I just find it extremely distasteful to take such a shoddily designed experiment as this to a primate model. I’ll grant that I’m in a University of California school, and even relatively minor sloppiness with animal experiments has been treated with extreme prejudice.

    It’s hard to see how they could have gotten it through animal approval with only 3 control animals; it would not have been approved where I am, either. I suppose that one could be charitable and assume that they were planning to add more controls. But even that is poor experimental design, as controls are properly done in parallel. It seems more likely that they were not following an approved protocol (which can get a lab suspended from animal work altogether–perhaps another reason why no peer-reviewed publication has been forthcoming?)

  41. pec says:

    [A professor I have a great deal of respect for once mentioned that if N is less than 50, it’s not a study, it’s an idea.

    This strikes me as a bit extreme. There is no single number. It really depends upon the kind of study and the experimental design.]

    I was going to comment on that before — the professor was probably talking about a specific area of research. Some research requires thousands of subjects and trials, while other research is fine with under 20; it all depends on what you are studying and what effect size and variance you expect.

    And you would normally try out a new idea with a very small number of subjects, as a pilot study. There is no reason to waste time and resources on wrong ideas. So the primate studies described were probably just pilots.

    Predictably, one of the blog authors rejects the conclusions partly because of the small N. Of course, he would have enthusiastically accepted the research, no matter how small the N, if it had failed to show an effect. Even though low power greatly increases the chance of false negatives, NOT false positives!

  42. trrll says:

    And you would normally try out a new idea with a very small number of subjects, as a pilot study. There is no reason to waste time and resources on wrong ideas. So the primate studies described were probably just pilots.

    It is very unlikely that a study that was not balanced in numbers between treated and control animals would be approved even as a pilot study. This is bad experimental design, and animal care and use committees exist to prevent animals, and especially primates, from being subjected to trauma–or even seriously annoyed–for research that is unlikely to advance knowledge. In any case, a “pilot” study is not intended for drawing conclusions–just to get an initial idea of whether it is worth going to the trouble and expense of doing a study substantial enough to yield reliable conclusions.

    Predictably, one of the blog authors rejects the conclusions partly because of the small N. Of course, he would have enthusiastically accepted the research, no matter how small the N, if it had failed to show an effect. Even though low power greatly increases the chance of false negatives, NOT false positives!

    You haven’t presented any examples of this. The population studies cited here involve large numbers, repeated in multiple countries, by multiple groups of investigators.

    And you are quite wrong if you think that small numbers do not increase the risk of false positives. Anybody who actually does science will tell you that this is a common experience. There are good reasons why scientists are not convinced by numbers these small, even when the result supports their hypothesis. I’ve lost track of the number of times I’ve had a “statistically significant” result evaporate when the numbers were increased. Anybody who has done real laboratory work will tell you the same story. The statistical models that are typically used to estimate statistical significance assume normal distributions of error. If the error distribution is not perfectly normal, as is frequently the case (particularly when dealing with something as complicated as living animals), then false positives can be more likely than expected based upon the assumption of a normal distribution. As the size of the sample gets larger, this source of error becomes less severe, and it also becomes possible to evaluate whether the statistical model being used to assess significance is correct.

  43. pec says:

    “The population studies cited here involve large numbers, repeated in multiple countries, by multiple groups of investigators.”

    But, as I explained, it is not possible to draw the intended conclusion from this correlational data. Autism did not decrease when mercury was removed from vaccines, or when the amount was decreased. But that does not necessarily mean that mercury in vaccines is harmless or does not sometimes cause autism. The rate of autism was increasing at the same time (because of increasing diagnosis, other environmental toxins, or whatever), so any decrease related to removing mercury could have been offset by that increase.

    It doesn’t matter if all the researchers got the same result, when the data is ambiguous and causation cannot be determined.

  44. pec says:

    And, by the way, a discussion at Neurologica about some research by Dean Radin shows how extremely biased and close-minded that blog’s author is. My comments don’t show up there half the time, so I can’t reply to an astounding assertion he made about the experiments. Radin’s experiments had large Ns and were carefully controlled and well designed. Two of the four reached statistical significance, with very low p values. Another was in the predicted direction but did not meet the cutoff for significance (p was about .1). The last was also in the predicted direction but not near significance.

    The blog author at first misunderstood the point of the experiments and dismissed them as worthless. I carefully explained what the experiments were about and why they should not be ignored.

    He responded by saying the results were “essentially negative,” which is obviously not even close to being true.

    The effect being studied is expected to be small and that’s why there were many trials per subject (a large N). There is NOTHING wrong with studying effects that are small, and using statistics! Radin is not looking for clinical significance — he is investigating an interesting phenomenon.

    Anyone can read the article, which is linked from the blog, and see for themselves that the blog author is being deliberately misleading, and has little or no understanding of what Radin’s research is about, or about experimental research in general.

    When a result falls short of statistical significance it does NOT mean there is no real effect! Especially if other similar experiments, by various researchers, did find an effect! Missing significance often means inadequate power or some measurement problem, or something that can be corrected.

    Experimental research is not simple! It usually requires repeated attempts and adjustments, as well as replication in other labs.

    Saying Radin’s results were essentially negative — when half were extremely positive and NONE were negative — shows a complete lack of interest in truth.

  45. pec says:

    And by the way, that blog’s author has used the same kind of tricks when other experiments he didn’t like were discussed. He thinks all his readers are so ignorant and trusting and biased, they won’t even check to see if his statements are true or not.

  46. Harry says:

    Pec, I got $20 on “That blog’s author” if I have to choose between him and you on who would win in a logic fight. And a fist fight too for that matter…. Steven fights /mean/.

  47. Harry says:

    When a result falls short of statistical significance it does NOT mean there is no real effect! Especially if other similar experiments, by various researchers, did find an effect! Missing significance often means inadequate power or some measurement problem, or something that can be corrected.

    Actually, when there is no statistical significance in the result, that does mean that there is no effect. Or it means that there are 2 effects that exactly cancel each other out. Sometimes there are several effects that, when integrated, only make it appear that there is no statistical significance.

    If other similar experiments by various researchers did find an effect whereas others did not, then it is bad science. Science needs to be reproducible; if a study is not reproducible, then there needs to be an adequate explanation why. With pseudoscience and poorly controlled studies it is not surprising that some studies will find effects due to methodological errors.

    Can we find any common ground? Is there any aspect of pseudoscience that you find doesn’t pass the smell test? Astrology? Acupuncture? Iridology? Faith Healing? If you and I can find some common ground maybe we can work out together the similarities between b.s. that you see and don’t see.

    I’m procrastinating for a test on Monday so I got the time.

  48. pec says:

    Harry, you did not read the article. All 4 experiments were in the predicted direction, 2 were significant to a very high degree, and other researchers have found the same effect.

    There can be many reasons why 2 of the experiments missed significance to some degree. It is entirely unwarranted to accept null in a case like that.

    And saying the results were “essentially negative” is just plain wrong.

  49. trrll says:

    But, as I explained, it is not possible to draw the intended conclusion from this correlational data. Autism did not decrease when mercury was removed from vaccines, or the amount decreased. But that does not necessarily mean that mercury in vaccines is harmless or does not sometimes cause autism. The rate of autism was increasing at the same time (because of increasing diagnosis, other environmental toxins, or whatever), so any decrease related to removing mercury could have been offset by that increase.

    One of the things that research experience teaches is to make your hypotheses explicit, because it is very easy to fool yourself with the sort of vague handwaving that you indulge in here. I’ve asked you twice before, I’ll ask you again: What is it your hypothesis to explain the negative results of these three large population studies? Is it your hypothesis that, purely by coincidence, rates of diagnosis (or exposure to unspecified “other environmental toxins”) increased at different times in Sweden, Denmark, and California, and that the magnitude and timing of these separate increases in diagnosis was coincidentally just sufficient to mask the reduction due to decreased use of thimerosal?

  50. trrll says:

    The effect being studied is expected to be small and that’s why there were many trials per subject (a large N). There is NOTHING wrong with studying effects that are small, and using statistics! Radin is not looking for clinical significance — he is investigating an interesting phenomenon.

    There are good reasons why scientists are skeptical of results for which the effect size is very small. Pretty much anybody who has ever done actual laboratory science has been burned by them: you do an experiment, and you detect a small, but statistically significant signal down close to the level of statistical noise. So you improve your methodology, make everything more reproducible, get the noise down–and your “signal” goes away. Why are small effect sizes so problematical? Because it is never possible to entirely eliminate every possible bias from an experiment. When your signal is down close to the level of noise, even tiny errors or inconsistencies can produce a phantom signal that can be statistically highly significant. It is highly significant because it reflects a genuine difference between the experiment and the control. Unfortunately, it is a genuine, but very small, artifact.
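
    trrll's disappearing signal has a name in the statistics literature: the winner's curse. When power is low, the experiments that happen to cross p < 0.05 are precisely the ones in which noise exaggerated the effect, so "significant" small-N estimates are systematically inflated. A simulation sketch (an editorial illustration with invented numbers, not from the thread), using a true effect of 0.2 SD and 4 subjects per group:

```python
import random
import statistics

random.seed(2)

TRUE_EFFECT = 0.2   # true group difference, in SD units
N = 4               # subjects per group
CRIT = 2.447        # Student's t critical value, df = 2*4 - 2 = 6, alpha = 0.05 two-sided

def experiment():
    """Run one underpowered experiment; return (significant?, observed difference)."""
    a = [random.gauss(0.0, 1.0) for _ in range(N)]
    b = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(N)]
    sp2 = ((N - 1) * statistics.variance(a) +
           (N - 1) * statistics.variance(b)) / (2 * N - 2)
    diff = statistics.mean(b) - statistics.mean(a)
    t = diff / (sp2 * 2 / N) ** 0.5
    return abs(t) > CRIT, diff

# Keep only the experiments that reached "significance" ...
sig_effects = [d for s, d in (experiment() for _ in range(50000)) if s]
# ... and look at the effect sizes they reported.
mean_sig = statistics.mean(abs(d) for d in sig_effects)

print(f"true effect: {TRUE_EFFECT}")
print(f"mean |observed effect| among 'significant' results: {mean_sig:.2f}")
```

    The significant runs report effects several times larger than the truth, which is exactly the "signal" that evaporates when the sample size is increased.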

  51. Wholly Father says:

    pec

    You made these statements in this thread: “low power increases the chance of type 2 errors, not of type 1 errors.” AND “low power greatly increases the chance of false negatives, NOT false positives!”

    Your statements are technically true, but miss the point. The real question is: what is the positive predictive value of rejecting the null hypothesis (positive predictive value is the ratio of true positives to all positives). Consider the extreme example of a study so underpowered that it has zero chance of a true positive. In that case any rejection of the null hypothesis is a false positive.

    For a more sophisticated discussion of this principle read this article:

    http://www.plosmedicine.org/article/info:doi/10.1371/journal.pmed.0020124
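
    The arithmetic behind Wholly Father's point (and the PLoS Medicine article he links, Ioannidis's "Why Most Published Research Findings Are False") can be sketched in a few lines: PPV = (power × prior) / (power × prior + α × (1 − prior)). The power, alpha, and prior figures below are illustrative choices, not taken from any study:

```python
def ppv(power, alpha, prior):
    """Positive predictive value of a 'significant' result.

    power: P(reject H0 | effect is real)
    alpha: P(reject H0 | no effect), the false-positive rate
    prior: fraction of tested hypotheses that are actually true
    """
    true_pos = power * prior
    false_pos = alpha * (1 - prior)
    return true_pos / (true_pos + false_pos)

# Illustrative numbers: 1 in 10 tested hypotheses is true, alpha = 0.05.
well_powered = ppv(power=0.80, alpha=0.05, prior=0.10)   # ~0.64
under_powered = ppv(power=0.10, alpha=0.05, prior=0.10)  # ~0.18
print(f"PPV at 80% power: {well_powered:.2f}")
print(f"PPV at 10% power: {under_powered:.2f}")
```

    At 10% power, most rejections of the null are false positives, which is Wholly Father's point: low power degrades what a "positive" result is worth, even though alpha itself is unchanged.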

  52. dugmaze says:

    Why does this site keep repeating anti-vaccine?

    No one I know in the autism community is anti-vaccine.

    If you want to appear credible, then drop the attacks.

  53. dugmaze says:

    Why did they take thimerosal out of vaccines?

  54. dugmaze says:

    Isn’t the Danish system a universal health care system?

    We can’t make our own studies? If we have to use seven-year-old studies whose coauthors are the country’s only vaccine maker, then what are we going by in other areas of medicine?

    I just became a believer in universal health care.

  55. kim spencer says:

    SF Mom and Scientist:
    My OB/GYN most certainly does make flu shots with thimerosal available to his pregnant patients. He does not offer them to everyone, like lots of doctors, but he did try to get thimerosal-free shots this year and could not get them. It’s happening in most offices. I suggest you call a few and ask. Or even better, go to them and ask to see the package inserts, because lots of them have no idea what’s in the shots. Until you do exactly that, you have no idea what is really going on in the offices that treat pregnant women.

  56. David Gorski says:

    Oh, geez. J.B. Handley, in his characteristic bull-in-a-china-shop idiocy and obvious attempt to poison Steve’s Google reputation, has launched an all-out frontal assault on this article and Steve:

    http://www.ageofautism.com/2009/04/dr-steven-novella-why-is-this-so-hard-to-understand.html

  57. Karl Withakay says:

    dugmaze,

    “No one I know in the autism community is anti-vaccine.

    If you want to appear credible, then drop the attacks.”

    I might believe you if you meant that no one you know personally in the autism community is anti-vaccine, but otherwise if YOU want to appear credible, don’t deny the facts, and please don’t play the “green our vaccines” or “too many too soon” gambits.

    “Why did they take thimerosal out of vaccines?”

    They finally gave up trying to get your crowd to listen to science and pulled it out of vaccines to get you guys to shut up, and they did it out of fear that an unproven, unsupported hypothesis might be true (figuring that it’s better to be safe than sorry).

    There wasn’t any scientifically supported reason to pull thimerosal out of routine childhood vaccinations.

    And the net difference in autism rates since thimerosal was removed? None, so why do some anti-vaxers keep bringing up thimerosal? That horse is an unrecognizable, bloody pulp on the floor.

  58. trrll says:

    Why does this site keep repeating anti-vaccine?

    Because as scientists, we have a tendency to follow Occam’s Razor. So when an organization or group first insists that vaccines cause autism by producing unrecognizable measles infections…and then when that is shown to be false, they claim that autism is actually undiagnosed thimerosal poisoning…and then when thimerosal is reduced and autism continues to climb, they insist that autism is due to “toxins” in vaccines, many of which are normally present in the body at much higher levels, or that the immune system of an infant that responds to huge numbers of environmental antigens is unable to tolerate even a few more…we gravitate to the simplest explanation: these guys are simply anti-vaccination.

  59. Delphi Ote says:

    “But your correlation studies are full of problems. If we are interested in whether thimerosal can cause neurological damage — and it seems to me we are — then it makes sense to do some kind of experimental research. You have to start somewhere.” -pec

    We did start somewhere. Correlation analyses were the only reason anyone ever suspected autism and thimerosal were related in the first place. Stronger and better designed correlation analyses show no trend. There’s no reason for further study. There’s no plausible mechanism for thimerosal to cause neurological damage. Aside from paranoia, there’s no reason to suspect these things are related anymore. The end.

  60. emmy says:

    I registered for this site just so I could say this: thank you for continuing to fight the ignorance. It must be so exhausting. I know because I wore myself out arguing about climate change, ’til I just gave up, realizing that fact-based arguments would never stop the denier-zombies. I have a brother who became mentally ill in his twenties, but before he became very obviously out of touch with reality he just seemed to lose his sense of logic. It was quite mystifying to have a spirited discussion with him during that period because he just didn’t have the power to reason consistently. Fortunately, our disagreements were inconsequential, so I could just nod and smile and walk away. Children’s lives were not at stake. We have had at least one death in my community of a child who was not properly vaccinated, and whose parents profoundly regret it now. We also have undeniable increases in whooping cough and measles in the schools. I don’t understand why so many Americans seem to rely so much more on superstition than on science, but they do. So, anyway, I applaud you. Keep up the fight. You may actually save the lives of some kids.

Comments are closed.