
On April 30, outside the courthouse in Dallas, a press conference/rally was held. This particular rally was in response to a new study published by a group led by Dr. Raymond F. Palmer in the Department of Family and Community Medicine at the University of Texas Health Science Center in San Antonio, whose conclusion was that autism prevalence correlates strongly with proximity to mercury-emitting coal-burning power plants and other industrial sources of airborne mercury, the implication being that such sources of mercury may be causal or contributory to the development of autism. Unfortunately, the rally was reported by the media as though this study were slam-dunk evidence that environmental mercury is a definite contributor to the development of autism. For example, there is some video (also here) from local news sources of the rally, in the first of which it is stated as fact that mercury caused autism in the child featured in the story and in the second of which a mother who thinks that mercury causes autism is quoted credulously. This study has had much less play in the national news, but antivaccination activists, such as the ones at the Age of Autism website, a site whose main theme is that either mercury in the thimerosal preservative that used to be in childhood vaccines before 2002 or vaccines themselves cause autism, both promoted the rally and posted a glowing and credulous take on the study, as did the “alternative medicine” and antivaccinationist website NaturalNews.com.

My first thought upon reading of this was that it is yet more vindication of the science showing that the claim that mercury in thimerosal-containing vaccines causes autism is a failed hypothesis. After all, as I have predicted time and time again, as the scientific and epidemiological evidence continued to mount that thimerosal is just plain not associated with autism or autism spectrum disorders, even the most diehard adherents to this belief are starting to realize that they were backing a losing horse, especially since thimerosal was removed from all childhood vaccines other than the flu vaccine in 2001, leaving only trace amounts from the manufacturing process, and there is no sign that autism prevalence is falling. That’s why, lately, their effort has shifted from primarily demonizing mercury to blaming other “toxins” in vaccines, even to the point that their efforts to demonize some ingredient–any ingredient–in vaccines often reach ridiculous levels of blatant silliness, such as touting sucrose as one of those “toxins.” Indeed, I was puzzled. If environmental mercury is the new cause of autism, then the rationale antivaccinationists use to demonize vaccines and portray their children as “vaccine-damaged” becomes much less potent. Why on earth would they tout this study, which, even if it were a good study (and it’s not), would weaken their arguments against vaccines immeasurably and take power away from their whole new propaganda slogan “Green Our Vaccines”? The only reason I could think of is that perhaps they somehow think that if mercury in the environment can be linked to autism, then maybe–just maybe–they can convince people that they were right about mercury in vaccines all along. Indeed, this seems to be the sort of tack that David Kirby took a year ago when he started arguing that mercury emissions from coal-burning power plants in China (which do reach California), coupled with mercury emissions from crematoria in which cadavers with mercury fillings were burned, were contributing to the continued increase in the autism caseload in California despite the elimination of thimerosal in 2001.

But what does the study itself say? Is it good evidence that airborne mercury from coal-fired power plants is an important contributor to the development of autism? I will argue no, because the study’s flaws are so numerous that it is well nigh uninterpretable. For simplicity’s sake, to summarize its findings, I’ll quote a Science Daily press release about it:

A newly published study of Texas school district data and industrial mercury-release data, conducted by researchers at The University of Texas Health Science Center at San Antonio, indeed shows a statistically significant link between pounds of industrial release of mercury and increased autism rates. It also shows–for the first time in scientific literature–a statistically significant association between autism risk and distance from the mercury source.

“This is not a definitive study, but just one more that furthers the association between environmental mercury and autism,” said lead author Raymond F. Palmer, Ph.D., associate professor of family and community medicine at the UT Health Science Center San Antonio. The article is in the journal Health & Place.
Dr. Palmer, Stephen Blanchard, Ph.D., of Our Lady of the Lake University in San Antonio and Robert Wood of the UT Health Science Center found that community autism prevalence is reduced by 1 percent to 2 percent with each 10 miles of distance from the pollution source.

“This study was not designed to understand which individuals in the population are at risk due to mercury exposure,” Dr. Palmer said. “However, it does suggest generally that there is greater autism risk closer to the polluting source.”
The study should encourage further investigations designed to determine the multiple routes of mercury exposure. “The effects of persistent, low-dose exposure to mercury pollution, in addition to fish consumption, deserve attention,” Dr. Palmer said. “Ultimately, we will want to know who in the general population is at greatest risk based on genetic susceptibilities such as subtle deficits in the ability to detoxify heavy metals.”

The new study findings are consistent with a host of other studies that confirm higher amounts of mercury in plants, animals and humans the closer they are to the pollution source. The price on children may be the highest.
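Before digging into the methodology, it is worth putting that headline number in perspective. The following back-of-the-envelope calculation is my own illustration, not anything from the paper: if prevalence really did decline multiplicatively by 1 to 2 percent per 10 miles, the implied prevalence ratios even 100 miles from a source would be fairly modest.

```python
# Back-of-the-envelope illustration (mine, not the paper's): what a 1-2% decline in
# prevalence per 10 miles implies at larger distances under a multiplicative model.
for decline_per_10mi in (0.01, 0.02):
    for miles in (10, 50, 100):
        rate_ratio = (1.0 - decline_per_10mi) ** (miles / 10.0)
        print(f"{decline_per_10mi:.0%} per 10 mi, {miles:3d} miles out: "
              f"prevalence ratio ~ {rate_ratio:.2f}")
```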

Now, let’s take a look at the reasons why I consider this to be a very poor study and equally poor evidence for a link between proximity to coal-burning power plants and other industrial mercury sources and autism prevalence. First, you should know that the present study is a followup to a widely criticized study that Dr. Palmer published in 2006 [1]. That study purported to show the same thing but was viewed as uninterpretable for a variety of reasons, its most glaring flaw being that it failed to control for the urbanicity of the populations being studied. Particularly harsh was Thomas A. Lewandowski:

Lastly, the authors found that the most important determining factor for autism prevalence in their study was whether the child lived in an urban, suburban, or rural area. For example, residence in an urban school district resulted in a 473% higher rate of autism compared to rural districts. Similar findings have been reported by others (e.g., Deb and Prasad, 1994). The urbanization effect is nearly 8 times stronger than the effect suggested for mercury but is given relatively little discussion and is not even noted in the abstract. Since levels of many pollutants (including mercury) would be strongly correlated with urbanization/industrialization, this also leads one to question the mercury-autism association the authors report. More detail on the impact of residence would have been helpful. Was one particular urban area (e.g., Dallas, Houston, San Antonio) responsible for the effect? Did the authors explore how data for other chemicals correlated with autism incidence? Certainly a host of environmental and social variables associated with urbanization could be investigated as possible factors in autism. Alternatively, an increased tendency for diagnosis in urban localities could explain at least part of the increased incidence.

Dr. Palmer’s 2006 study found a far larger correlation between urbanicity and autism prevalence than anything that proximity to sources of mercury emission could account for and was so full of holes that even antivaccinationists had a hard time defending it. Indeed, when I went back to look at this study, of which I had previously been unaware, I got the distinct impression that Palmer is a man with an axe to grind. This new study [2], clearly carried out to answer that major criticism, does nothing to change my mind. The first thing I noted upon reading the introduction was that Dr. Palmer approvingly cites the infamous “baby hair mercury study” (Holmes et al.), and from that I knew right away where he is probably coming from. That particular paper, a favorite of the mercury militia wing of the antivaccinationist movement, was a load of poorly designed garbage bordering on, if not actually, pseudoscience. He also approvingly and uncritically cites the even worse Bradstreet et al. study claiming to show that autistic children excrete more mercury in the urine. If you want to get an idea of just how bad that study is, consider that it was co-authored with Mark and David Geier. (Say no more.) Where on earth were the peer reviewers? So let’s look at this study in more depth and decide if it’s worth taking seriously. There are two huge flaws in this study, namely how autism prevalence was calculated and how urbanicity was controlled for, as well as a depressingly large number of lesser ones.

Autism prevalence and the study hypothesis

Whenever looking at a study, it’s very useful to look at the hypothesis and then decide whether the methods and data analysis are appropriate and adequate to answer the question asked in the hypothesis. Indeed, whenever I write a research paper or a grant, I always include in the introduction a paragraph starting with something along the lines of “we hypothesized that” or “the hypothesis we plan to test is that,” followed by a statement and justification of the hypothesis being tested (or, in the case of a grant, to be tested). Perusing the Palmer et al. article, the closest I could find to a statement of hypothesis was this:

The objective of the current study is to determine if proximity to major sources of mercury pollution is related to autism prevalence rates.

Fair enough, as far as the statement goes, although it is somewhat vague. When reading a paper, though, it’s also important to see if the authors’ statement of hypothesis actually jibes with the hypothesis that their methodology is designed to test. Often embedded in the methodology are assumptions that the methodology and analysis must account for, and there’s a glaring disconnect between the simple statement of hypothesis in the sentence above and the actual hypothesis that the study was designed to test. First, consider how the study was done. Buried in the Methods section is a description that is quite revealing. Emissions data from 1998 for 39 coal-fired power plants and 56 industrial facilities in Texas were examined and modeled to see if the distance of various school districts from these sources correlated with autism rates in 2002, with this being the rationale:

…it is plausible to postulate that releases during 1998 would have exposure potential for a cohort who was in utero in 1997. If an effect was present, this would be reflected in the 2002 school district records–the age (5 years old) this cohort would be entering the system.

So, from reading the Methods section, I conclude that the real hypothesis being tested, although not stated explicitly in the introduction, appears to be that exposure to mercury in utero contributes to autism, not that infant or childhood exposure to mercury is related to autism prevalence. Why else would the authors have examined mercury emissions from 1998 and then modeled them against data about special education services for children with autism and ASDs in 2002? Now, the next step is to see if this methodology actually tests the hypothesis. Not surprisingly, there are a number of problems. First among these is that it is quite unclear exactly what data from the Texas Education Agency were used. Apparently Dr. Palmer used some sort of special data set provided by the TEA that is not publicly available. However, the most glaring error, one that in and of itself is enough to sink the study, is this:

Total number of students reflects all enrolled students in the districts 2002 school year and was used as the denominator in calculating autism rates.

Later, under the Statistical Methods section, Palmer writes:

District autism data in 2002 were treated as event counts and used as the outcome in a Poisson regression model predicted by pounds of environmental mercury release in 1998, distance to sources of the release, and the relevant covariates. Total number of students enrolled in each district for 2002 defined the rates for each district

Wait a minute! How could this be? Someone correct me if I’m wrong, but doesn’t the model being tested assume that in utero exposure to mercury in 1997, as approximated by distance from coal-burning power plants, would correlate with autism prevalence in the five-year-old cohort entering Texas public schools in 2002? Yet it appears that autism prevalence in 2002 was calculated using the total number of students enrolled that year, not the number of students entering kindergarten; i.e., the number of students who were exposed to the levels of pollution in utero in 1997. If this statement of methodology is accurate (and I have no reason to doubt it), it tells us that Palmer was using figures for autism prevalence (actually numbers of students needing special education assistance for autism or ASDs) in 2002 for all students, K through 12. Since this includes children up to age 18 and since autism is usually diagnosed by age 5, this methodology would necessarily include a large number of autistic children who would already have been diagnosed with autism well before 1998, the year from which the mercury emissions figures used in the model were taken. In other words, Palmer included in his dataset far more children whose autism, under his own hypothesis, could not possibly have been related to those mercury emissions than children who might have been susceptible.

This flaw alone, originally pointed out by Michelle Dawson (if I have not missed something), makes the results of this study completely uninterpretable. The correct methodology would have been to compare autism prevalence in the kindergarten cohort (or, if using looser criteria, perhaps the kindergarten-through-third-grade cohort) with mercury emissions. Moreover, the perils of using special education services data as a surrogate for true prevalence are well known.
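To make the denominator problem concrete, here is a minimal sketch, in Python, of the kind of district-level Poisson rate model the Methods section describes. The data frame and column names are hypothetical (my own illustration, not the authors’ code); the point is simply where the denominator enters the model and why swapping total K-12 enrollment for the cohort actually exposed in utero changes what the rate estimates mean.

```python
# Minimal sketch (hypothetical column names, not the authors' code) of a district-level
# Poisson rate model like the one described in the Methods section.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def fit_distance_model(districts: pd.DataFrame, denominator_col: str):
    """Poisson regression of district autism counts on mercury release and distance,
    with log(denominator) as the offset that defines the rate."""
    X = sm.add_constant(
        districts[["mercury_lbs_1998", "miles_to_nearest_source", "urban", "suburban"]]
    )
    model = sm.GLM(
        districts["autism_count_2002"],              # event counts, as in the paper
        X,
        family=sm.families.Poisson(),
        offset=np.log(districts[denominator_col]),   # the denominator at issue
    )
    return model.fit()

# The paper's denominator: all enrolled students, K-12, most of whom could not have
# been exposed in utero in 1997.
# result_all = fit_distance_model(districts, "total_enrollment_2002")

# The denominator matching the stated hypothesis: the cohort entering school in 2002.
# (The autism counts would also have to be restricted to that same cohort.)
# result_cohort = fit_distance_model(districts, "kindergarten_enrollment_2002")
```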

Controls for urbanicity

In epidemiological studies, perhaps the most difficult part of study design is developing methodology that controls for confounding variables. Confounders are commonly variables that are related to both of the variables showing a correlation and that could therefore be the “true” cause underlying a phenomenon. For example, say that a study finds that variable A and variable B are strongly correlated, and, for the sake of this example, that as A rises so does B. That may or may not be evidence that A causes B. If there is a third variable (variable C) that causes A to increase and also causes B to increase, then the correlation that we see between A and B is not a cause-and-effect correlation; rather, A and B are correlated only because C causes both of them to rise. Consequently, controlling for C is very important. Unfortunately, in epidemiological studies, there are often many such “C” variables that can confound correlations unearthed by a study.
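To make the A/B/C example concrete, here is a small simulation of my own (not from the paper): C drives both A and B, so A and B end up strongly correlated even though neither causes the other, and the correlation largely vanishes once C is held roughly fixed.

```python
# Toy illustration of confounding: C drives both A and B, so A and B correlate
# even though neither causes the other.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
C = rng.normal(size=n)                # the confounder (think: urbanicity)
A = 2.0 * C + rng.normal(size=n)      # "exposure" driven by C
B = 3.0 * C + rng.normal(size=n)      # "outcome" driven by C

print("corr(A, B):", round(float(np.corrcoef(A, B)[0, 1]), 2))   # strong, but spurious

# Crude adjustment: look at A vs. B within a thin slice of C (C roughly held constant).
near_zero_C = np.abs(C) < 0.05
print("corr(A, B | C ~ 0):",
      round(float(np.corrcoef(A[near_zero_C], B[near_zero_C])[0, 1]), 2))  # near zero
```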

That’s why the most potent criticism of Palmer’s 2006 study was that he did not try to control for urbanicity. It’s easy to see how urbanicity might be a confounding variable in this study. After all, urbanicity is clearly correlated with proximity to sources of industrial pollution, as well as with greater awareness of autism and more services for children with autism and their parents. There could also be any number of confounders bound up with urbanicity itself. Consider that in Palmer’s 2006 study, the effect of urbanicity on autism “prevalence” was several times stronger than that of proximity to mercury-emitting power plants. Palmer hardly mentioned this in his original study, but Lewandowski certainly saw the problem right away. Thus, the question about the current study is whether Palmer adequately addressed this problem and correctly controlled for urbanicity. Here’s how he describes his methodology for accomplishing this:

Urbanicity. Eight separate demographically defined school district regions were used in the analysis as defined by the TEA: (1) Major urban districts and other central cities; (2) Major suburban districts and other central city suburbs; (5) Non-metropolitan and rural school districts. In the current analysis, dummy variables were included coding Urban (dummy variable 1) and Suburban (dummy variable 2), contrasted with non-metro and rural districts, which were the referent group. Details and specific definitions of urbanicity categories can be obtained at the TEA website http://www.tea.state.tx.us/data.html.

Blogging at Left Brain/Right Brain, Joseph argues compellingly that Palmer failed to control adequately for urbanicity. His complaints are:

  1. It is too discrete. Within the set of urban districts, some districts will be more urban than others. The same is true of rural districts. Palmer et al. (2008) is effectively using a stratification method to control for urbanicity, but this method is limited, especially considering the paper looks at 1,040 school districts. A better methodology would be to use population density as a variable.
  2. Modeling for distance. The paper models autism rates based on distance to coal-fired power plants. It follows that a control variable should model distance to urban areas rather than urbanicity of each district. Granted, this would not be easy because, as noted, urbanicity is not a discrete measure. But it needs to be noted as a significant limitation of the analysis. Consider school districts in areas designated as “rural” that are close to areas designated as “urban.” Such proximity would presumably provide access to a greater availability of autism specialists than would otherwise be the case.

To support his arguments, Joseph goes to the trouble of doing just such an analysis on similar data from California. He modeled population density versus special education service data for children with autism and ASD and found that the association between autism prevalence and mercury emissions disappears once population density is accounted for. His post is worth reading in its entirety, as he makes a strong case that population density is a better control for urbanicity than the methodology that Palmer used. His analysis is entirely consistent with other published data that show that population density is strongly correlated with autism diagnoses [3].
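For what it’s worth, the kind of check Joseph describes is straightforward to sketch. The snippet below is a hypothetical illustration (my own, with made-up column names, not Joseph’s actual code): replace the coarse urban/suburban/rural dummies with log population density as a continuous covariate and see whether the distance and mercury terms survive.

```python
# Hypothetical sketch (made-up column names) of controlling for urbanicity with a
# continuous measure, log population density, instead of coarse category dummies.
import numpy as np
import statsmodels.api as sm

def fit_with_density(districts):
    predictors = districts[["mercury_lbs_1998", "miles_to_nearest_source"]].assign(
        log_pop_density=np.log(districts["population_per_sq_mile"])
    )
    X = sm.add_constant(predictors)
    return sm.GLM(
        districts["autism_count_2002"],
        X,
        family=sm.families.Poisson(),
        offset=np.log(districts["total_enrollment_2002"]),
    ).fit()

# If the coefficient on miles_to_nearest_source shrinks toward zero once
# log_pop_density is in the model, the "distance effect" was urbanicity in disguise.
```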

Other flaws

There are also numerous other deficiencies in the design and methodology of the study, as one might expect. Another glaring flaw is that Palmer appears not to have done any tests to see whether distance from power plants correlates with any confounding variable other than urbanicity and wealth. True, he did try to control for urbanicity, mainly because urbanicity correlates so strongly with autism awareness and access to resources, which is why it is not surprising at all that in his earlier study autism prevalence correlated far more strongly with urbanicity than with mercury emissions. But urbanicity also correlates with population density, with many other forms of pollution such as auto exhaust, and with many other potentially confounding variables. Other serious flaws that I found include:

  • The method for calculating distance from power plants. Basically, Palmer took the geographic center of each school district, measured the distance from that point to the nearest power plant, and then used that distance for every child in the district. Remember, this is Texas we’re talking about. Some of these school districts are quite large, yet Palmer’s methodology assigns the same single distance to every child in the district, no matter where within it the child actually lives (see the sketch after this list).
  • Moves. The authors appear to assume that no one moves in or out of the district.
  • No examination of wind effects. The underlying hypothesis here appears to be that mercury carried on the wind is what correlates with autism prevalence. If that were the case, then it would be expected that the effect would be much stronger in school districts that, based on the general direction of the prevailing winds, are downwind from a power plant. Palmer didn’t even consider this variable.
  • No comparison to other regions. The EPA has produced a very nice map showing the distribution of mercury deposition on a global basis. It also has another good map that shows mercury deposition in the U.S. from all sources. Texas has few “hot spots,” while in the U.S. the Northeast and Midwest have many, and China is one continuous hot spot. If Palmer’s hypothesis is true, it would have been nice for him to include a spot check of autism prevalence rates in a state with a lot of coal-burning power plants, such as West Virginia. Also, if his hypothesis is true, he needs to explain why Texas, which has significant mercury deposition–particularly in its more heavily populated areas–has about the same autism prevalence (according to educational data from the USDE) as Idaho, which has much less mercury deposition. Palmer would also have to explain why Pennsylvania (which appears to be covered in mercury) has a lower autism prevalence than Minnesota and Oregon. Just looking at autism prevalence data and then comparing it to these two maps would be enough to show that Palmer’s hypothesis is not even plausible, much less supported by the data.
  • Other sources of pollution. Coal-burning power plants emit many more pollutants besides mercury. Remember my discussion of confounding variables above? Palmer didn’t control for whether there were other pollutants that were associated with mercury emissions that might be the real environmental culprit, if environmental culprit there actually is.
  • Other sources of mercury exposure. Even for airborne mercury, industrial emissions are not the only source; the EPA states that one third of mercury emissions do not derive from human activity. Palmer also didn’t take into account sources of mercury in the diet, for instance from fish.
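As a concrete illustration of the first bullet above, here is a rough sketch of the district-centroid approach (my own illustration of the idea, not the authors’ code): one distance is computed per district and then applied to every student in it, however large the district.

```python
# Rough sketch (my illustration, not the authors' code) of the district-centroid
# distance assignment: one number per district, applied to every student in it.
from math import asin, cos, radians, sin, sqrt

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two latitude/longitude points, in miles."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 3958.8 * asin(sqrt(a))  # mean Earth radius ~= 3958.8 miles

def district_distance_to_nearest_source(centroid, sources):
    """Distance from the district's geographic center to the nearest emission source.
    Every student in the district is assigned this single value, regardless of whether
    they live next door to the source or dozens of miles from the centroid."""
    lat, lon = centroid
    return min(haversine_miles(lat, lon, s_lat, s_lon) for s_lat, s_lon in sources)

# Hypothetical example with made-up coordinates:
# district_distance_to_nearest_source((31.0, -100.0), [(31.5, -99.0), (29.7, -101.2)])
```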

Lewandowski summarized other serious criticisms of Palmer’s 2006 study. Although Palmer tried (unsuccessfully) to answer his most serious criticism (failure to control for urbanicity), the others still stand:

Palmer et al. used county-level (and school district-level) TRI data for mercury as a surrogate measure of mercury exposure. The authors note in their introduction that mercury emitted into the air may be carried many miles before being deposited to soil or water. This is critical. Air modeling analyses indicate that mercury deposition that occurs in the west of the nation (including Texas) is overwhelmingly attributable to Asian or other non-US sources (Seigneur et al., 2004). Texas is a significant source of mercury emissions, but the mercury from these emissions is largely deposited hundreds to thousands of miles to the east. It is therefore highly unlikely that mercury emitted in a particular county or school district can be correlated with air mercury exposures in that locality.

TRI data also do not specify mercury species or the environmental medium to which the mercury is released. The likelihood of human exposure (and resulting toxicity) is highly influenced by these factors. For example, community exposure to inorganic mercury present in coal fly ash shipped to an off-site disposal facility will be zero. Releases to surface water bodies may also have a very different exposure potential than releases to air.

The authors also acknowledge that fish consumption is the primary source of human exposure to mercury. Fish mercury exposures in the general population are primarily associated with ocean caught fish, such as tuna or swordfish (Carrington and Bolger, 2002; Dabeka et al., 2004). Mercury levels in ocean fish are impacted by releases on a continental rather than a county-wide scale. Even for freshwater fish, which may be sources of mercury intake for a limited number of individuals, the mercury will most likely be attributable to distant sources. Local mercury releases (as described by the TRI data) should therefore not be used as a surrogate variable for actual mercury exposure. Because TRI-mercury releases on the county or school-district level are unlikely to be correlated with actual mercury exposures in the same geographic regions, it seems implausible that the observed association between mercury release rates and autism prevalence represents a real biological phenomenon.

All of these are very serious criticisms, and Palmer’s study answers none of them.

No one, least of all I, claims that living near a coal-burning power plant is a Good Thing or in any way perfectly fine for one’s health. EPA regulation or not, such plants still spew pollution into the air, and the adverse effects of industrial pollution on human health are well documented. However, that is not the question that Palmer is attempting to answer. He has made a specific claim, namely that mercury exposure from such power plants and other industrial sources, with proximity to such plants used as a surrogate for exposure, correlates with autism prevalence, the implication being that environmental mercury causes or contributes to the development of autism. Unfortunately for Dr. Palmer, the numerous and serious flaws in the methodology of his study undermine that claim and clearly show that his conclusions do not follow from his data.

It’s entirely possible that some environmental factor, or factors, may contribute to the development of autism, either alone or in concert with some genetic susceptibility, but if that is the case for mercury, this study is thin gruel with which to support such a hypothesis. In fact, it’s not even good as a hypothesis-generating study. There’s just too much potential interference from confounding variables that hasn’t been accounted for. I also have to wonder about the quality of the peer review at this particular journal. After all, if Joseph and I, neither of whom is an epidemiologist and one of whom is not a physician or scientist, can spot the glaring flaws in this study, why couldn’t the peer reviewers?

REFERENCES:

1. Palmer, R., Blanchard, S., Stein, Z., Mandell, D., Miller, C. (2006). Environmental mercury release, special education rates, and autism disorder: an ecological study of Texas. Health & Place, 12(2), 203-209. DOI: 10.1016/j.healthplace.2004.11.005
2. Palmer, R., Blanchard, S., Wood, R. (2008). Proximity to point sources of environmental mercury release as a predictor of autism prevalence. Health & Place. DOI: 10.1016/j.healthplace.2008.02.001
3. Williams, J.G. (2005). Systematic review of prevalence studies of autism spectrum disorders. Archives of Disease in Childhood, 91(1), 8-15. DOI: 10.1136/adc.2004.062083


Author

Posted by David Gorski
