The Iraqi Civilian War Dead Scandal

This is a story about a story, with a story or two nested inside it. The outer story is one of faulty epidemiology – data collection in a war zone. The first inner story is how medical news and journals affect not only national news but are being used as political weaponry, to sway elections and to change history.

Within that story is yet another – how editors contribute to fabrication by accepting, or refusing to recognize, fraud and misinformation. Yet another is that one cannot change some opinions, even after showing that the original information on which they were based was false. Sound familiar? We’ve been illustrating the point in classes for years.

The Iraq death studies. In 2004, weeks before the US presidential election, the journal The Lancet published a study from a group at Johns Hopkins University of Iraqi civilian deaths since the 2003 invasion (Lancet I). The results seemed implausibly high; a UN group estimated the deaths at about one tenth of the Lancet figure. The allied forces were still enjoying approval for deposing Saddam Hussein, and the world press gave the findings little publicity.

Then, 2-3 weeks before the 2006 US congressional elections, with the Iraq war wearing on and the US and world public tiring of stalemate and casualties, The Lancet published a follow-up study (Lancet II) by the same group, concluding that in the years 2003-2006 Iraqi civilian war-related deaths exceeded 600,000. It was shocking, and it made headline newspaper and television news. The study had such a significant impact partly because of where it appeared: The Lancet, despite its spotty record for off-beat articles, is revered by the public and the press. If the article’s publicity did not create the wave of political disapproval, it at least helped whip up the waves of discontent that washed in a major change in Congress. Criticism of the study at the time seemed drowned out by its publicity. But a recent repeat study of civilian Iraqi deaths sheds new light on Lancet II.

The method: The method applied was derived from ones used in famines and natural disasters, not war zones, where all sources are moving targets – some literally – and there are motivations to slant and to lie. There were no checks on the data by other observers. The study authors were admittedly biased against American efforts and the war. The data collection was left in the charge of one Iraqi physician researcher, whose staff were actually employees of Moktada al-Sadr, the now anti-US Shiite religious leader. Researchers interviewed clusters of households selected throughout the country. According to the summary in the National Journal (January 4, 2008), the design for Lancet II committed eight surveyors to visit 50 regional clusters (the number ended up being 47), with each cluster consisting of 40 households. By contrast, in a 2004 survey, the United Nations Development Program used many more questioners to visit 2,200 clusters of 10 houses each. This gave the UN investigators greater geographical variety and 10 times as many interviews, and produced a figure of about 24,000 excess deaths — one quarter the number in the first Lancet (Lancet I) study. The Lancet II sample was so small that each violent death recorded translated to 2,000 dead Iraqis overall. With such a small sample, small multiplier variations and errors would be magnified easily.
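The extrapolation arithmetic can be sketched in a few lines. The cluster counts are from the National Journal summary above; the average household size of seven is my own assumption for illustration, chosen because it reproduces the article's roughly 2,000-to-1 multiplier.

```python
# Back-of-envelope sketch of cluster-survey extrapolation.
# Cluster and household counts: National Journal summary of Lancet II.
# Household size of 7 is an illustrative assumption, not from the study.

population = 27_000_000            # approximate Iraqi population
clusters = 47                      # clusters actually surveyed
households_per_cluster = 40
persons_per_household = 7          # assumed average household size

sampled = clusters * households_per_cluster * persons_per_household

# Each person in the sample "stands for" this many Iraqis:
weight = population / sampled
print(round(weight))               # 2052 — close to the article's 2,000 multiplier

# Sensitivity: every single violent death recorded in the sample
# shifts the national estimate by the full weight.
print(f"One recorded death adds ~{round(weight):,} to the national total")
```

The point of the sketch is the last line: with a sample this small, one spurious or misrecorded death moves the national estimate by about two thousand.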

Other death estimates for the same 2004–2006 period, especially from the Iraq Body Count organization, were considerably lower, in the range of roughly 100 000, or one fifth to one sixth of the Johns Hopkins Lancet II figure.

Last week came yet another study and calculation, from the Iraq Family Health Survey Study Group in association with the World Health Organization, reported in the New England Journal of Medicine. This group used similar family-cluster interview techniques but with a larger population and better-controlled methods, and arrived at an estimated excess civilian death total of 151 000 (± 50 000) for the same 2003-2006 period. [I cannot comment on this study because I am not familiar with its methods, nor have I had the time to become so before this writing. I will take it at face value despite its exceeding my armchair estimate by at least three times. I must allow that the NEJM study is more accurate than my armchair estimate.]

Why the differences among estimates? The question comes down mainly to this: why was Lancet II so far off from the other estimates? The Iraq Body Count estimate was made by collecting weekly deaths from documented news reports, in a way similar to my first armchair estimate (see below). The most recent report took lessons from the errors of Lancet I and II and tried to improve on the methods.

Evidence of problems (Lancet II, 2006) – from errors to possible fakery: [Much of the source for this part of the report is the Jan. 8 issue of the National Journal.]

First, the investigators included the 60 deaths from the July 6, 2006 Sadr City car bombing even though the study period was to end June 30. Once included in the base figure, those deaths were erroneously magnified by the multiplier.

Next, reviewers found that deaths lacked the normal or expected distribution across neighborhoods: instead of deaths clustered closely around single, large explosions, they found them spread evenly among large numbers of homes in those areas. This pattern suggested “curbstoning,” or invented data (described later).

Oversight. To undertake the first Lancet study, according to National Journal, “investigator Les Roberts went into Iraq concealed on the floor of an SUV with $20,000 in cash stuffed into his money belt and shoes. Daring stuff, to be sure, but just eight days after arriving, Roberts witnessed the police detaining two surveyors who had questioned the governor’s household in a Sadr-dominated town. Roberts subsequently remained in a hotel until the survey was completed. Thus, most of the oversight for Lancet I — and all of it for Lancet II — was done long-distance. For this reason, although he defends the methodology, Garfield [an investigator with Lancet I] took his name off Lancet II. ‘The study in 2006 suffered because Les was running for Congress and wasn’t directly supervising the work as he had done in 2004,’ Garfield told National Journal.” [Roberts admits feeling that social activism is part of doing sociological studies.]

More incriminating is the investigators’ refusal to open their original data to inspection and confirmation, even to confirm that the work was done as stated. They explained away this violation of scientific ethics by claiming possible threats to the lives of the data gatherers and interviewees. Evidence for falsification mounted.

Funding: Most funding for both Lancet I and Lancet II came through J. Tirman, professor of political science at MIT, using money from George Soros’s Open Society Institute, plus other sources. Soros is the now-famous billionaire political backer of Code Pink, of anti-commercial street demonstrations, and of opposition to US policies and the current administration. The very idea of studying American-caused Iraqi civilian deaths likely had a political genesis.

Rationale: According to the National Journal, the Lancet editor explains away the publication of errant reports: “[Horton] shares a fundamental faith in scientists.” He told NJ that scientists, including Lafta [the principal Iraqi physician data gatherer], can be trusted because “science is a global culture that operates by a set of norms and standards that are truly international, that do not vary by culture or religion. That’s one of the beautiful aspects of science — it unifies cultures, not divides them.” The NJ authors comment: “The authors refused to provide anyone with the underlying data, according to David Kane, a statistician and a fellow at the Institute for Quantitative Social Statistics at Harvard University.” Some critics have wondered whether the Iraqi researchers engaged in a practice known as “curbstoning,” sitting on a curb and filling out the forms to reach a desired result. Another possibility is that the teams went primarily into neighborhoods controlled by anti-American militias and were steered to homes that would provide information about the “crimes” committed by the Americans.

Fritz Scheuren, vice president for statistics at the National Opinion Research Center and a past president of the American Statistical Association, said, “They failed to do any of the [routine] things to prevent fabrication.”

Estimating from the armchair. When I first saw the publicity over Lancet II, I did a quick estimate of my own, using what I had read about WWII intelligence gathering. The Allies estimated German troop strength from hometown newspaper announcements of local soldiers’ furloughs – both their frequency and length – combined with knowledge of unit locations and the constant strength needed for combat readiness. From US newspaper summaries, one could estimate 100-200 civilian deaths per week. At that rate, one would expect 5 000-10 000 deaths per year, or somewhere between 17 000 and 35 000 deaths in the 3.25 years covered by Lancet II. That estimate was more than an order of magnitude below Lancet II’s. The Iraq Body Count group in fact estimated a number in the same order of magnitude, 40 000-50 000 – less than one tenth of the Johns Hopkins Lancet II calculation.
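The armchair estimate above is simple enough to spell out exactly; the 100-200 deaths-per-week range is the author's reading of US newspaper summaries.

```python
# The armchair estimate, spelled out.
low, high = 100, 200             # civilian deaths/week, from news summaries
weeks_per_year = 52
years = 3.25                     # the period covered by Lancet II

annual_low, annual_high = low * weeks_per_year, high * weeks_per_year
print(annual_low, annual_high)   # 5200 10400 — the "5,000-10,000 per year"

total_low, total_high = annual_low * years, annual_high * years
print(total_low, total_high)     # 16900.0 33800.0 — roughly 17,000-35,000
```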

Another way of looking at the data is to estimate the “normal” death rate. The Lancet study authors did that and concluded that their 650K death total was “excess,” i.e., above what would normally be expected. But for the record: Iraq, with a population of 27 million and an estimated life span of 60 years, would after rounding off (30 million at 60 years) have about 500K peacetime deaths per year, or roughly 1,370 per day. The war-related deaths, if truly 600K over 3.25 years, or roughly 185K per year, would have added just over 500 deaths per day to that baseline.
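The baseline arithmetic checks out in a few lines, using the same rounded figures as the paragraph above:

```python
# Baseline ("normal") death-rate check, with the text's rounded inputs.
population = 30_000_000          # 27 million, rounded up as in the text
life_expectancy = 60             # years

baseline_per_year = population / life_expectancy
print(round(baseline_per_year))  # 500000 peacetime deaths/year
print(round(baseline_per_year / 365))  # 1370 deaths/day baseline

war_deaths = 600_000             # the Lancet II total
years = 3.25
print(round(war_deaths / (years * 365)))  # 506 — just over 500 extra deaths/day
```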

Critics found yet another figure, one I had also thought of after deciding to write this. The US battlefield ratio of wounded to deaths is 8-10:1. Assuming a 10:1 ratio, there would have been 6 million wounded Iraqi civilians seeking medical care over the study period, or roughly 5,000 wounded per day. No hospital or other source data could reasonably have confirmed such figures.
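The wounded-to-dead arithmetic, spelled out (note the daily figure this implies):

```python
# Implied-wounded check: battlefield rule of thumb applied to Lancet II's total.
deaths = 600_000
wounded_ratio = 10               # upper end of the cited 8-10:1 range
wounded = deaths * wounded_ratio
print(wounded)                   # 6000000 implied wounded seeking care

days = 3.25 * 365                # the Lancet II study period
print(round(wounded / days))     # 5058 wounded per day, every day
```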

These little numbers games are more than show-and-tell for a skeptical blogger. A simple set of thought experiments, using sixth-grade arithmetic, should have given the Lancet editors and reviewers an easy exit strategy from publication.

So what’s with the editors? The investigators have already admitted their political biases, and even that a condition of publication of both Lancet I and II was that they be put on a fast track and published immediately before the US elections of 2004 and 2006.

Thus there must have been collusion between the investigators and the editors to time publication in an attempt to affect the US elections.

This political use of a scientific publication was not new, of course. The late-1990s “intern scandal” and impeachment proceedings were the presumed target of the fast-tracking of a long-lingering JAMA manuscript on teenagers’ perception of the meaning of “sexual relations.” Whether true or not, the publication resulted in the firing of the editor.

There has been remarkably little commentary on The Lancet editors’ actions, even in the US. This has probably been in deference to current political temperaments, frustrations with the Iraq war, and the current unpopular presidency.

The current editor deflected responsibility to the Johns Hopkins authors, with the following comments, also quoted in National Journal: “…science is a global culture that operates by a set of norms and standards that are truly international, that do not vary by culture or religion. That’s one of the beautiful aspects of science — it unifies cultures, not divides them…

The possibility of fakery “did not come up in peer review.” Medical journals cannot afford to replicate every scientific study, because “if for every paper we published we had to think, ‘Is this fraud?’ … honestly, we would fold tomorrow.”

But the editor answers a question that was not asked. There are ways to detect, or at least to suspect, misrepresentation as well as error. In 2003 the US National Institutes of Health sponsored a two-day international conference on scientific fraud. The Lancet’s editor was not only there; he was the keynote speaker.

The simple math estimates above were evident to anyone who “asked,” especially when the reported figures were so excessive. But those figures probably fit well with the editors’ political mind-set – an anti-US sentiment common in some quarters of the UK. The editor’s political sentiments are recorded in a YouTube segment.

As for the intent of the publications, to affect the two US elections: there can be little doubt that the 2006 election was a referendum on US policy in Iraq, and that the widespread publicity affected US public opinion. The 600,000 figure continues to be quoted in political speeches and tracts despite attempts at correction.

In either case, whether from blindness or complicity, the editorial staff of The Lancet owe the scientific community a better explanation and a request for retraction of the Lancet II paper. Unfortunately, there is no formal method for initiating or carrying out either.

Thanks to National Journal for its thorough reporting.
