Articles

The Iraqi Civilian War Dead Scandal

This is a story about a story, and a story or two within that story. The first story is one of faulty epidemiology: data collection in a war zone. The first story within it is how medical news and journals affect not only national news but are being used as political weaponry, to affect elections and to change history.

Within that story is yet another: how editors contribute to fabrication by accepting, or refusing to recognize, fraud and misinformation. Yet another is that one cannot change some opinions, even after showing that the original information on which they were based was false. Sound familiar? We’ve been illustrating the point in classes for years.

The Iraq death studies. In 2004, weeks before the US presidential election, the journal The Lancet published a study from a group at Johns Hopkins University of Iraqi civilian deaths since the 2003 invasion (Lancet I). The results were startlingly high; a UN group estimated the deaths to be about one tenth of the Lancet figure. The allied forces were still receiving approval for deposing Saddam Hussein, and the world press did not publicize the results.

Then, two to three weeks before the 2006 US national congressional elections, with the Iraq war wearing on and the US and world public tiring of stalemate and casualties, The Lancet published a follow-up study (Lancet II) by the same group, concluding that in the years 2003-2006 Iraqi civilian war-related deaths exceeded 600,000. It was shocking, and made headline newspaper and television news. The study had such a significant impact partly because of where it appeared. The Lancet, despite its spotty record of off-beat articles, is revered by the public and the press. If the article’s publicity did not create a wave of political disapproval, it at least helped whip up the waves of discontent, washing in a major change in the Congress. Criticism of the study at the time seemed drowned out by its publicity. But a recent repeat study of civilian Iraqi deaths sheds new light on the Lancet II study.

The method: The method applied was derived from ones used in famines and natural disasters, not war zones, where all sources are moving targets (some literally) and there are motivations to slant and to lie. There were no checks on the data by other observers. The study authors were admittedly biased against American efforts and the war. The data collection was left in the charge of one Iraqi physician researcher, whose staff were actually employees of Moktada al-Sadr, the now anti-US Shiite religious leader. Researchers interviewed clusters of households selected throughout the country. According to the summary in the National Journal (January 4, 2008), the design for Lancet II committed eight surveyors to visit 50 regional clusters (the number ended up being 47), with each cluster consisting of 40 households. By contrast, in a 2004 survey, the United Nations Development Program used many more questioners to visit 2,200 clusters of 10 houses each. This gave the U.N. investigators greater geographical variety and 10 times as many interviews, and produced a figure of about 24,000 excess deaths, one quarter the number in the first Lancet (Lancet I) study. The Lancet II sample was so small that each violent death recorded translated to 2,000 dead Iraqis overall. With such a small sample, small multiplier variations and errors would easily become magnified.
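
The multiplier problem can be sketched with back-of-the-envelope arithmetic. The cluster and household counts below come from the article; the average household size is an illustrative assumption, not a figure from the study:

```python
# Rough sketch of cluster-survey extrapolation using the article's
# approximate figures (47 clusters x 40 households; ~27 million Iraqis).
# persons_per_household is an assumed value for illustration only.

population = 27_000_000          # approximate Iraqi population
clusters = 47
households_per_cluster = 40
persons_per_household = 7        # assumption: average household size

sampled_persons = clusters * households_per_cluster * persons_per_household
multiplier = population / sampled_persons  # Iraqis represented per sampled person

print(f"Sampled persons: {sampled_persons:,}")                     # 13,160
print(f"Each sampled person stands for ~{multiplier:.0f} Iraqis")  # ~2052
```

With a multiplier near 2,000, a single misrecorded or fabricated death in the sample shifts the national estimate by thousands, which is the article's point about small-sample sensitivity.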

Other death estimates for the same 2004-2006 period, especially from the Iraq Body Count organization, were considerably lower, in the range of 100,000, or one fifth to one sixth of the Johns Hopkins Lancet II figure.

Last week came yet another study and calculation, from The Iraq Family Health Survey Study Group in association with the World Health Organization, reported in the New England Journal of Medicine. This group used similar interview techniques with family clusters but with a larger population and better-controlled methods, and arrived at an estimated excess civilian death total of 151,000 (±50,000) for the same period of 2003-2006. [I cannot comment on this study because I am not familiar with its methods, nor have I had the time to become so before this writing. I will take it at face value despite its exceeding my armchair estimate by at least three times. I must allow that the NEJM study is more accurate than my armchair estimate.]

Why the differences among estimates? The question comes down mainly to: why was Lancet II so far off from the other estimates? The Iraq Body Count was made by collecting weekly deaths from news reports, in a way similar to my first armchair estimates (see below) but from documented news reports. The most recent report took lessons from the errors of Lancet I and II and tried to improve on the methods.

Evidence of problems (Lancet II, 2006), from errors to possible fakery: [Much of the source for this part of the report is the Jan. 8 issue of the National Journal.]

First, the investigators included the 60 deaths from the July 6, 2006 Sadr City car bombing even though the study period was to end June 30. Once included in the base figure, those deaths were erroneously magnified by the multiplier.

Next, reviewers found a lack of the normal or expected distribution of deaths in neighborhoods: instead of closely clustered deaths, they found deaths distributed evenly among large numbers of homes in areas where there had been single, large explosions. This suggested “curbstoning,” or creating data (described later).

Oversight. To undertake the first Lancet study, according to National Journal, “investigator Les Roberts went into Iraq concealed on the floor of an SUV with $20,000 in cash stuffed into his money belt and shoes. Daring stuff, to be sure, but just eight days after arriving, Roberts witnessed the police detaining two surveyors who had questioned the governor’s household in a Sadr-dominated town. Roberts subsequently remained in a hotel until the survey was completed. Thus, most of the oversight for Lancet I — and all of it for Lancet II — was done long-distance. For this reason, although he defends the methodology, Garfield [an investigator with Lancet I] took his name off Lancet II. ‘The study in 2006 suffered because Les was running for Congress and wasn’t directly supervising the work as he had done in 2004,’ Garfield told National Journal.” [Roberts admits feeling that social activism is part of doing sociological studies.]

More incriminating is the fact that the investigators have declined to open their original data to inspection and confirmation, even to confirm that the work was done as stated. They explained away this violation of scientific ethics by claiming possible threats to the lives of the data gatherers and interviewees. Evidence for falsification mounted.

Funding: Most funding for both Lancet I and Lancet II came through J. Tirman, professor of political science at MIT, using funds from George Soros’s Open Society Institute, plus other funding. Soros is the now-famous billionaire political backer of Code Pink, anti-commercial street demonstrations, and opposition to US policies and the current administration. The idea of studying American-caused Iraqi civilian deaths itself has a probable political genesis.

Rationale: According to the National Journal, the Lancet editor explains away the publishing of errant reports: “[…Horton shares a fundamental faith in scientists.] He told NJ that scientists, including Lafta [the principal Iraqi physician data gatherer], can be trusted because ‘science is a global culture that operates by a set of norms and standards that are truly international, that do not vary by culture or religion. That’s one of the beautiful aspects of science — it unifies cultures, not divides them.’” The NJ authors comment: “The authors refused to provide anyone with the underlying data, according to David Kane, a statistician and a fellow at the Institute for Quantitative Social Statistics at Harvard University.” Some critics have wondered whether the Iraqi researchers engaged in a practice known as “curbstoning”: sitting on a curb and filling out the forms to reach a desired result. Another possibility is that the teams went primarily into neighborhoods controlled by anti-American militias and were steered to homes that would provide information about the “crimes” committed by the Americans.

Fritz Scheuren, vice president for statistics at the National Opinion Research Center and a past president of the American Statistical Association, said, ‘They failed to do any of the [routine] things to prevent fabrication.’”

Estimating from the armchair. When I first saw the publicity over Lancet II, I did a quick estimate of my own, using what I had read about WWII intelligence gathering. The Allies estimated German military unit troop strength from home-town newspaper announcements of local soldiers’ furloughs (both their frequency and length), combined with knowledge of unit locations and the constancy of strength necessary for combat readiness. From US newspaper summaries, one could estimate 100-200 civilian deaths per week. At that rate, one would expect a death rate of 5,000-10,000 per year, or somewhere between 17,000 and 35,000 deaths in the 3.25 years covered by Lancet II. That estimate was more than one order of magnitude less than that of Lancet II. The Iraq Body Count group in fact estimated a number in the same order of magnitude, 40,000-50,000, roughly one tenth of the Johns Hopkins Lancet II calculation.
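
The armchair extrapolation above, spelled out as arithmetic (the weekly rates are the article's own rough figures):

```python
# Armchair estimate: 100-200 reported civilian deaths per week,
# projected over the 3.25 years covered by Lancet II.

weeks_per_year = 52
years = 3.25

low, high = 100, 200  # deaths per week, from newspaper summaries
annual_low, annual_high = low * weeks_per_year, high * weeks_per_year
total_low, total_high = annual_low * years, annual_high * years

print(f"~{annual_low:,}-{annual_high:,} deaths per year")                 # ~5,200-10,400
print(f"~{total_low:,.0f}-{total_high:,.0f} deaths over 3.25 years")      # ~16,900-33,800
```

Rounded, this reproduces the 5,000-10,000 per year and 17,000-35,000 total quoted in the text.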

Another way of looking at the data is to estimate the “normal” death rate. The Lancet study authors did that, and concluded that their 650K death total was “excess,” or above what would normally be expected. But for the record: Iraq, with a population of 27 million and an estimated life span of 60 years, would after rounding (30 million at 60 years) have about 500K peacetime deaths per year. The war-related deaths, if truly 600K over 3.25 years, or roughly 185K per year, would have been just over 500 deaths per day in addition to the baseline of roughly 1,370 normal deaths per day.

Critics found yet another figure, one that I had also thought of after deciding to write this. The US battlefield ratio of wounded individuals to deaths is 8-10:1. Assuming a 10:1 ratio, there would have been 6 million wounded Iraqi civilians seeking medical care over the period, or roughly 5,000 wounded per day. No hospital or other source data could reasonably have confirmed such figures.
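
Both sanity checks from the two paragraphs above can be verified in a few lines; all inputs are the article's round numbers:

```python
# Check 1: implied baseline (peacetime) death rate for Iraq.
population = 30_000_000   # rounded, as in the article
life_expectancy = 60      # years, rounded
baseline_per_year = population / life_expectancy
print(f"Baseline deaths/year: ~{baseline_per_year:,.0f}")        # ~500,000
print(f"Baseline deaths/day:  ~{baseline_per_year / 365:,.0f}")  # ~1,370

# Check 2: excess deaths per day, and the wounded count implied
# by an assumed 10:1 wounded-to-killed ratio.
excess_deaths = 600_000
years = 3.25
print(f"Excess deaths/day: ~{excess_deaths / (years * 365):,.0f}")  # ~506

wounded = excess_deaths * 10
print(f"Implied wounded/day: ~{wounded / (years * 365):,.0f}")      # ~5,058
```

Note that the implied wounded rate comes out near 5,000 per day, not a round order of magnitude higher; even so, as the text argues, no hospital data supported a casualty stream of that size.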

These little numbers games are more than just show and tell for a skeptical blogger. A simple set of thought experiments, using sixth-grade arithmetic, should have given the Lancet editors and reviewers an easy exit strategy from publication.

So what’s with the editors? The investigators have already admitted their political biases, and have even admitted that a condition of publication of both Lancet I and II was that they be put on a fast track and published immediately before the US elections of 2004 and 2006.

Thus there must have been collusion between the investigators and the editors over the timing of publication, in an attempt to affect the US elections.

This kind of political move in a scientific publication was not new, of course. The “intern scandal” during the second Clinton administration was the presumed target of the fast-tracking of a long-lingering JAMA manuscript on teenagers’ perception of the meaning of “sexual relations.” Whether true or not, the publication resulted in the firing of the editor.

There has been remarkably little commentary on The Lancet editors’ actions, even in the US. This has probably been in deference to current political temperaments, frustrations with the Iraq war, and the current unpopular presidency.

The current editor deflected responsibility to the Johns Hopkins authors, with the following comments, also quoted in National Journal: “…science is a global culture that operates by a set of norms and standards that are truly international, that do not vary by culture or religion. That’s one of the beautiful aspects of science — it unifies cultures, not divides them…

“…The possibility of fakery… ‘did not come up in peer review. Medical journals can’t afford to repeat every scientific study…because ‘if for every paper we published we had to think, ‘Is this fraud?’ … honestly, we would fold tomorrow.’”

But the editor answers a question that was not asked. There are ways to detect, or at least to suspect, misrepresentation as well as error. In 2003 the US National Institutes of Health sponsored a two-day international conference on scientific fraud. The Lancet’s editor was not only there; he was the keynote speaker.

The simple math estimates above were evident had anyone “asked,” especially when the reported figures were so excessive. But the reported figures probably fit well with the editors’ political mind-set, an anti-US sentiment common in some quarters of the UK. The editor’s political sentiments are recorded in a YouTube segment.

As for the intent of the publications, to affect the two US elections: there can be little doubt that the 2006 election was a referendum on US policy in Iraq, and that the widespread publicity affected US public opinion. The 600,000 figure continues to be quoted in political speeches and tracts despite attempts at correction.

In either case, whether from blindness or complicity, the editorial staff of The Lancet owes the scientific community a better explanation and a request for retraction of the Lancet II paper. There is, however, no formal method for initiating or carrying out those activities.

Thanks to National Journal for its thorough reporting.

Posted in: General, Politics and Regulation


44 thoughts on “The Iraqi Civilian War Dead Scandal”

  1. nixar says:

    Why the differences among estimates, you ask?
    It’s strange that you should fault the authors of the Lancet study for this difference, while even the authors of Iraq Body Count acknowledge that their count has to be much lower than the reality: they only count, as you indicate, casualties mentioned in news reports. The problem is, there is virtually no news reporting done from outside the Green Zone or the Kurdish regions of Iraq, because it would be a death wish for a journalist to go outside.
    Furthermore, even if journalists were magically able to do their work, they most certainly wouldn’t report in any statistically meaningful way the excess non-violent deaths indirectly caused by the war.
    With this in mind, would you care to guesstimate what proportion of actual casualties are reported? I don’t know, but 1 in 6 looks reasonable to me.
    This doesn’t mean that I’m giving the authors of the Lancet study a pass. I don’t have the qualifications to analyze their work. Your criticisms and the National Journal’s may be valid; I’d be very interested to read the study authors’ rebuttal.
    My point is that your starting argument, that there is somehow a huge discrepancy, is fallacious.
    You seem to question the character of various people here, including the Lancet’s editor. Well, at least his ethics. What’s interesting to me is that you’re questioning the only thing that at least tries to be a scientific study; yet nowhere do you seem to regret that there weren’t more studies done. You’d think that the US gov’t, with the hundreds of billions of dollars it’s spending waging this war, could at least scrounge up a few million to study the impact of its actions. But no, they don’t, and even President Bush admitted it: “we don’t count.” It’s easier to just blanket-deny inconvenient data.
    So whose ethics are questionable here?

  2. TimLambert says:

    I think the National Journal piece was a dreadful piece of reporting. I explain why here.

    The IFHS survey should be taken seriously, but because many areas were too dangerous to survey, the effective sample size wasn’t any larger than the second Lancet survey.

  3. apteryx says:

    Dr. Sampson writes:

    “Critics found yet another figure that I thought of also, after deciding to write this. US battlefield ratio of wounded individuals to deaths is 8-10:1. Assuming 10:1 ratio, there would have been 6 million wounded Iraqi civilians seeking medical care over 3 years, or 10,000 wounded per day. No hospital or other source data could reasonably have confirmed such figures.”

    He refers to this as a thought experiment using sixth-grade arithmetic, but I would hope that an intelligent sixth-grader would have the logical skills to reject the assumption that the Iraqi ratio of deaths to injuries would be as low as – or lower than! – the American ratio. A few reasons:

    1. Iraqi civilians are being attacked with more deadly weapons. Tanks, warplanes, rifles in expert hands, or suicide bombers in close proximity all tend to do more damage than a homemade roadside bomb.

    2. American soldiers wearing full body armor, often riding in armored vehicles, are less likely to be fatally injured in an attack than are kids wearing T-shirts standing in a pet market, or pregnant women riding in taxis near checkpoints.

    3. American soldiers get much better medical care when wounded. The American ratio of injuries to deaths is inflated by the fact that U.S. medics save many wounded soldiers who would have died in previous wars. (Their eventual condition is another question.) With shortages of power and medical supplies, and lesser training in trauma surgery, Iraqi hospitals will not save so many.

    4. That ratio is also inflated by the fact that every American who is so much as scratched is treated and reported as injured, while the limited health care system and economic hardship may mean that an Iraqi who suffers a very minor injury might not seek medical care for it.

    5. The most enormous and obvious point of all: Many of the Iraqis killed are captive victims. Local terrorist groups have formed death squads that kidnap and kill dozens of people at a time, whose bodies are regularly found dumped in mass graves. Does each of those incidents also supply hundreds of wounded people? No, because the killers had the victims in their power and did not allow any to escape alive. The same, of course, must be said of the people who were killed in house raids or beaten or tortured to death while in American or Iraqi government custody. No significant number of American deaths occurs in comparable circumstances.

    Dr. Sampson likes the newspaper figures of civilian deaths (because they are low, due to underreporting for well known reasons); why didn’t he test this suggestion by looking at the newspaper’s reported death AND injury figures for a period of time to see if his happy 10 to 1 ratio holds up? When 25 guys are found in the desert shot in the back of the head, we’ve already established there are 0 wounded. When some terrorist slimebag bombs a pet market and kills 150 people, do the papers report 1500 wounded? When we bomb a house and kill 8 women and children, do the papers report 80 wounded?

  4. jayh says:

    While it’s good to get another view, some of apteryx’s objections are themselves problematic.

    “1. Iraqi civilians are being attacked with more deadly weapons. Tanks, warplanes, rifles in expert hands, or suicide bombers in close proximity all tend to do more damage than a homemade roadside bomb.”

    This is an assumption. Suicide bombers kill a number of people but injure FAR more (this is in the statistics). Basically, a random distribution of injuries will produce far more non-immediately-fatal injuries than fatal ones.

    “2. American soldiers wearing full body armor, often riding in armored vehicles, are less likely to be fatally injured in an attack than are kids wearing T-shirts standing in a pet market, or pregnant women riding in taxis near checkpoints.”

    This changes total numbers of killed and injured, not necessarily their relative distribution.

    “American soldiers get much better medical care when wounded. The American ratio of injuries to deaths is inflated by the fact that U.S. medics save many wounded soldiers who would have died in previous wars.”

    The eventual numbers saved is statistically irrelevant. The number being initially treated while still alive is really what is being measured by hospital records, regardless of whether they eventually die.

    “American who is so much as scratched is treated and reported as injured,”
    Not really.

    ” Many of the Iraqis killed are captive victims. Local terrorist groups have formed death squads that kidnap and kill dozens of people at a time, whose bodies ”

    One would need to document the claim that such a large proportion of the fatalities are captives.

  5. nixar says:

    I forgot to add, your reference to Mr Soros reeks of Rush Limbaugh-style “reporting.” I didn’t expect this level of wingnuttery when Dr. Novella announced this blog.

  6. Harriet Hall says:

    I wonder, would the Lancet have accepted a study with similar methodological flaws if it had originated in an obscure African country instead of in a political hot spot?

    What is the point of medical journals’ publishing such data in the first place? Maybe I’m obtuse, but I don’t see why numbers of war dead should be considered a medical scientific issue. If they were gathering statistics on people needing health care to plan for future public health projects, that would be a different matter.

  7. Wallace Sampson says:

    Thank you, readers, for giving serious thought and analysis to the article, and for raising some good points for discussion.

    I suggest the following generic questions and a generic answer to the first set.

    If the difficulties in surveying and collating data in Iraq are so great, why are individual studies with unusual results published so readily in a first-line medical journal? If you or I were to submit an article with as many data insecurities on drug action or an operative procedure, would the manuscript even get sent out for review? The answers are: political, and no.

    Next, what is the justification for publishing medical articles intentionally timed to US national elections? The answer is political.

    I made no claim to extraordinary accuracy from the armchair. I do claim two points as take-home:
    1) There are simple ways for reviewers and editors to suspect inaccuracy and fabrication…in this case initially to inaccuracy. The authors admitted their intent publicly post hoc. Errors and fabrication of various forms are rife in pseudoscience and sectarian medicine articles. Watch out.
    2) Politics has no place in awarding of research grants, medical education, or publication. The NCCAM in spades.

    And thanks to Steve Novella for some last-minute editing of my own errors.

    WSampson

  8. Wallace Sampson says:

    To Nixar:

    The point of the article is the inappropriate mixing of politics with scientific medicine. That is not “wingnuttery.” It is calling attention to the politically motivated distortion of medical data resulting from such meddling for non-scientific, non-medical, and openly political purposes. The researchers, authors, and editors have already admitted their roles publicly. The MIT grant source admitted Soros as the source of half the funds. Will “Mr. Soros”?

    Burnham admits timing to the election
    http://www.youtube.com/watch?v=pMlAcHKFc7w&feature=related

    Editor “motivational speech”
    http://www.youtube.com/watch?v=v7BzM5mxN5U&NR=1

    If these don’t work, just plug Burnham or R Horton into the YouTube search box.

    The questions illustrate how deeply embedded some of the controlling attitudes of the day are.

    WS

  9. Alexandra says:

    “2. American soldiers wearing full body armor, often riding in armored vehicles, are less likely to be fatally injured in an attack”

    “This changes total numbers of killed and injured, not necessarily their relative distribution.”

    What exactly do you base that upon? Armouring the trunk and head while leaving the extremities more exposed is intended specifically to limit the incidence of non-survivable injuries. With rapid access to first-aid and medical attention (such as soldiers enjoy) non-compressible hemorrhage is the single most common component of non-survivable injuries. By armouring where we do, we almost completely eliminate that component. That element alone causes a major shift in killed v injured ratio, which, of course, is why we do it.

    Put simplistically, you don’t die if you get shot in the shin, you die if you get shot in the liver. That’s why we put the SAPI plates over the liver, not over the shins.

    “The eventual numbers saved is statistically irrelevant. The number being initially treated while still alive is really what is being measured by hospital records, regardless of whether they eventually die.”

    That is not the case. Primarily because we are not just dealing with “hospital records” here, and where we are we are dealing with summaries of those records. Whether a patient eventually lives or dies from a given wound is very much a factor of the immediacy and efficacy of the medical attention they receive. Nobody, I am sure, would even suggest that US soldiers are receiving a level of medical care no better than what the average Iraqi in rural Iraq can obtain. Obviously this significantly impacts on survival rates among the injured, and this, of course, is exactly why US soldiers are provided with such extensive medical support. You are, perhaps correct in one way though. The study in question is reporting eventual fatalities. Perhaps other studies are, as you suggest, relying on more immediate reports and counting as merely injured those who subsequently die. That, however, would tend to justify these higher numbers, not refute them.

    “One would need to document the claim that such a large proportion of the fatalities are captives.”

    Actually, no. Not when responding to undocumented seat-of-the-pants armchair objections to an actual in-country first person study. “Perhaps the numbers don’t match your preconceptions because…” doesn’t need documentation. The fact that anyone bothers to address such silly “objections” at all is generous enough.

    The whole purpose of science is to obviate “it seems to me” in our examination of things. If things were how they seemed there would be no need for the scientific method. The “objections” presented here are clearly pseudo-scientific excuses for why the gathered data don’t support Sampson’s personal preconceptions. I am deeply disappointed to see such flagrant anti-science presented here.

  10. Wallace Sampson says:

    Sorry, this may be the better source for Dr. Burnham’s admission of timing.

    http://www.youtube.com/watch?v=pMlAcHKFc7w&feature=related

    WS

  11. I do want to emphasize for our readers that the point of this post is not to discuss the politics of the Iraq war or even to debate the numbers of Iraqi civilians dead, but rather the specific behavior of the Lancet editors, as Lancet is a respected medical scientific journal. That is really the only issue of interest to SBM.

    The SBM blog is apolitical (although of course we recognize that as individuals we are not, and we can only do our best to self-police for political bias).

  12. David Gorski says:

    If the difficulties in surveying and collating data in Iraq are so difficult, why are individual studies with unusual results published so readily in a first line medical journal?

    Actually, I would respond to that with another question: If the difficulties in surveying data with regards to any epidemiological study are so difficult, why are they published so readily in a first line medical journal? While the difficulties of gathering epidemiological data in a war zone are extensive, they are probably no more so, except for the physical danger involved, than the difficulties gathering epidemiological data in Third World countries or many other situations that I can think of. (Epidemiology is hard, and it’s very hard to come up with a valid sampling method and to control for confounding factors.) I’m sorry, but that particular criticism of yours is not particularly compelling. I would also strongly disagree with your use of the term “scandal” in the title. This was not a scandal. (A controversy, yes. A scandal, no.) Moreover the criticisms you level have been answered extensively. See:

    1. Tim Lambert’s frequent discussions dating back to the first study in 2004, which address many of the complaints, as well as his more recent discussions of the 2006 study. Note also his detailed critique of the National Journal article you cited.

    2. “Revere” at Effect Measure has also discussed this. (Note: Revere is a very senior, very respected epidemiologist; I know who he is.)

    3. Mike Dunford’s analysis.

    4. Mark Chu-Carroll at Good Math, Bad Math has commented twice (here and here).

    It’s not that I’m a big fan of the Lancet study, and indeed not all the commentary I referenced above was positive. Clearly it has some methodological shortcomings, but most of those were a direct result of the situation they were investigating and the danger to investigators in a war zone. However, even though I do think the study probably does overestimate excess mortality compared to other studies, I think you exaggerate its shortcomings, and I do not agree with you that these shortcomings rise to the level where calling for a retraction is justified. That being said, even then I would probably not have taken you to task if you had simply limited yourself to criticism of the editors for publishing such an article so close to an election. Clearly that was a dodgy issue. However, that wasn’t enough. You had to try to tear down the study itself to make it look as though not only were the editors biased but they were so biased that they would rush the publication of a crap study to get it out before the election. And what about the peer reviewers who reviewed the paper?

    Another point is this: How far off do you think the Lancet study is? How much, in your opinion, did it overestimate the death toll? Two times? That would still be 300,000 excess deaths. Five times? That would still be 120,000 excess deaths. Ten times? That would still be 60,000 excess deaths. Those are all still very big numbers. Indeed, the New England Journal of Medicine study that you cited estimated 151,000 (95% uncertainty range: 104,000 to 223,000). If that one’s a better estimate, again, that’s still a huge number, and it is almost certainly much better than your “armchair estimate,” which, as much as it pains me to say it given how much I’ve always admired your work, was a poor argument against the validity of the Lancet study.

    Finally, my jaw dropped when you said:

    There has been remarkably little commentary on The Lancet editors’ actions, even in the US. This has probably been in deference to current political temperaments, frustrations with the Iraq war, and the current unpopular presidency.

    I’m not sure where you got that impression from, but from the very moment the study was released, it was extensively discussed in many, many commentaries, on TV, on blogs, in newspapers and magazines, with predictable divisions: Right wing supporters of the war castigating the study and the editors of The Lancet for publishing it and left wing blogs like the Daily Kos defending them and holding the study up as a reason why the war was wrong. Google it if you don’t believe me. I got 55,800 hits from a fairly restrictive search. I also remember the rather nasty debate that went on over the study in all media types.

    The bottom line is that most of your criticisms of the study are overblown. Again, it pains me to have to say this. You’re far stronger on questioning why the editors chose to publish the study before an election, but that strength evaporated due to the verbiage you spent on rather weak attacks on the study’s methodology, reference to a rather obviously biased National Journal article, and playing the George Soros bogeyman card.

  13. David Gorski says:

    I do want to emphasize for our readers that the point of this post is not to discuss the politics of the Iraq war or even to debate the numbers of Iraqi civilians dead, but rather the specific behavior of the Lancet editors, as Lancet is a respected medical scientific journal. That is really the only issue of interest to SBM.

    You must have posted this as I was writing my last comment.

    My response would be: I would tend to agree, except that Dr. Sampson clearly didn’t limit himself to discussing what the editors did. He also strongly criticized the methodology and conclusions of the Lancet study itself. Indeed, in the very first paragraph, he cited the study for “faulty epidemiology” and in the second paragraph stated that his post was about how “editors contribute to fabrication, accepting or refusing to recognize fraud and misinformation.” Whom, then, did he think guilty of fraud and misinformation, if not the study authors? Consistent with that opening broadside, Dr. Sampson then dedicated more verbiage to criticizing the study than he did to discussing the specific behavior of the editors in publishing the study. In my mind, that makes his criticisms of the study more than fair game for discussion. He opened the door; I just walked through it.

  14. nixar says:

    “The point of the article is the inappropriate mixing of politics with scientific medicine. That is not “wingnuttery.” It is calling attention to the politically motivated distortion of medical data resulting from such meddling for non-scientific, non-medical, and openly political purposes. ”
    Mentioning G. Soros the way you did is wingnuttery. It adds nothing to the discussion unless, for example, you could point to instances where Mr Soros was caught unduly interfering with other initiatives he funded. He hasn’t been, despite all the trolling at Fox News or the Weekly Standard.
    As for the timing, I remember hearing the author of the study mentioning this on Democracy Now IIRC, and indeed, they rushed somewhat to produce it in time for the election, because, well, what do you know, war and its consequences are a political issue. And voters should be informed to make an informed choice, duh! Same is true for global warming and many other issues.
    So what we have here is basically your usual shooting of the messenger. I have to ask here, if those people are SOOOO biased because, gasp, they seem to have an irrational dislike for war, what’s your bias here, why are you so fond of those rightwing talking points? I would normally assume good faith on your part, but since your article and the responses you’ve posted do not, I don’t feel I have to.

  15. apteryx says:

    Hey, we’ve all learned from other threads that epidemiological studies are not to be trusted (unless they show that statins prevent cancer … just kidding, guys, stand down!). This one has flaws. The studies with low numbers have flaws too, such as the surveyors being unable to enter neighborhoods that are infested with death squads or being quarantined and destroyed from the air by the U.S. So, I have no position on what the “truth” is, except to say that we will never have an exact accounting.

    However, the fact that these numerous deaths are due to homicide rather than disease certainly does not mean that it is not a fit subject for a medical journal. The profession has previously presumed to investigate plenty of “health issues” that are more political than biological, ranging from “gun violence” to violent video games. As for the desire of involved parties to publish the paper before the American election: I should hope so. As an American voter, I would hate to vote for the incumbents and learn weeks afterward that they had killed and precipitated the deaths of far more innocent foreigners than we had been allowed to believe. The New York Times took the opposite approach of deliberately withholding negative facts about the Bush administration until after the election, to avoid depressing his vote totals, and were rightly excoriated for it.

  16. TimLambert says:

    I would have hoped that the scientific question of how many deaths had occurred as a result of the war was the more interesting one, but if the post is really supposed to be about the conduct of the editors of the Lancet, let me comment on that.

    First:

    The investigators have already admitted their political biases and even admitted that a condition of publication of both Lancet I and II was that they be put on a fast track and be published immediately before the US elections of 2004 and 2006.

    There is an error here. Lancet I was placed on a fast track and the authors did want it published before the election. Lancet II was not. I see nothing wrong with this and I’m surprised that Sampson does. Does he think it is better that voters be ignorant rather than informed? And yes, the investigators were anti-war. For some reason, scientists who study the effects of war generally come to be anti-war. I don’t think that means that journals should refuse to publish their work.

    Second:

    These little numbers games are more than just a show and tell for a skeptical blogger. A simple set of thought experiments, using sixth grade arithmetic, should have given the Lancet editors and reviewers an easy exit strategy from publication.

    This reminds me of the simple thought experiments conducted by HIV/AIDS skeptics, global warming skeptics, ozone hole skeptics etc that “prove” that scientists publishing in peer reviewed journals are obviously wrong and should not have been published. If it’s that simple, why didn’t the experts think of it? The answer, of course, is that they did think of it and had good reasons to dismiss such objections, and previous commenters have given them.

    Finally, if the biases of authors are so important, shouldn’t Sampson have mentioned the bias of Neil Munro, who wrote the National Journal piece he relied on? Munro was and is a strident advocate of the war on Iraq, predicting that the result of the war would be this:

    The painful images of starving Iraqi children will be replaced by alluring Baghdad city lights, smiling wage-earners and Palestinian job seekers.

  17. TimLambert says:

    Harriet Hall asks:

    I wonder, would the Lancet have accepted a study with similar methodological flaws if it had originated in an obscure African country instead of in a political hot spot?

    Mortality in the Democratic Republic of Congo: a nationwide survey The Lancet 2006; 367:44-51

    Summary
    Background

    Commencing in 1998, the war in the Democratic Republic of Congo has been a humanitarian disaster, but has drawn little response from the international community. To document rates and trends in mortality and provide recommendations for political and humanitarian interventions, we did a nationwide mortality survey during April–July, 2004.

    Methods

    We used a stratified three-stage, household-based cluster sampling technique. Of 511 health zones, 49 were excluded because of insecurity, and four were purposely selected to allow historical comparisons. From the remainder, probability of selection was proportional to population size. Geographical distribution and size of cluster determined how households were selected: systematic random or classic proximity sampling. Heads of households were asked about all deaths of household members during January, 2003, to April, 2004.

    Findings

    19 500 households were visited. The national crude mortality rate of 2·1 deaths per 1000 per month (95% CI 1·6–2·6) was 40% higher than the sub-Saharan regional level (1·5), corresponding to 600 000 more deaths than would be expected during the recall period and 38 000 excess deaths per month. Total death toll from the conflict (1998–2004) was estimated to be 3·9 million. Mortality rate was higher in unstable eastern provinces, showing the effect of insecurity. Most deaths were from easily preventable and treatable illnesses rather than violence. Regression analysis suggested that if the effects of violence were removed, all-cause mortality could fall to almost normal rates.

    Interpretation

    The conflict in the Democratic Republic of Congo remains the world’s deadliest humanitarian crisis. To save lives, improvements in security and increased humanitarian assistance are urgently needed.
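    The Findings quoted above can be reproduced with a quick back-of-the-envelope calculation. A minimal sketch, assuming a DRC population of roughly 63 million (the abstract does not state the population figure; this value is chosen because it approximately reproduces the published numbers):

    ```python
    # Excess-mortality arithmetic behind the quoted DRC findings.
    # The population figure is an illustrative assumption.
    population = 63_000_000   # assumed DRC population
    cmr = 2.1                 # crude mortality rate, deaths/1000/month
    baseline = 1.5            # sub-Saharan regional baseline rate
    recall_months = 16        # January 2003 through April 2004

    excess_per_month = (cmr - baseline) / 1000 * population
    excess_total = excess_per_month * recall_months

    print(f"{excess_per_month:,.0f} excess deaths per month")    # ~37,800
    print(f"{excess_total:,.0f} excess deaths in recall period") # ~605,000
    ```

    These figures line up with the abstract’s “38 000 excess deaths per month” and “600 000 more deaths than would be expected during the recall period,” which is all a survey of this kind claims: a rate difference scaled up by population and time.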

    Why hasn’t this Lancet study been criticised the way the ones on Iraq have been? I’m pretty sure the authors of this study oppose the war in the Congo. And the International Rescue Committee funded it and Soros has donated money to the IRC.

  18. joel_grant says:

    “The Iraqi Civilian War Dead Scandal” is that they are.

  19. Harriet Hall says:

    TimLambert,
    Thanks for the Congo reference. It’s reassuring to know Lancet has accepted similar articles about other parts of the world. I’m left wondering whether this study had similar methodological flaws and whether those flaws would have been pointed out by now if it were an area of political interest.

    Most of all, I’m still wondering: why are medical journals doing these studies in the first place? I just can’t envision this as a legitimate medical scientific issue. Shouldn’t those studies be done by political bodies, social scientists, etc.? The Congo article concluded “if the effects of violence were removed, all-cause mortality could fall to almost normal rates.” War is deplorable, but it is not a disease that can be treated by medical science.

    Maybe there is a rationale for publishing these studies in medical journals that I just haven’t thought of. Wouldn’t it be better for journals to stay out of controversial areas like gun control and war? Even when they are objective, they are likely to be accused of political bias.

  20. weing says:

    Big deal, the Lancet published a BS study in order to sway an election. Don’t believe everything you read. Use your brain. If you are against the war and look upon the US as the great satan worthy of world contempt, you will find your biases confirmed. If you are not against it, then your biases will be challenged. You could ask the Pentagon to give you a study showing that only 1000 people died. Doubt the Lancet would publish it. Politics is always trying to trump science.

  21. David Gorski says:

    Most of all, I’m still wondering: why are medical journals doing these studies in the first place? I just can’t envision this as a legitimate medical scientific issue. Shouldn’t those studies be done by political bodies, social scientists, etc.? The Congo article concluded “if the effects of violence were removed, all-cause mortality could fall to almost normal rates.” War is deplorable, but it is not a disease that can be treated by medical science.

    But epidemiology is a medical specialty, and the epidemiology of violent death is a prerequisite for developing policy. Moreover, trauma is a medical specialty as well. Wouldn’t you rather have epidemiologists applying the most rigorous statistics that they can to these sorts of questions, rather than social scientists or, worst of all, political bodies, who would be far more motivated to put a political spin one way or the other on their studies than even the worst that the authors of Lancet I and Lancet II have been accused of?

    Moreover, the epidemiology of violent death is published in medical journals all the time regarding, for instance, murder and assault in urban areas and the factors that predispose to them. No, I must disagree here. There is nothing inappropriate about medicine studying these issues. I would also disagree that we should stay out of politically charged questions. Would you say the same thing about abortion, for instance? Or teen pregnancy? Or drug use? All of these questions have politically contentious policy implications, but no one seems to object to studies about them being published in medical journals. Why the problem with the Iraq study? Because, whether its conclusions were way off or not (and I do note that it is at least within an order of magnitude of the NEJM study), it said something that certain political segments of our nation most definitely did not like at all.

  22. Harriet Hall says:

    I can understand studying the epidemiology of crime and its predisposing factors; I can see how the information learned might be used in prevention efforts. I would feel better about counting war dead if I could envision any way that having a more accurate body count could contribute to preventing future war deaths.

    I think abortion, teen pregnancy and drug use are all legitimate medical concerns. We can learn about the effects on the body, the effectiveness of various methods of prevention and treatment, etc. But science can only provide data; it can’t make value judgments or dictate policy.

    Sometimes the researchers’ prior beliefs determine what they choose to study and how they choose to study it. There’s no way to prevent that, but the more we can avoid even the appearance of bias, the better for science. If authors are asked to disclose their connections with drug companies, maybe the authors of studies on controversial topics should be asked to disclose their political and religious affiliations.

  23. Aaron S. says:

    What the heck was the pre-invasion death rate? I am having trouble getting a good number. I am seeing figures from 5% to 9.7%…

  24. Aaron S. says:

    “What the heck was the pre-invasion death rate? I am having trouble getting a good number. I am seeing figures from 5% to 9.7%…”

    Oops, I meant /1000 (per capita), not %

  25. weing says:

    I don’t believe we can avoid bias. We should recognize it though. Avoiding the appearance of bias sounds like deception, self or otherwise.

  26. Freddy the Pig says:

    I find it interesting that the main contributors to this blog are in such disagreement with each other on this topic. Quite a contrast to the stereotype of “Establishment Groupthink” that the alties like to accuse you of.

  27. David Gorski says:

    Actually, I appear to be in the minority among my co-bloggers on this comment thread, although I can’t help but note that Kimball and Steve, being probably wiser than I, have refrained from weighing in on this particular topic. ;-)

    In any case, we most definitely are not monolithic. In fact, in the couple of weeks prior to the launch of this blog, we had some rather vigorous “back channel” discussions about various aspects of evidence-based medicine, particularly the role of scientific prior probability, as Kimball discussed in his Feb. 8 post. Quite frankly, it really did pain me to have to be so critical of this post, as I’ve always admired Wally’s work and stand against the infiltration of nonscientific medicine that has occurred over the last couple of decades (well, at least since I first discovered him several years ago). However, just because a post was written by one of my co-bloggers or by someone I admire doesn’t mean I’m going to stay quiet when I have a serious disagreement with it. I’m sure Wally will be happy to return the favor some day.

    I wouldn’t have it any other way. The day I think we’re starting to succumb to groupthink is the day I leave this blog.

  28. joel_grant says:

    Dr. Hall writes:

    “I would feel better about counting war dead if I could envision any way that having a more accurate body count could contribute to preventing future war deaths. ”

    If your standard is “envisioning” “any way” that getting an accurate count of civilian dead could help reduce future war deaths, here are a few ways that I can envision.

    But first, although it has been frequently mentioned, I have not seen much discussion about the implications of the fact that the Lancet study looked at mortality rates rather than trying to count bombed, shrapneled and bullet-ridden bodies.

    I assume this means that, for the purposes of the study, a person who dies from a non-war injury or illness because the local hospital has shut down would represent “excess” mortality.

    As would people who die because the infrastructure has been destroyed and there are no working sewers, the water is contaminated, etc.

    This would lead us to suggest that in the future, if you decide to invade another country, try not to destroy its infrastructure in the process.

    If large numbers of civilians are being killed because the attacking forces are dropping lots of bombs in residential neighborhoods, that would suggest that good public health policy would be to avoid dropping bombs in civilian neighborhoods.

    The use of banned chemical weapons in Fallujah would be another example that epidemiological studies might help to avoid.

    Not that we should need medical studies to tell us that it is wrong to throw phosphorus shells into cities, but seeing what this does might make at least a few more of us think twice before doing it again.

    We can all think of ways to reduce civilian carnage can we not? But I think the main benefit of such studies is to let everyone know the consequences of the behaviour.

    The best way to deal with an epidemic of civilian war dead is to not start wars.

    Actually, it seems to me that there are really two main issues in this thread.

    The first is technical: how good is the study?

    The second is moral: is it a good thing to know how many civilians die in an invasion and its subsequent occupation phase? If so, whose job is it to count them?

    Our own government has gone out of its way to not count them. Who, then, has the job?

    Since Lancet has a methodology for estimating excess mortality due to human conflict, they seem like a pretty good fit to me.

    Next, we should have a comprehensive study of the ongoing Iraqi refugee crisis and the effects of the ethnic cleansing – Iraq provides a fertile ground for death, dismemberment, infrastructure damage, refugee crises, etc.

    If we learn from none of this (an unsurprising outcome) we really will have studied this in vain.

  29. skidoo says:

    My takeaway from this blog post and the follow-up comments:

    1. The Lancet studies were seriously flawed, objectively speaking (never mind political bias).

    2. Lancet 1 was aggressively used to influence public policy, despite the fact that it was objectively flawed.

    3. Epidemiology is hard.

    4. The Lancet editor(s) should be taken to the woodshed for their sloppy review. “Well, lots of epidemiological studies are questionable” is no excuse.

    5. Furthermore, the aggressive promotion (e.g. the rush-to-print) of such a flawed study does a great disservice to the credibility of the Lancet specifically and the scientific community in general (whether they allowed themselves to be manipulated or were directly complicit is irrelevant).

    6. This blog post is a mix of scientific examination and rhetorical cynicism. It seems obvious to me that the Soros connection was an example of the latter. Of course Soros’ funding does not invalidate the Lancet studies. Sampson never asserted that. But neither does raising the Soros “boogeyman” invalidate the objective components of Sampson’s analysis.

    7. I’m glad for the politics prohibition on SBS, and while this post may have wandered close to the line separating medical review from political polemics, it didn’t cross it. All of the posts here contain a certain amount of rhetorical flair. Which is a good thing.

    Of course, while I consider myself a scientific skeptic, I’m no scientist. So what do I know? :-)

  30. skidoo says:

    I said:

    I’m glad for the politics prohibition on SBS…

    When what I obviously meant was:

    I’m glad for the politics prohibition on SBM

    Gak!

  31. joel_grant says:

    My takeaway on this subject is that the critics of the study do a worse job as critics than Burnham and Roberts did in the study.

    For instance, the false claim that George Soros funded the study (as if that were relevant anyway) is tackled by John Tirman, who commissioned the study; in fact, Tirman told the National Journal authors the facts, which they chose to ignore. Writes Tirman:

    “Let me convey what I thought was a simple and unremarkable fact I told Munro in an interview in November and one of the Lancet authors emailed Cannon the details of how the survey was funded. My center at MIT used internal funds to underwrite the survey. More than six months after the survey was commissioned, the Open Society Institute, the charitable foundation begun by Soros, provided a grant to support public education efforts of the issue. We used that to pay for some travel for lectures, a web site, and so on.

    OSI, much less Soros himself (who likely was not even aware of this small grant), had nothing to do with the origination, conduct, or results of the survey. The researchers and authors did not know OSI, among other donors, had contributed. And we had hoped the survey’s findings would appear earlier in the year but were impeded by the violence in Iraq. All of this was told repeatedly to Munro and Cannon, but they chose to falsify the story. Charges of political timing were especially ludicrous, because we started more than a year before the 2006 election and tried to do the survey as quickly as possible. It was published when the data were ready.”

    There must be plenty of room for criticism of any such study. How do we know how many people died in Rwanda in the ’90s? What about the ongoing genocide in Darfur? The horrid depredations in Cambodia’s killing fields?

    I should think we would all want the best evidence to judge the consequences of human choices on the lives and deaths of our fellow humans.

    But it seems that only when we try to “count the dead civilians” in Iraq that we encounter people who urge us to look away.

  32. weing says:

    “More than six months after the survey was commissioned, the Open Society Institute, the charitable foundation begun by Soros, provided a grant to support public education efforts of the issue.”

    What issue?

  33. Harriet Hall says:

    I don’t think anyone has urged us to “look away” from the tragedies of war. Shouldn’t we oppose war just as strongly when 100 people die as when 100,000 die? Even one death should be too many. This body counting seems to me to diminish the human tragedy by suggesting that some number higher than one is too many.

  34. David Gorski says:

    Why and how does body counting “diminish the tragedy” in this case? Did finding out that the death toll of the World Trade Center attacks, for example, was much lower than the initial estimates of 10,000 or more diminish that tragedy? No. Would it have increased the tragedy if the initial estimates of 10,000 or more had turned out to be correct? Maybe.

    In any case, as crass and uncomfortable as it may be, “body counts” do matter. Without denigrating the tragedy of one death, there’s a big difference if 600,000 deaths can be attributed to the invasion versus 60,000. Both are horrific, but orders of magnitude do matter. From my perspective, wanting an estimate of the death toll in Iraq that is as accurate as possible in no way diminishes the tragedy of the deaths that have occurred, be they civilians or our own soldiers. Moreover, accurate estimates allow us to know the true human cost of our actions. True, numbers can numb, but when they are shockingly higher than the optimistic pre-war estimates (and even the NEJM study’s estimate is much higher than that) it teaches us a lesson. Joel is also correct when he points out that the primary reason that so many didn’t want to hear the results of the Lancet studies is that, if those studies were accurate, the U.S. was responsible for a much greater death toll than any booster of the war predicted or wanted to admit.

  35. joel_grant says:

    Dr. Hall, I hear what you are saying. It reminds me of the Dylan Thomas poem “A Refusal to Mourn the Death, by Fire, of a Child in London”. The poet concludes: “After the first death, there is no other.”

    In either case, my remark was directed at the “National Journal” and their supporters, not at you.

    But since you have responded as you did, let me remind you of what you yourself wrote, in this string:

    “I would feel better about counting war dead if I could envision any way that having a more accurate body count could contribute to preventing future war deaths. ”

    I took “preventing” to mean ‘reduce’ rather than ‘eliminate’. If you mean ‘eliminate’ I heartily concur.

    And I am still optimistic enough about the human condition to believe that, individually and collectively, we make better choices when we can contemplate the consequences of past behavior. (and I exclude psychopaths and others who act without respect to consequences)

    I suspect we can all agree that when studies are done, we want them done as well as possible. And we want the critics to be diligent in their job.

    I am suggesting the “National Journal” did a poorer job as a critic than “Lancet” did with its study and, apparently, the study’s referees did to begin with.

    For sure, the NJ writers had the luxury of operating from their armchairs while the folks who actually did the interviews for the study were putting themselves at risk.

    A fitting metaphor for the whole situation, IMO.

  36. Harriet Hall says:

    Why and how does body counting “diminish the tragedy”? In one sense it doesn’t: of course 2 deaths are worse than 1. My point was that it is tragedy enough when one individual dies. Fixating on the higher numbers makes it seem that one death is not enough. Stopping wars is just as valid a goal when one person dies as when 100,000 die.

    Of course it’s important to have accurate knowledge. Unfortunately, it’s difficult to do a truly objective body count because the counters are never entirely apolitical. And media reports use the numbers to support political agendas, and they tend to distort the meaning of the numbers by not putting them in perspective. If 4 people would have died without a war and 10 people died with the war, it’s more accurate to say the war caused 6 deaths, but the media are more likely to blame it for all 10.

    If I knew of a single instance when knowing exact numbers helped prevent future wars, I’d be more comfortable including body counts under the rubric of preventive medicine.

  37. David Gorski says:

    Of course it’s important to have accurate knowledge. Unfortunately, it’s difficult to do a truly objective body count because the counters are never entirely apolitical. And media reports use the numbers to support political agendas, and they tend to distort the meaning of the numbers by not putting them in perspective. If 4 people would have died without a war and 10 people died with the war, it’s more accurate to say the war caused 6 deaths, but the media are more likely to blame it for all 10.

    So what?

    All sorts of science can be used to support an agenda, political or otherwise. Studies about sexually transmitted diseases, risk factors for obesity and death, whether abortion predisposes to breast cancer (one that has been particularly abused recently). That’s not a reason to shy away from it. Why is this sort of epidemiological study not to be done, while epidemiological studies of other politically contentious issues are?

  38. Harriet Hall says:

    When something is published in a medical journal, I expect it to have some connection, however remote, to understanding and improving human health. I just can’t make that connection for body counts in war. I’ll admit that that may be just my personal lack of vision. Like I said, if I knew of a single instance when knowing exact numbers helped prevent future wars, I’d feel much better about it.

  39. MarkH says:

    I have to express my disappointment as well with this article. It reads like much of the crankery that I sort through in my RSS feed from day to day. From the anti-Soros conspiracy mongering to pretty outrageous allegations such as this:

    1) There are simple ways for reviewers and editors to suspect inaccuracy and fabrication…in this case initially to inaccuracy. The authors admitted their intent publicly post hoc. Errors and fabrication of various forms are rife in pseudoscience and sectarian medicine articles. Watch out.

    Fabrication? Really? Are you sure that is a claim you would like to make? It’s not a light one to just throw about. That and the falsification claims based on unwillingness to share raw data are bothersome. This to me is a very legitimate concern about researcher safety. From what I remember they ultimately did share their data with other researchers but wouldn’t make them fully public, is this incorrect?

    The conspiracy theories combined with your “armchair” math are disturbing. This is the kind of nonsensical analysis I would expect SBM to be challenging, not promoting. Cherry-picking inappropriate measures of casualties from other conflicts is also not particularly reassuring as other commenters have rightly pointed out.

    This article does not meet my standards for scientific analysis or writing.

  40. trrll says:

    It is also a distortion to mischaracterize the study as a “body count.” In fact, it is an epidemiological study examining the overall health consequences of the conflict. And who is more qualified to evaluate an epidemiological study of health than peer reviewers of a medical journal?

    I would expect the medical value of understanding the health consequences of conflict to be obvious. Any kind of humanitarian assistance demands some sort of knowledge of the magnitude of the problem. And when humanitarian goals such as liberating the populace from a repressive regime are cited as a justification for invasion (as it was in this case and likely will be in the future), any kind of rational decision making must take into account the humanitarian costs as well as the benefits. I was amazed to hear Ms Hall arguing that “Stopping wars is just as valid a goal when one person dies as when 100,000 die.” I’m used to hearing this sort of “who cares about statistics, one person with a severe side effect is too many” argument from vaccination opponents, but it seems very out of place on a web site called “science based medicine.”

  41. Pingback: Deltoid
