
Iraq civilian deaths II: Summing up

Call me naive, but I did not expect the volume or the emotional depth of the responses to the Iraqi civilian death post. I thought many would respond to the new NEJMed survey as I did: wondering about the validity of the previous surveys and recognizing that they have a validity problem. And that there is a question about what gets printed in major journals, and from what unexpected sources. I did not mean that studies such as Lancet II should not be printed at all. I stated that it should not have been printed in a first-line journal for the general medical public. It could have been printed in a second- or third-line specialty journal, where its methods and conclusions could have been debated and reforms shaped by colleagues. I find that hints and clues to errors in pseudoscientific reports mostly lie in the methods section. But questioning a study’s validity can involve more than just a knowledge of the methods and recalculation of the data. Because the “CAM” movement has redefined the borders of the playing field as well as the rules of the game, the entire environment of the scientific system surrounding implausible or unusual reports has to be examined – this goes beyond the limits of methods, and includes motivations, funding, characters, and subtexts.

In developing criteria for estimating plausibility (prior probability), the most important criterion of course is consistency and consilience with established knowledge. But there are more. One can increase the effectiveness of investigation by using indicators not presently included in “Evidence Based Medicine” or in science, but that are used in criminology (previous arrests, convictions), business (trustworthiness, profit vs. loss), and ideology and politics (elevation of the trivial, manipulation of the system; example: sectarian medicine).

I will elaborate on a scientific criteria scale later. Pertinent to the Lancet papers controversy, and as asked by Harriet Hall, one may have to look at personal motivations other than just direct economic conflicts. These include previous writings (especially on philosophical and ideological subjects), funding sources (the score or more of private and public sources directed to the support of sectarian medicine), and now even political stances that can affect experimental design, timing, expression, and interpretation. Whether we like it or not, the public and legal status of sectarian systems depends more on these non-scientific qualities than on scientific ones. Much of our critique of the NCCAM and the academic sectarian movements already involves such extra-data and extra-scientific material. And much of that is more useful practically than the scientific stuff.

Weing, an early commenter, gave a cynical summary of the Lancet I/II discussion, stating: don’t believe everything you read, and politics trumps science – big deal. Well, for some it is a big deal, and many missed the point of the post, which was to illuminate how ideology can affect science in unrecognized ways. The point was not that I had better ways of estimating death rates from 10,000 miles away, but that there are times when one can use simple head methods to estimate ranges (plausibility) when faced with surprising results. One can suspect when to look further into sources of error and, more important, sources of potential bias. This is when ideological and political biases have to be examined, not conveniently ignored. The unique nature of “CAM” success has been in changing attitudes and ideological imagery, not merely in falsifying or manipulating data.

Lancet I and II were a real-time example of the need for estimating plausibility when faced with both an extraordinary result and a paucity of basic (scientific) data on which to base an estimate. Especially when the biases and errors come from “our side.”

The Lancet I and II papers exemplify that prior belief molds what we study, how we study, and the conclusions we draw. And when part of this big deal counters something we have strong feelings about, we sometimes handle the situation by mustering arguments from an army of angles, so as to reject the message and maintain original beliefs. So add to weing’s Big Deal the principle of heuristics, the angles of biases, and the theory of cognitive dissonance.

All the above is fancy language for an appeal to heed clues to systematic bias. My commentary on Lancet I and II was not, nor was it intended to be, a scientific counter-argument, as some commenters complained. The post was to illustrate keys to the recognition of bias as a source of error.

Epidemiological studies have well-recognized problems of sampling and sampling error that lead to erroneous results. A notable example of this was Wertheimer’s epidemiological study resulting in a correlation between living distance from power lines and childhood leukemia. I could not find a chink in that study except for the fact that the field strengths were calculated instead of measured. The physics of EM field strength is quite complex and does not lend itself to easy modeling (field strength does not follow a strict inverse square law and depends on the closeness of the two lines and other factors). The time of exposure had to be estimated, and on it went. All the physics was discarded as “theoretical” by epidemiologists and a community of people who feared power lines and other sources of EMFs. Pacific Gas and Electric declined to write and distribute a scientific pamphlet dispelling fears because that would be bad public relations. Instead, PG&E recommended a “safe and doable prevention” tack. The US spent an estimated $14 billion relocating lines, schools, and offices. Home owners lost millions in assessed valuation. All the while, the authors maintained their methods were valid and their calculations were accurate. It took more than 15 further surveys over a ten-year span to amass enough epidemiologic information to diminish the association to credible unlikelihood. Yet many still fear carcinogenic effects of power lines, even after the man who wrote the first fear-based book was found to have a brother who owned an EMF-measuring-device company.

Another example of the entry of ideology into medical studies was the 1980s episode in which a Stanford sociology grad student studying the effects of mandatory abortion policy in China became involved in saving women’s pregnancies as a matter of personal (and religious?) conviction. He was taken off the study and dismissed from the department. Although there are differences between that episode and the Iraq death count studies, one can also see a difference in the reaction to the knowledge of what happened. Academic silence pervades the Iraq study problem. Even the accompanying editorial to the third (NEJMed) study made no mention of the biases covered in the National Journal article. And, by the way, I had never heard of the NJ before this, and on reviewing a few comments on it, have found it is considered a non-controversial, middle-of-the-road periodical in Washington DC.

I point out these examples because we have examined them in other areas of ideological bias interfering with the doing of science. We have done the same with government commissions and scientific panels (Institute of Medicine), as well as scientific reports (deaths from hospital infections, prayer in cardiac units, etc.).

So we are left with a sum of individual findings that are clues to bias in the Lancet II death statistics:

1. A calculated estimate amounting to 1 death per 40 people in a population of 25 million, in an area the size of California, over 3 1/2 years – an extraordinary number;
2. use of a method used for other surveys but not previously used in a war zone;
3. use of surveyors contracted from local populations, often hostile to the US/UK;
4. sample verification by those native surveyors, not by the primary investigators;
5. refusal of the authors to release raw data to the public;
6. funding from separate entities, one a known supporter of socially disruptive, anti-US organizations, through an associate not involved in the study but who felt the $40,000+ could be used in the study because it was related to “the issue” (unspecified);
7. a separate similar study with results in the range of one-tenth that of the one in question, and a more recent one showing results one-fifth of the one in question;
8. publication of both Lancet studies I and II within a month of a US national election;
9. the journal’s editor known for highly emotionally driven anti-US and anti-UK administration speech, and seen speaking on video.

Anyone seeing this number of actions suggesting bias or lack of controls on the data in an article on homeopathy would immediately recognize them and consider them significant and pertinent.
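The scale in point 1 can be checked with simple back-of-the-envelope arithmetic. Here is a minimal sketch in Python, using only the round numbers quoted above (1 death per 40 people, a population of roughly 25 million, about 3.5 years) rather than the study’s own data:

```python
# Back-of-envelope check of the scale implied by the figure quoted above.
# These are the round numbers from the post, not the study's own data.

population = 25_000_000      # approximate Iraqi population
deaths_per_capita = 1 / 40   # "1 death per 40 people"
years = 3.5                  # approximate span covered by the survey

excess_deaths = population * deaths_per_capita
rate_per_1000_per_year = excess_deaths / population / years * 1000

print(f"implied excess deaths: {excess_deaths:,.0f}")
print(f"implied excess rate: {rate_per_1000_per_year:.1f} per 1,000 per year")
# implied excess deaths: 625,000
# implied excess rate: 7.1 per 1,000 per year
```

The implied excess rate can then be compared against estimates of Iraq’s pre-war baseline mortality, which is exactly the kind of quick plausibility check the post advocates.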

Add to all that the relative silence of the academic and editorial communities (an “anechoic effect”) even in the NEJMed editorial after the third study. One can feel the holding of breaths and the hoping that it will all just go away. One commenter noted that a Google search turned up hundreds of references to the matter, opposing my noting of the silence that followed publishing of the controversy. I referred to the silence in the medical and journal communities – a Pubmed search turned up fewer than 30 refs, and only 3 that seemed to refer directly to the Lancet II paper.

Although one cannot remove all politics from all medical science, especially in public health, one can strive to recognize clues to its existence and its distorting effects. Perhaps we can also strive to overcome our own biases and see defects in our thinking and reasoning even when it hurts.

Posted in: Medical Ethics, Politics and Regulation, Public Health


24 thoughts on “Iraq civilian deaths II: Summing up”

  1. Michelle B says:

    A skill as valuable as being able to ask for/identify ample evidence, and not just meekly accept fluffy padding from bias and misconceptions, especially for extraordinary claims, is a skill not to be wasted. Keep it sharp, and it will be at hand for all situations, including the ones close to one’s heart.

  2. TimLambert says:

    It is misleading to just describe the National Journal as middle of the road when the author of the article attacking the Lancet studies (Neil Munro) was a strident advocate of the Iraq war. I comment on Munro’s behaviour here.

    To correct your findings:

    “use of a method used for other surveys but not previously used in a war zone”. This is untrue. Similar methods have been used in Kosovo, the Congo and Darfur.

    “use of surveyors contracted from local populations”. But this is true of any survey done in Iraq, including the NEJM you seem to like better.

    “sample verification by those native surveyors, not by the primary investigators” Not so. Riyadh Lafta was one of the primary investigators.

    “refusal of the authors to release raw data to the public”. This is the case in any medical study – you must protect the privacy of the subjects. They did release the data, but they first had to remove details that would identify the participants.

    “funding from separate entities, one a known supporter of socially disruptive, anti-US organizations” Soros did not fund the study. And while Soros is anti-Bush, that doesn’t make him “anti-US”. Unless you think 70% of the population of the US is anti-US.

    “a separate similar study with results in the range of one-tenth that of the one in question” This is just untrue.

    “a more recent one showing results 1/5th of the one in question”. The NEJM study came up with 1/4 the number of violent deaths.

    “Perhaps we can also strive to overcome our own biases and see defects in our thinking and reasoning even when it hurts.”

    That would be nice.

  3. BlazingDragon says:

    TimLambert, it’s closer to 80% of the population now (Bush’s latest approval ratings are down to 19% in one recent poll).

    Dr. Sampson seems to have latched on to the Lancet studies as being flawed and is trying hard to rationalize why they are flawed, even to distorting reality. He should really follow his own advice on looking out for biases.

    This does not mean the Lancet studies are perfect (or even good), but Dr. Sampson seems to be taking this personally and that’s never good when interpreting sketchy data.

    One other point that clouds this issue: The number of deaths in Iraq, whether you believe the Lancet studies or estimates that are far lower, are still way too damned high. It would have been better if the US had never invaded Iraq. We can say for certain that an awful lot of people have died that would otherwise still be alive today.

    It has been proven for certain that all of the reasons for invading Iraq were, at best, incredibly biased mis-reads of existing data on Iraq’s WMDs (and many of them were outright lies). Too bad the Bush administration didn’t follow Dr. Sampson’s excellent list of ways to root out bias before interpreting intelligence with regard to Iraq. Then we could get back to talking about the point of this blog, which would be much better served taking down the idiots who promote woo-based quack “cures” and various health-related charlatans.

  4. David Gorski says:

    One commenter noted that a Google search turned up hundreds of references to the matter, opposing my noting of the silence that followed publishing of the controversy. I referred to the silence in the medical and journal communities – a Pubmed search turned up fewer than 30 refs, and only 3 that seemed to refer directly to the Lancet II paper.

    Aw, come on, Wally, you could at least do me the solid of naming me. ;-) After all, we’re on the same team, and you did Weing! You don’t even know him. But seriously, I’m sorry, but that’s not what you said in the first post. Your words from the first post:

    There has been remarkably little commentary on The Lancet editors’ actions, even in the US. This has probably been in deference to current political temperaments, frustrations with the Iraq war, and the current unpopular presidency.

    I apologize if I missed the context, but I did not note any qualification restricting your observation to the medical literature even in the context of the original post. Also, I can’t help but point out that it was tens of thousands of references to the study that I found, not hundreds (over 55,000, to be precise). Be that as it may, I will not dispute your intent; I will only point out that that is not the impression I got from what you actually wrote. Besides, I am much more interested in a couple of your assertions for which you have not as yet provided compelling evidence. From the first post, here is your introduction of the Lancet studies:

    Within that story is yet another – how editors contribute to fabrication, accepting or refusing to recognize fraud and misinformation. Yet another is that one cannot change some opinions, even after showing that the original information on which they were based was false. Sound familiar? We’ve been illustrating the point in classes for years.

    That sure was an ominous introduction. Later in that first post, you said:

    More incriminating is the fact that investigators have declined to open their original data to inspection and for confirmation, even to confirm that the work was done as stated. This violation of scientific ethics they explained away by claiming possible threats to the lives of the data gatherers and interviewees. Evidence for falsification mounted.

    Now, in this post you said:

    My commentary on Lancet I and II was not, nor was it intended to be, a scientific counter-argument, as some commenters complained. The post was to illustrate keys to the recognition of bias as a source of error.

    I have to say, I was rather floored by this admission. When in the first post I saw you accuse the authors of the study of “fraud” and “falsification” and explicitly state that “evidence for falsification mounted,” I rather expected some serious scientific counterarguments to the study that showed that there was indeed falsification (or at least “evidence for falsification” mounting). After all, fraud and fabrication are very serious charges against a scientist. Charges that serious demand equally serious scientific counterarguments and concrete evidence of fraud, and yet here you seem to be saying that that was not your purpose. If that was the case, then why make the charge if your purpose was simply to show how bias can lead to error and the acceptance of error by editors?

    I alluded to this before in a less pointed fashion in the comments of the first post, but in light of your second post it’s worth reiterating now: Failing to provide serious and compelling scientific counterarguments that show that Lancet I and II rise to the level of fraud or fabrication is a serious shortcoming in your two posts and undermines your criticisms of the editors for the timing of publishing the studies. Had you restricted yourself to the question of whether these studies should have been published before elections, I would probably have mostly agreed with your points, but you went beyond that. Certainly I agree that the known bias of researchers and/or editors should make us cast a very skeptical eye on this study, and I know you feel passionately about perceived political manipulation of science. So do we all. But you do your argument no favors by making such inflammatory charges and then failing to back them up. If you think Lancet I and II are bad science, fine, that’s one thing. Make your arguments. If you think that the Lancet editors were politically biased for publishing these studies when they did, that’s fine, too. But in my mind charging or even insinuating fraud or fabrication requires a much more stringent level of critique and much more concrete evidence than you have provided.

    Also, your arguments on “prior probability” are not enough. After all, even if your “armchair estimates” were perfect, it could simply indicate bad science. It does not necessarily indicate fraud. Moreover, there’s a big difference between using prior probability to assess this study and, for instance, studies of homeopathy, which are what Dr. Atwood was primarily discussing in the context of prior probability. Even if not very plausible (at least in your view), it is still physically and scientifically possible that the estimates from Lancet I/II are accurate. The claims of homeopathy are so far beyond the pale that argument from scientific prior probability is far more compelling when applied to this pseudoscience than it is for a study like Lancet I/II. Indeed, in the case of Lancet I/II, even by your estimate we’re discussing one order of magnitude’s difference between the “true” level and the results reported. In homeopathy, it’s many, many, many orders of magnitude between what chemistry and physics tell us to be possible and what homeopaths claim it can do. Can you honestly say that the level of implausibility of the Lancet I/II study results rises to anywhere near that of homeopathy, reiki, acupuncture, or energy medicine? To reach the level of implausibility of homeopathy, the Lancet study results would have had to find an excess mortality rate greater than 100% of the pre-war population.

    Another aspect of your two posts that really, really bothers me is your continued repetition in both posts of the insinuation that somehow the investigators’ reluctance to share the raw data is a definite sign of fraud or falsification or that the study is invalid. In some cases such reluctance to share can indeed be a sign of fraud, as you alluded to in the first post, but in this case, as Tim pointed out, research ethics and privacy laws in the US demand that identifying information be scrubbed from raw data before it can be released, making the issue nowhere near as clear-cut as you present it. Indeed, I can say from my experience in clinical research to date that, before they could share their data, the Lancet investigators would have had to submit a detailed addendum to their research protocol to their institution’s IRB for approval specifically requesting permission to share the data with investigators from another institution and explaining in detail what the other investigators would be doing with that information. It’s not enough just to say you’re going to “share the data.” Similarly, the investigators with whom the data is to be shared would have to submit a protocol to their institution’s IRB for approval. Alternatively, if the Lancet investigators were to try to propose to make the data freely available in some form to any researcher who requested it, the approval process would likely be even more onerous. They would again have to write a very detailed protocol about how the data would be stored, who could access it and how, and how they plan to protect the confidentiality of the study subjects. In some cases, if the IRB is really stringent, it might require that all research subjects be required to sign an updated informed consent form to reflect the new sharing arrangement–a requirement clearly not feasible in a war zone like Iraq.
In any case, because of concern about the quite reasonable fears of danger to study participants and on-the-ground researchers, no IRB worth its salt would approve the sharing of the raw data without at minimum the de-identification of the data. Believe me, it is very much a nontrivial matter for investigators to share raw epidemiological data with other researchers and could easily take many months at the minimum to accomplish–or even longer or never if there is a particularly tough or picky IRB overseeing the study.

    And that’s all assuming that there is money to fund “fixing” the data to make it legally and ethically acceptable to share. Unless data sharing was incorporated into the original IRB-approved protocol, that’s unlikely.

    Once again, I’m not a great fan of the Lancet studies by any means. They are almost certainly significant overestimates, and the NEJM study is probably much closer to the “truth,” even though it might be an overestimate as well. I simply argue that you have not made an adequately convincing case that there was fraud or fabrication involved, despite having claimed that more than once. Finally, this most recent post appears to consist primarily of concluding the study is wrong because of bias of the researchers. Certainly history and evidence of bias in investigators are important considerations for the evaluation of a study, but in and of themselves they are not reason enough to dismiss it, particularly since, as Tim shows, most of your complaints about the study methodology itself appear to be in error or are simply not compelling. If it were the case that bias, funding, and previous history trump all, then I would be obligated to dismiss all pharmaceutical-funded research out of hand. Similarly, I would be obligated to dismiss out of hand studies on vaccine safety by Paul Offit or Eric Fombonne, both of whom are passionate advocates of vaccination and defenders of vaccines against antivaccination misinformation. Clearly I don’t do that. I do keep their viewpoint in mind and look very carefully at their methodology, which, as far as I’ve been able to see, has always been sound, just as I do for studies of so-called “CAM” done by CAM advocates, whose methodology, as you are all too familiar and have pointed out time and time again, often is not.

  5. Aaron S. says:

    OK, pretty much agree with Gorski on this. This seems like taking good criticism too far. I don’t see any “fraud” or the like anywhere.

  6. Oldfart says:

    It seems, Dr. Sampson, you have a couple of intelligent, well-educated, well-read challengers here, one of whom, at least, has charged you with telling outright lies.

    And, as of yet, no answer from you.

    Since I read your blog, as a layman, for clues about real scientific evidence and since you seem, by your silence, to have a political axe to grind, do I have to delete your blog as politically biased?

    In my poor layman’s estimation, the Lancet report totals were as completely out of line with reality as the Bush administration’s reasons for attacking Iraq in the first place. And I agree on the suspicious timing of the publication and could hear the screaming and applauding of the extreme and not-so-extreme left which always seems eager to prove the evilness of America and to accept the worst estimations as possible without criticism. I saw that all thru the Vietnam War. That does not mean, however, that I approved of either war nor that I approve of the policies of the forever war neo-cons. Still, when I attack them, I want to be using the best evidence available and if there are legitimate problems with the Lancet reports, then I want to know about it and I don’t want to have to sift thru political bias to get it.

    SO, if you have something to say to these two gentlemen and their very well-written criticisms, please say it.

  7. David Gorski says:

    It seems, Dr. Sampson, you have a couple of intelligent, well-educated, well-read challengers here, one of whom, at least, has charged you with telling outright lies.

    I hope you don’t mean me.

    Let me make this mind-numbingly clear and unequivocal as I possibly can: I did not nor do I charge Wally with lying or any sort of dishonesty.

    Ever.

    Period.

    Obviously, I think his analysis has serious shortcomings in it, and I pointed them out, but I have respected Wally for several years now for his tireless efforts against the infiltration of pseudoscience into medicine and have never seen a reason to consider him anything but scrupulously honest. Indeed, my longstanding respect for Wally coupled with the fact that we are co-bloggers “on the same team,” so to speak, is what made writing my comment above very difficult for me. Truth be told, I almost didn’t post it.

  8. MarkH says:

    I think the failure with this follow-up is to acknowledge your errors in the first article. I think we agree that these include:

    1. Inappropriate allegations of falsification
    2. Conspiracy mongering – esp with regards to Soros and the effect of bias.
    3. Fundamental inaccuracies about the authors’ sharing of data and instances of cherry-picking irrelevant material to challenge their findings.
    4. Armchair math (shudder)

    The first line of this new article fails to set a proper tone. Emotional depth? The arguments were largely not emotional; you were challenged on the facts. Don’t try to hand-wave this away as some kind of hysterical misread by your critics. You made some major mistakes, and the attacks on the article were more than a little cranky. Own up to it.

  9. David Gorski says:

    I would add that Wally also repeated the same fundamental inaccuracies about the authors’ sharing of data in the second article as well. That, perhaps more than anything else about Wally’s two posts, bothered me–so much so that I reluctantly threw my reticence aside. It’s not that I want to publicly argue with Wally. I really don’t. But I just couldn’t stay silent here.

  10. trrll says:

    There is also a severe lack of balance, in that Sampson acclaims the NEJM study as being more authoritative, even though it contains some of the same methodology that he condemns with the Lancet study, e.g. use of surveyors contracted from local populations. And he fails to mention possible reasons why the NEJM study might have been biased downward:

    1. Hesitancy of Iraqi citizens to share information that might be perceived as critical of the government with a government (surveyor). These are people with a history of living under a tyrannical regime, after all; it may be many years before some of them are willing to trust the government again.

    2. The fact that the NEJM study, while larger, was less comprehensive, in that the interviewers were less successful in visiting the most violent districts, and were obliged to substitute an extrapolation from the Iraq Body Count numbers, which are now conceded by nearly everybody to be low by several-fold. There is a chicken-and-egg problem here. One cannot assume that the correction factor for the IBC numbers is the same for the most violent districts as for the less violent districts, but to know the correct correction factors, they would have to survey those districts–which they didn’t dare to do. This is forthrightly acknowledged by the authors of the NEJM study:

    This adjustment involves some uncertainty, since it assumes that completeness of reporting for the Iraq Body Count is similar for Baghdad and other high-mortality provinces

    3. Possible bias on the part of the interviewers. Sampson suggests, without quotes or other evidence, that the interviewers in the Lancet study were biased in favor of an overestimate to make the US or the Iraqi government look bad. The possibility that government-sponsored interviewers might be biased in favor of smaller numbers that reflect better on government efforts to show progress in restoring order was not even mentioned.

    4. Loss of witnesses. Some fraction of deaths will be missed, because everybody who knows of them has died or fled. This presumably will be greatest in the most violent district. This source of error is also acknowledged by the NEJM authors:

    The most serious concern is household dissolution after the death of a household member. Several demographic assessments have suggested that there has been an underreporting of deaths in the IFHS. The application of the growth balance method,7 with the use of the age distribution of deaths in the population obtained from the household roster, indicates that the level of completeness in the reporting of death was 62%. However, this estimation needs to be interpreted with caution, since a basic assumption of the method — a stable population — is violated in Iraq.

    Loss of witnesses is likely to be greater at later times, which would result in a downward bias in a study carried out at a later date.

    The emphasis on the “flaws” of a study that produces results that one does not like, while failing to mention limitations of a study that produces results that are more palatable, is one of the hallmarks of denialists, whether of global warming, evolution, or AIDS. Referring to “prior probability” seems to be an attempt to put a mathematical gloss onto bias. I think this reveals a pitfall in the use of prior probability on controversial topics. While the probability should ultimately converge on the correct value as the amount of data increases, when “prior probability” is assigned after one has knowledge of unpalatable studies, there is a temptation to bias one’s estimate of prior probability downward as a way of offsetting data results that one would like an excuse to ignore.

    I don’t know which study is actually the more accurate. There actually seem to be more methodological problems with the NEJM study than the Lancet study, but it is also a larger study. On the other hand, some other studies, such as the ORB study, seem closer to the Lancet estimate. And all of the new estimates are much higher than the estimates that were being put out by government sources prior to the Lancet studies.

    But larding the commentary on this topic with unsupported implications of “falsification” with respect to the Lancet study has sadly soiled the nascent reputation of what I thought was going to be a scientific blog.

  11. Dacks says:

    With regard to the “political” aspect of the timing of the Lancet articles, here’s a thought experiment: Suppose the investigators were pro-war. Assuming the studies produced the same results, would it have been any more (or less) political for the investigators to hold the results of the studies until after the elections?

    It seems to me that the case for attributing personal bias to the figures from the study is not well founded, and detracts from the original question – which is the most accurate estimate, and what accounts for the variations between the different studies?

  12. Oldfart says:

    from Tim Lambert:

    “use of a method used for other surveys but not previously used in a war zone”. This is untrue. Similar methods have been used in Kosovo, the Congo and Darfur.

    This is untrue. = This is a lie.

    I have the same problem with people who claim that Condoleezza Rice was not lying in the lead-up to the war. Their comment is that she was simply telling untruths…

  13. Pingback: Deltoid
  14. trrll says:

    This is untrue. = This is a lie

    This is untrue.

    As any dictionary will tell you, a lie is an intentionally false statement. A person may speak an untruth through error or ignorance, but to accuse them of lying is to accuse them of being actively dishonest. In my experience, accusing somebody of lying without solid proof of intention (which is very difficult to show) invariably undermines the case of the one who makes the accusation. It benefits the accused, because it shifts the issue from the truth or falsity of the statement (which, particularly in scientific issues, is what is generally most important) to the honesty of the accused, and enables the target of the accusation to rebut by focusing on the injustice of the imputation rather than the facts of the matter (as Condoleezza Rice has been doing lately).

  15. David Gorski says:

    As any dictionary will tell you, a lie is an intentionally false statement. A person may speak an untruth through error or ignorance, but to accuse them of lying is to accuse them of being actively dishonest.

    Exactly.

    “This is untrue” does not equal “this is a lie,” unless the person making the untrue statement knows that the statement is not true when he makes it. Oldfart is just plain incorrect here. Tim was clearly not accusing Dr. Sampson of dishonesty; he was merely pointing out that Dr. Sampson was in error.

  16. Robert Chung says:

    David Gorski wrote:

    Once again, I’m not a great fan of the Lancet studies by any means. They are almost certainly significant overestimates, and the NEJM study is probably much closer to the “truth,” even though it might be an overestimate as well.

    Why would you think this?

  17. Oldfart says:

    Granted that, under normal circumstances, repeating an untruth is not the same as repeating a lie – however, we are talking about experts in their field here. We (laymen) must assume you all have equivalent knowledge. When two experts are testifying and their testimony is diametrically opposed, a layman has to assume then that one of them is either lying or accusing his associate of lying. How often can the evidence support both positions in such a way that one expert interpretation of the same data is an untruth rather than a lie?

    Tim Lambert again:

    “a separate similar study with results in the range of one-tenth that of the one in question” This is just untrue.

    Hmmm. Catch the connotation there?

  18. David Gorski says:

    Here’s an excellent summary of the attacks on the Lancet I/II studies:

    Jousting with the Lancet: More Data, More Debate over Iraqi Deaths

    It answers a lot of questions and is not 100% supportive. It also refutes a number of assertions that Dr. Sampson made. For example, note that the article conclusively shows that, contrary to Dr. Sampson’s assertions, the data was made available to other researchers. However, requests for the data from non-researchers were treated differently, which is entirely appropriate:

    Burnham and Roberts responded to these criticisms by pointing out that their Iraqi mortality data was “made available to academic and scientific groups in April 2007 as was planned from the inception of the study.” The release announcement stated that, due to “major ethical, as well as personal safety concerns, the data we are making available will have no identifiers below Governorate level.”

    The release announcement also stated that data would only “be provided to organizations or groups without publicly stated views that would cause doubt about their objectivity.” Les Roberts told me that condition was added “as a result of mistakes I made with the 2004 study. … I gave the data out to more or less anyone who asked, and two groups included … the neighborhood in Fallujah,” which the study authors had excluded from their calculations, due to the area’s extremely high mortality rates. “As a result, they came up with an estimate twice as high as ours, and it repeatedly got cited in the press as ‘Les Roberts reports more than 200,000 Iraqis have died,’ and that just wasn’t true,” he said. “So, to prevent that from happening again, we thought, if a few academic groups who want to check our analysis and re-run their own models want to look, that would be OK, but we’re not just going to pass it out [to anyone].”

    There seems to be no question that other researchers have had access to the 2006 Lancet study data. So, Munro and Cannon’s criticism is essentially that reporters like them have had limited access. Roberts confirmed that the Lancet study authors treat data requests from non-researchers differently. “What we wanted was to release [the data] to people that had the statistical modeling experience and a little bit of experience in this field,” he explained. When I asked Neil Munro whether he accepted that security concerns kept the Lancet researchers from collecting personal data or releasing non-collated data, he said, “That’s a perfectly coherent response. At the same time, others can judge whether it’s a sufficient response.”

    Clearly, Dr. Sampson is in error when he repeats the claim that the investigators would not release the data.

  19. Robert Chung says:

    David Gorski:

    I submitted a comment yesterday but it appears not to have made it through. I’m still wondering about your opinion above that the IFHS study (described in the NEJM) is “probably much closer to the truth [than the Roberts or Burnham studies], even though it might be an overestimate as well.”

    The one area in which the IFHS appears clearly superior to the Roberts and Burnham studies is sample size, but sample size only addresses sampling error, not bias. Since this blog is named “Science-based,” I’m interested in what you see that I do not, the basis for your opinion, and whether that would be informative about your priors.
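    Chung’s distinction between sampling error and bias is easy to show in simulation: a larger survey produces estimates that cluster much more tightly, but around the same wrong value if there is a systematic undercount. Every number below is illustrative, not drawn from any of the actual surveys.

```python
# Sketch of the point above: a larger sample shrinks sampling error but
# leaves systematic bias untouched. All numbers here are illustrative.
import random
import statistics

random.seed(42)

TRUE_RATE = 10.0   # hypothetical true deaths per 1,000 person-years
BIAS = -3.0        # hypothetical systematic undercount, same units

def survey_estimate(n_clusters: int) -> float:
    """Mean of n_clusters noisy, biased cluster-level observations."""
    draws = [TRUE_RATE + BIAS + random.gauss(0, 4) for _ in range(n_clusters)]
    return statistics.mean(draws)

small_surveys = [survey_estimate(50) for _ in range(1000)]
large_surveys = [survey_estimate(1000) for _ in range(1000)]

# The 1,000-cluster survey is far less variable around its center, but
# both designs center on the biased value (7.0), not the true rate (10.0).
spread_small = statistics.stdev(small_surveys)
spread_large = statistics.stdev(large_surveys)
```

    In other words, replication of a biased design narrows the confidence interval around the wrong number, which is why sample size alone cannot settle which survey is closer to the truth.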

  20. trrll says:

    Granted that, under normal circumstances, repeating an untruth is not the same as repeating a lie – however, we are talking about experts in their field here. We (laymen) must assume you all have equivalent knowledge. When two experts are testifying and their testimony is diametrically opposed, a layman has to assume then that one of them is either lying or accusing his associate of lying.

    That is a bizarre viewpoint, indeed. I can’t think of any field in which one does not encounter honest disagreement between people who are regarded as “experts.” A reputation as an expert does not exempt one from errors of bias, reasoning, or imperfect knowledge.

  21. Robert Chung says:

    Oldfart wrote:

    Granted that, under normal circumstances, repeating an untruth is not the same as repeating a lie – however, we are talking about experts in their field here. We (laymen) must assume you all have equivalent knowledge. When two experts are testifying and their testimony is diametrically opposed, a layman has to assume then that one of them is either lying or accusing his associate of lying.

    No, the layman can also conclude that the two are not equivalent in their expertise and adjust his priors accordingly.

  22. jre says:

    The tone of the discussion here has been far more thoughtful and temperate – as befits the forum – than the food-fights prevailing in many other threads dealing with the same topic. Since that’s the case, I hope that Dr. Sampson will not hesitate (for fear that the mob will turn ugly) to let us know if he has rethought his position.
    Dr. Sampson — like David Gorski, I have nothing but respect and admiration for your body of work. I also do not think for an instant that you would knowingly shave the facts to suit your opinions. So let me ask — and I will accept your answer at face value — do you still believe, after reading the responses here, that there is substantive evidence to make a reasonable person suspect the Johns Hopkins teams (either 2004 or 2006) of “fabrication … fraud and misinformation”?

  23. Aaron S. says:

    “Granted that, under normal circumstances, repeating an untruth is not the same as repeating a lie – however, we are talking about experts in their field here. We (laymen) must assume you all have equivalent knowledge. When two experts are testifying and their testimony is diametrically opposed, a layman has to assume then that one of them is either lying or accusing his associate of lying.”

    Bullshit.

    There is such a thing as honest disagreement and healthy scientific debate. I can’t possibly understand how anyone other than Michael Moore could believe in such a dichotomy.
