
Iraq civilian deaths II: Summing up

Call me naive, but I did not expect the volume or the emotional depth of the responses to the Iraqi civilian death post. I thought many would respond to the new NEJMed survey as I did: wondering about the validity of the previous surveys and recognizing that they have a validity problem, and that there is a question about what gets printed in major journals from unexpected sources. I did not mean that studies such as Lancet II should not be printed at all. I stated that it should not have been printed in a first-line journal for the general medical public. It could have been printed in a second- or third-line specialty journal, where its methods and conclusions could have been debated and reforms shaped by colleagues. I find that hints and clues to errors in pseudoscientific reports lie mostly in the methods section. But questioning a study’s validity can involve more than just knowledge of the methods and recalculation of the data. Because the “CAM” movement has redefined the borders of the playing field as well as the rules of the game, the entire environment of the scientific system surrounding implausible or unusual reports has to be examined. This goes beyond the limits of methods, and includes motivations, funding, characters, and subtexts.

In developing criteria for estimating plausibility (prior probability), the most important criterion, of course, is consistency and consilience with established knowledge. But there are more. One can increase the effectiveness of investigation by using indicators not presently included in “Evidence-Based Medicine” or in science, but that are used in criminology (previous arrests, convictions), business (trustworthiness, profit vs. loss), and ideology and politics (elevation of the trivial, manipulation of the system; example: sectarian medicine).

I will elaborate on a scale of scientific criteria later. Pertinent to the Lancet papers controversy, and as Harriet Hall asked, one may have to look at personal motivations beyond just direct economic conflicts. These include previous writings (especially on philosophical and ideological subjects), funding sources (the score or more of private and public sources directed to the support of sectarian medicine), and now even political stances that can affect experimental design, timing, expression, and interpretation. Whether we like it or not, the public and legal status of sectarian systems depends more on these non-scientific qualities than on scientific ones. Much of our critique of the NCCAM and the academic sectarian movements already involves such extra-data and extra-scientific material. And much of that is more useful practically than the scientific stuff.

Weing, an early commenter, gave a cynical summary of the Lancet I/II discussion, stating: don’t believe everything you read, and politics trumps science – big deal. Well, for some it is a big deal, as many missed the point of the post, which was to illuminate how ideology can affect science in unrecognized ways. The point was not that I had better ways of estimating death rates from 10,000 miles away, but that there are times when one can use simple in-the-head methods to estimate ranges (plausibility) when faced with surprising results. One can suspect when to look further into sources of error and, more important, sources of potential bias. This is when ideological and political biases have to be examined, not conveniently ignored. The unique nature of “CAM” success has been in changing attitudes and ideological imagery, not merely in falsifying or manipulating data.

Lancet I and II were a real-time example of the need for estimating plausibility when faced with both an extraordinary result and a paucity of basic (scientific) data on which to base an estimate, especially when the biases and errors come from “our side.”

The Lancet I and II papers exemplify how prior belief molds what we study, how we study, and the conclusions we draw. And when part of this big deal counters something we have strong feelings about, we sometimes handle the situation by mustering arguments from an army of angles, so as to reject the message and maintain original beliefs. So add to weing’s Big Deal the principle of heuristics, the angles of biases, and the theory of cognitive dissonance.

All the above is fancy language for an appeal to heed clues to systematic bias. My commentary on Lancet I and II was not, nor was it intended to be, a scientific counter-argument, as some commenters complained. The post was meant to illustrate keys to recognizing bias as a source of error.

Epidemiological studies have well-recognized problems of sampling and sampling error that can lead to erroneous results. A notable example was Wertheimer’s epidemiological study that found a correlation between residential distance from power lines and childhood leukemia. I could not find a chink in that study except for the fact that the field strengths were calculated instead of measured. The physics of EM field strength is quite complex and does not lend itself to easy modeling (field strength does not follow a strict inverse-square law and depends on the closeness of the two lines and other factors). The time of exposure had to be estimated, and on it went. All the physics was discarded as “theoretical” by epidemiologists and by a community of people who feared power lines and other sources of EMFs. Pacific Gas and Electric declined to write and distribute a scientific pamphlet dispelling fears because that would be bad public relations. Instead, PG&E recommended a “safe and doable prevention” tack. The US spent an estimated $14 billion relocating lines, schools, and offices. Homeowners lost millions in assessed valuation. All the while, the authors maintained their methods were valid and their calculations accurate. It took more than 15 further surveys over a ten-year span to amass enough epidemiologic information to diminish the association to credible unlikelihood. Yet many still fear carcinogenic effects of power lines, even after the man who wrote the first fear-based book was found to have a brother who owned an EMF measuring-device company.

Another example of the entry of ideology into medical studies was the 1980s episode in which a Stanford sociology graduate student studying the effects of mandatory abortion policy in China became involved in saving women’s pregnancies as a matter of personal (and religious?) conviction. He was taken off the study and dismissed from the department. Although there are differences between that episode and the Iraq death count studies, one can also see a difference in the reaction to the knowledge of what happened. Academic silence pervades the Iraq study problem. Even the editorial accompanying the third (NEJMed) study made no mention of the biases covered in the National Journal article. And, by the way, I had never heard of the NJ before this; on reviewing a few comments on it, I have found it is considered a non-controversial, middle-of-the-road periodical in Washington, DC.

I point out these examples because we have examined them in other areas where ideological bias interferes with the doing of science. We have done the same with government commissions and scientific panels (Institute of Medicine), as well as with scientific reports (deaths from hospital infections, prayer in cardiac units, etc.).

So we are left with a sum of individual findings that are clues to bias in the Lancet II death statistics:

1) A calculated estimate amounting to 1 death per 40 people in a population of 25 million, in an area the size of California, over 3 1/2 years – an extraordinary number;

2) use of a method used for other surveys but not previously used in a war zone;

3) use of surveyors contracted from local populations, often hostile to the US/UK;

4) sample verification by those native surveyors, not by the primary investigators;

5) refusal of the authors to release raw data to the public;

6) funding from separate entities, one a known supporter of socially disruptive, anti-US organizations, through an associate not involved in the study but who felt the $40,000-plus could be used in the study because it was related to “the issue” (unspecified);

7) a separate, similar study with results in the range of one-tenth that of the one in question, and a more recent one showing results one-fifth of the one in question;

8) publication of both Lancet studies I and II within a month of a US national election;

9) the journal’s editor known for highly emotionally driven anti-US and anti-UK administration speech, and seen speaking in video.

Anyone seeing this number of actions suggesting bias or lack of controls on data in an article on homeopathy would immediately recognize them and consider them significant and pertinent.
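The back-of-envelope arithmetic behind point 1, and the rough comparison ratios in point 7, can be sketched in a few lines. This is only an illustration of the "simple in-the-head" plausibility check described above; the population figure, the 1-in-40 ratio, and the one-tenth and one-fifth ratios are the approximate numbers cited in this post, not a substitute for the surveys' actual estimates or confidence intervals:

```python
# Plausibility check using only the rough figures cited in the text above.
population = 25_000_000   # approximate population of Iraq
ratio = 1 / 40            # "1 death per 40 people"
years = 3.5               # survey period

implied_deaths = population * ratio
deaths_per_year = implied_deaths / years

print(f"Implied excess deaths: {implied_deaths:,.0f}")      # 625,000
print(f"Per year over {years} years: {deaths_per_year:,.0f}")

# Rough comparison with the other surveys mentioned in point 7:
# one in the range of one-tenth of this figure, a more recent one one-fifth.
print(f"~1/10 comparison: {implied_deaths / 10:,.0f}")
print(f"~1/5 comparison:  {implied_deaths / 5:,.0f}")
```

The point of such arithmetic is not precision but scale: turning "1 per 40" into an absolute count of roughly 625,000 makes clear how far the estimate sits from the other surveys' ranges, which is exactly the kind of discrepancy that should prompt a closer look at methods and bias.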

Add to all that the relative silence of the academic and editorial communities (an “anechoic effect”), even in the NEJMed editorial after the third study. One can feel the holding of breaths and the hope that it will all just go away. One commenter noted that a Google search turned up hundreds of references to the matter, contradicting my observation of the silence that followed publication of the controversy. I was referring to silence in the medical and journal communities: a PubMed search turned up fewer than 30 references, and only 3 that seemed to refer directly to the Lancet II paper.

Although one cannot remove all politics from all medical science, especially in public health, one can strive to recognize clues to its existence and its distorting effects. Perhaps we can also strive to overcome our own biases and see defects in our thinking and reasoning even when it hurts.

Posted in: Medical Ethics, Politics and Regulation, Public Health
