Corporate pharma ethics and you

Although I’m one of the few non-clinicians writing here at SBM, I think about clinical trials a great deal – especially this week.

First, our colleague Dr. David Gorski posted a superb and highly commented analysis of The Atlantic story by David H. Freedman about the work of John Ioannidis – more accurately, of Freedman’s misinterpretation of Ioannidis’s work – and of the discussion in the comments. While too rich to distill to one line, Dr. Gorski’s post struck me in that we who study the scientific basis of medicine actually change our minds when new data become available. That is a Good Thing – I want my physician to guide my care based on the latest data that challenge or disprove previously held assumptions. However, this concept is not well appreciated in a society that speaks in absolutes (broadly, not just with regard to medicine), expecting benefits with no assumption of risk or sacrifice in reaping them. Indeed, the fact that we change our minds, evolving and refining disease prevention and treatment approaches, is how science and medicine move forward.

Then, I had the opportunity to hear an excellent talk on pharmaceutical bioethics by Ross E. McKinney, Jr., MD, Director of the Trent Center for Humanities, Bioethics, and History of Medicine at Duke University School of Medicine. McKinney is a pediatric infectious disease specialist who led and published landmark Phase I and Phase II trials of zidovudine (AZT) in pediatric AIDS patients. While he continues working in this realm, McKinney also studies clinical research ethics, conflicts of interest, and informed consent. I was fascinated and refreshed to hear from an expert who, while describing and citing major ethical lapses in our system of drug development, is also willing to propose solutions and do the hard thinking required to maximize the benefits we derive from pharmaceuticals while minimizing unethical behavior.

From his presentation abstract:

The system the United States uses to develop and approve new drugs and devices is fraught with ethical problems. On the one hand, tremendous strides have been made in the treatment of HIV, cancer, and heart disease. Drug development can work and save human lives. On the other hand, drug companies have repeatedly withheld vital information that directly affects human health. Sins of omission that cost human lives have become part of the cost of doing business. Why have we allowed this situation to evolve, and what can we do to improve ethical behavior on the part of the pharmaceutical and device industry?

(Related: See this Dr. Peter Lipson SBM post on our “tremendous strides” in heart disease.)

Evil drug companies

The primary case for discussion was the well-known Avandia episode, in which GlaxoSmithKline was shown – in a 2007 NEJM meta-analysis by Steven Nissen – to have had knowledge of the increased cardiovascular risk of its PPARγ agonist diabetes drug, rosiglitazone, effects not reported for pioglitazone (Actos), a similar drug from Takeda. He then cited the 2008 Winkelmayer article in Archives of Internal Medicine that retrospectively assessed the risks of the two drugs in more than 28,000 patients over more than 29,000 patient-years and concluded that rosiglitazone was associated with 15% greater mortality and 13% more cases of congestive heart failure than pioglitazone. It was in the public’s best interest that a prospective head-to-head trial of the two drugs be done, and while GSK ultimately tried to launch such a study, patient recruitment was hindered by news of rosiglitazone’s safety problems.

McKinney began by noting that we need to accept the fact that a pharmaceutical company’s primary mission is to produce a return for shareholders by bringing the most effective drugs to market for the largest population whose benefits far outweigh their adverse effects. While sitting there, I also began to think about this concept more broadly: for readers who think that “drug companies” are evil profit-mongers, I encourage you to take a look at the precise stock holdings in the mutual funds of your 401(k) or 403(b) retirement accounts.

These are my words, not Dr. McKinney’s: It’s disingenuous and intellectually lazy to say that all “drug companies” care about is profits when many, many folks – including those objectors who populate the comment threads of this blog and others – benefit financially from the business practices of the industry. Let he who is without sin cast the first stone.

What would YOU do?

What I enjoyed next was that McKinney challenged the audience to declare what they would have done if they were working for the company, and their jobs and the jobs of others depended on the sales of what had become a $3 billion/year drug. He wouldn’t just let us sit passively; for just a brief moment, you had to really think about being in the decision-maker’s shoes. I took a moment during the talk to pull up the Nissen paper and look at the absolute risk of adverse effects instead of the relative numbers. I encourage you right now to go to Table 3 and look at the actual numbers of myocardial infarctions and deaths from cardiovascular causes in control patients versus patients taking rosiglitazone in each of the trials. Yes, the analysis of the data as a whole showed that rosiglitazone carried significant risk, but can you see how easy it might be to convince yourself that there wasn’t really a problem with your drug?
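The relative-versus-absolute distinction is easy to see with a toy calculation. The event counts below are invented purely for illustration; they are not the figures from Nissen’s Table 3, which you should look up yourself:

```python
# Toy example: how a real relative increase can hide behind small absolute numbers.
# All counts here are hypothetical, NOT the actual Nissen meta-analysis data.

def risks(events_drug, n_drug, events_control, n_control):
    """Return (absolute risk on drug, absolute risk on control,
    relative risk, absolute risk difference)."""
    ar_drug = events_drug / n_drug
    ar_control = events_control / n_control
    return ar_drug, ar_control, ar_drug / ar_control, ar_drug - ar_control

# Suppose 30 myocardial infarctions among 7,000 drug patients
# versus 20 among 7,000 controls.
ar_d, ar_c, rr, ard = risks(30, 7000, 20, 7000)
print(f"absolute risk: {ar_d:.3%} (drug) vs {ar_c:.3%} (control)")
print(f"relative risk: {rr:.2f} (a 50% relative increase)")
print(f"absolute difference: {ard:.3%} (about 1 extra MI per 700 patients)")
```

Stare only at the absolute numbers and it is easy to tell yourself the drug is fine; it took pooling many trials and looking at the relative risk to expose the signal.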

In another part of his talk, he challenged us (still as hypothetical company employees) to design a study to test our hypothetical new drug for mild-to-moderate pain and to say whether we thought it best to compare it against aspirin, ibuprofen, codeine, or celecoxib (Celebrex). What’s the right comparator if you want to do the study correctly? Do you want to pit your $200/month drug against pennies-per-dose aspirin or ibuprofen? Do you want to play hardball against the equally expensive Celebrex and risk that your drug might not perform better?

What’s the right study to do in the interest of patients?

What’s the right study to do in the interest of your continued employment?


McKinney also spent time talking about the need for stronger disincentives for pharma management to behave unethically. The $2.4 billion that GSK had to set aside for Avandia litigation may not be a large enough penalty. For a drug that had such a huge market, this might simply be viewed as a cost of doing business. Recent legislation to reward inside whistle-blowers personally might increase revelations of wrongdoing similar to this week’s award, also related to GSK, to a drug manufacturing quality manager.

Finally, McKinney also spoke of the unavoidable conflicts of interest faced by academic investigators conducting industry-sponsored clinical trials – again reminding the uninitiated in the audience that the NIH funds a vanishingly small number of clinical trials and that Pharma’s total clinical trial expenditures are roughly twice the entire NIH budget.

Caring too much can also be a COI

McKinney noted that conflicts of interest are not necessarily nefarious or driven by money. As a physician who treats infants and kids with HIV/AIDS, McKinney stated that he has a conflict of interest simply in wanting a new drug to work for his patients. Trying to keep kids from suffering is a strong motivator. In fact, the desire is so strong that if an investigator is not blinded, bias may creep in on the more subjective variables.

What can we do? We’re only addressing half the job by simply pointing out problems with the system. We have to propose and experiment with solutions. We have to work hard to minimize the introduction of bias into studies. We have to provide strong disincentives for companies to behave unethically. But solutions will have their own costs, which we must also be willing to accept. For example, if fines are levied that drive a major multinational company into bankruptcy, we must accept the loss of its innovation to the collective worldwide drug discovery effort.

The solutions are not easy. The discussions are difficult. It’s just as easy to bleat that doctors don’t care if they kill patients because they take drug company money as it is to say that rainbows and unicorns flow forth from drug company research campuses. Having the discussions, pushing others to evaluate their own ethics, and thinking through tough financial and clinical decisions is grueling. I was delighted to have the opportunity this week to be pushed outside my comfort zone. It should happen more often.

Posted in: Clinical Trials, Pharmaceuticals


19 thoughts on “Corporate pharma ethics and you”

  1. Dr Benway says:

    Sometimes speakers show this slide: “Conflicts of interest: none.”

    Because, of course, COI = money, lol.

    Money as a source of bias pales in comparison to narcissism. People often embrace positions to such a degree that the position becomes an extension of their own selves.

    Another source of bias is retaliation. People will oppose positions associated with individuals or groups that provoke their moral outrage.

    Might be fun to encourage a culture wherein more self-aware COI disclosures would be the norm. Hmm… maybe not.

  2. “Indeed, the fact that we change our minds, evolving and refining disease prevention and treatment approaches, is how science and medicine move forward.”

    It’s not just how science and medicine move forward, it’s how any enlightened person should behave. Consider this quote from Ben Franklin’s speech on the last day of the Constitutional convention of 1787:

    “For having lived long, I have experienced many instances of being obliged by better information, or fuller consideration, to change opinions even on important subjects, which I once thought right, but found to be otherwise. It is therefore that the older I grow, the more apt I am to doubt my own judgment, and to pay more respect to the judgment of others. “

  3. pmoran says:

    “Yes, the analysis of the data as a whole showed that rosiglitazone exhibited significant risk but can you see how easy it might be to convince yourself that there wasn’t really a problem with your drug?”

    Agreed, although is there not also the hope/expectation that treatments for type 2 diabetes should REDUCE cardiovascular morbidity?

  4. JMB says:

    McKinney began by noting that we need to accept the fact that a pharmaceutical company’s primary mission is to produce a return for shareholders by bringing the most effective drugs to market for the largest population whose benefits far outweigh their adverse effects.

    I agree that there is a basic conflict of interest between protecting stockholders’ investment and protecting patients from risk. It makes me wonder why we allow publicly traded companies to become healthcare industries. The Republicans had foolish faith that turning medicine into big business would improve the efficiency of medicine. Oddly enough, the healthcare reform designed by the Democrats seems destined to force the big-business model on healthcare. Just look at the number of private practices that have now sold themselves to hospital corporations, and the demise of community hospitals. I always thought the best way to ensure ethical conduct was to be able to sit face to face with the CEO making the decision about your healthcare, i.e., the doctor with their own practice sitting in the room with you, giving you final say in the decision.

    Of course, such a cottage-industry model is not possible for all aspects of the healthcare industry. In regards to the pharmaceutical industry, I think it would be possible to make the research part a cottage industry, whereas production could still follow the big-business model. That would require changing laws and regulations. The researcher would not be in direct contact with the patient, but in the cottage industry the researcher would have to take full responsibility if the original risk-vs.-benefit estimates could not be reproduced. As it is, it seems that a faceless entity in the pharmaceutical company takes the blame for the error. Either the narcissistic or the altruistic tendencies of the researcher/CEO may be a greater incentive to ensure the accuracy of claims, and would not be diminished by the conflicts of interest in a publicly traded corporation.

    I am not implying that the change in laws and regulation would be an easy quick fix, but it ought to be considered. Pharmaceutical research is divided between academics and corporations, with money coming from industry and government grants. The trick is to convince more researchers to stay in academics, and move more of the initial production of drugs for testing into the academic field. That would probably require more government grants, but the tradeoff is that the duration of patents could be reduced, ultimately decreasing the cost of drugs.

  5. What is surprising and interesting to me about Gorski’s odd attack on–excuse me, “analysis of”–my article, and about the effusive and uncritical praise heaped on that analysis by his small but amazingly devoted band of supporters here (apparently including you, Dr. Kroll, given your reference to it in your post here), is how unscientific these responses to my article are. There are, of course, many alternative medicine fans–or as Gorski charmingly likes to call them, “cranks”–who have misunderstood Ioannidis’ work as proving the inferiority of the scientific process to whatever process it is that supposedly supports alternative medicine. It most certainly does not, and my article is clear about that. But anyone who wants to accuse me of misunderstanding or misrepresenting Ioannidis’ work has a little explaining to do–namely, how I managed to supposedly get it so wrong in an article filled with direct quotes from many, many hours of recorded and transcribed one-on-one conversation with Ioannidis, an article that was fact-checked by editors who explicitly consulted Ioannidis on every single quote and statement about him and his work that appears in the article, including my paraphrasing of his work and thinking. What Gorski and his acolytes are essentially saying is that Ioannidis doesn’t understand his own work. Gorski has repeatedly claimed that Ioannidis’ work doesn’t show that anything is wrong with the basic infrastructure of medical research, but rather that Ioannidis’ work just illustrates that, as we all know, some types of studies are weaker than others. Gorski insists that Ioannidis’ work backs up the notion that the astonishingly high percentage of published findings that are later refuted isn’t a sign that something’s wrong with the picture–that’s just the wonderful scientific process doing its job, rooting out that weaker stuff, so hurray for science working exactly the way it’s supposed to!
Anyone who thinks Ioannidis’ work shows there are fundamental problems with the very enterprise of medical research is, according to Gorski, hopelessly misinterpreting his work, or is even a crank. Well, OK, Gorski has a right to that opinion–but it’s not what Ioannidis believes. I know, because I asked him, and he answered me, in person, at great length, in great detail and with perfect clarity. I pass on those answers in the article. That’s what the article is–a report on what Ioannidis believes, not my interpretation of Ioannidis’ work. Yes, I’m a biased journalist who wants to write exciting, controversial stories and to sell books, and who is very capable of getting things wrong, and I’ve always been very open about that and try to do better. But how I could be as far off the mark in representing Ioannidis’ claims and beliefs as Gorski claims I am would have to be baffling to anyone who actually reads the article. Of course, I can’t help noticing that some of Gorski’s acolytes don’t feel much need to know anything about me, the article, or Ioannidis’ work, to confidently express their outrage at my supposedly having gotten it so wrong. I’d have thought that people who like to consider themselves scientific thinkers would have a little more respect for actual evidence and direct observation–as for example, paying close attention to what Ioannidis himself says in interviews rather than going by what Gorski believes Ioannidis’ work is all about. I have tremendous respect for Gorski and his outstanding record of pointing out the ways in which some studies are weaker than others. But let’s be perfectly clear about something. The claim that Ioannidis’ work reveals that there are real problems with the general credibility of medical research comes from Ioannidis. The claim that Ioannidis’ work merely points out weaknesses in some types of studies but vindicates medical research overall comes from Gorski, not Ioannidis. 
Ioannidis of course believes in science, and not in alternatives to it. (Me too!) But unlike Gorski, Ioannidis does not let his frustration over the mostly terrible thinking behind alternative medicine prevent him from recognizing just how serious the problems with medical research are.

  6. takoyaki says:

    Might it be possible to design a trial for a drug such as Avandia that would periodically run appropriate tests on study participants to determine what amount of risk Avandia may pose to each participant’s cardiovascular system?

    Obviously, such a study would have to exclude those with pre-existing CV conditions, family histories of CV problems, and so forth. Of course, it would take more time and cost more money to do these things, which no corporation likes.

    I suspect that finding out sooner rather than later that your potential blockbuster is a real killer that will cost you more in lawsuits than you would make on it, while disappointing, would be a relief.
    With respect to the new painkiller, it’s hard to say what the closest comparison is. If it’s a novel mechanism, I’d probably go against Celebrex and also do risk-factor testing WRT CV problems as discussed above on study participants to determine if my product had a lower risk of adverse CV events than Celebrex. If closer to an opioid, I’d go against codeine, and test against addiction potential. If closer to an NSAID, I’d go with ibuprofen and test against gastrointestinal side effects. IANAD, just trying to cover all the bases.

    Thank you, David, for a thoughtful, insightful and interesting challenge.

  7. David Gorski says:

    Anyone who thinks Ioannidis’ work shows there are fundamental problems with the very enterprise of medical research is, according to Gorski, hopelessly misinterpreting his work, or is even a crank.

    Straw man. We write about the fundamental problems in biomedical research here fairly frequently and have referenced Ioannidis on a number of occasions. The point was that you exaggerated to the point of painting a picture of SBM in your article as so fundamentally unreliable that we might as well all become acupuncturists or reiki masters. Also note that not everyone agrees with Ioannidis’ analysis. Indeed, he’s been accused of “circular” reasoning and overestimating:

    I had forgotten about that criticism; otherwise I would have added a section on that.

    In any case, looking at your body of work, at least as shown on your blog (and in particular the post about Andrew Wakefield that I cited in my post, in which you held up Wakefield as representative of the problems in SBM), I see that you have a definite point of view. That’s fine. However, in your article about Ioannidis, your point of view seems to be to spin SBM as negatively as possible, filtered through Ioannidis’ work. That post about Andrew Wakefield demonstrates your tendency quite well.

    That’s OK. You can interpret Ioannidis any way you like. However, as I’ve pointed out, I like Ioannidis’ work a lot and have read many of his actual scientific papers. I think I know what Ioannidis believes, unless what he writes in his scientific papers is not actually what he believes, something I highly doubt.

    I pass on those answers in the article. That’s what the article is–a report on what Ioannidis believes, not my interpretation of Ioannidis’ work.

    Oh, please. You’re a journalist. You should know better than I that it’s impossible to write a piece like that without imposing your interpretation on it to some extent. You do that through the selection of quotes (seriously, how many hours of interviews do you have, from which you had to select a few key quotes?), the way you describe findings, etc. It’s not a dry, objective profile of Ioannidis.

    But unlike Gorski, Ioannidis does not let his frustration over the mostly terrible thinking behind alternative medicine prevent him from recognizing just how serious the problems with medical research are.

    Actually, I’d put it another way. I insist on one standard of evidence and reject the false dichotomy of “alternative” versus scientific medicine. There’s just medicine, and there are only three types: medicine that has been scientifically validated to work; medicine that has not; and medicine that has been shown scientifically not to work. Nearly all of alt-med falls into the latter two categories. Moreover, as Ioannidis’ work itself shows, the more improbable a hypothesis, the more likely you are to see false positive studies, and alt-med hypotheses are about as improbable as they come.

    As for “denying” problems in SBM, let me remind you of the concluding paragraph of my post:

    To paraphrase Winston Churchill’s famous speech, many forms of medicine have been tried and will be tried in this world of sin and woe. No one, certainly not those of us at SBM, pretends that SBM is perfect or all-wise. Indeed, it has been said (mainly by me) that SBM is the worst form of medicine except all those other forms that have been tried from time to time. I add to this my own little challenge: Got a better system than SBM? Show me! Prove that it’s better! In the meantime, we should be grateful to John Ioannidis for exposing defects and problems with our system while at the same time expressing irritation at people like Freedman for overhyping them.

  8. daedalus2u says:

    Ed Brayton over at Dispatches on Sb had a post on the journalistic ethics of covering the Stewart/Colbert rally in Washington. As I was thinking about it, I was reminded of Feynman’s use of the cargo cult metaphor to describe the way that some people do what they call science, but which Feynman called “cargo cult science”. It was a type of activity where the practitioners went through the motions of doing “science”, but didn’t have the intellectual integrity to actually honestly reconsider all of their assumptions and their basic premises. All CAM clinical trials are examples of cargo cult science, where in the end, when the CAM treatment works only as well as placebo, they conclude “placebos work too!”.

    I think there is a similar type of ethics, “cargo cult ethics”, where you go through the motions but don’t have the intellectual integrity to be honest, even with yourself. There are different degrees of this; the false “balance” of mainstream media in covering controversial topics is an example. It is false and unethical to pretend that evolution and creationism are comparable descriptions of how life changes on Earth.

    It is false and unethical to pretend that antidepressants are no better than placebo. Clinical trials showing marginal effects show marginal effects because those trials are in people with less serious depression. There are essentially no trials of antidepressants tested against placebo in the seriously depressed. The reason there are no such trials is that they would be highly unethical. Depression is a serious, life-threatening disease, one that kills more people each year than HIV does. It is unethical to treat people with serious life-threatening diseases with placebos when there are known effective treatments. The only trials of antidepressants against placebos that can be done are trials where the potential harm of treating a patient with depression of a measurable degree with placebo is very small. Not surprisingly, such trials show marginal effects.

    As someone who considers himself one of David Gorski’s “acolytes”, I really do think that David Freedman got it wrong, and despite interviewing Ioannidis got his major conclusions wrong too.

    The essence of Freedman’s mistake is that he conflates the funding and publishing of scientific medical research with “Medical Science”. Ioannidis doesn’t make that mistake, and none of the quotes in Freedman’s article indicate that he does. What is interesting is that there are no quotes that explicitly call out the bias and distortions induced by the “publish or perish” paradigm that controls career advancement and acquisition of funding.

    It is unfortunate that scientists’ need to eat, have clothing and a place to live, as well as to do their research, compels them to obtain funding. It is doubly unfortunate that those who control the funding dole it out to those who publish in flashy journals and press releases. What does anyone expect? Scientists know this, and that is why they are not surprised or upset by Ioannidis’ publications.

    What non-scientists seem to think is the solution to this problem is to whack down individual scientists while keeping the system of funding Science the same.

    Aubrey Eben said: “Science is not a sacred cow. Science is a horse. Don’t worship it. Feed it.”

    But people don’t want to feed something they do not worship. They don’t want to pay for negative results, but in science you don’t know the results until after you do the research. They are willing to fund “the best” scientists, but without understanding an individual scientist’s work, you can’t tell who is “the best”, and you can’t compare individuals in different fields because those fields are orthogonal. There is no shortcut to understanding science. There is no shortcut by which a non-expert can evaluate the opinions of experts except by becoming an expert themselves. Non-experts may not have the time, the ability, or the inclination to become experts in a field where they must utilize the findings of experts, but simply wanting there to be shortcuts that allow for the evaluation of expert opinion without being an expert doesn’t make it so.

    It is ironic that David Freedman uses the term “unscientific” to describe Gorski’s analysis, and is “baffled” as to how anyone could read his article and agree with Gorski’s analysis of it. I suspect that some of Freedman’s bafflement is due to his misunderstanding of Gorski’s writings on CAM, and Freedman’s attempt to use a similar style in writing about Ioannidis’ finding about medical research and medical research publications.

    I haven’t read his book “Wrong”, but this review tells me a lot.

    Beating up on medical research without presenting an alternative that is better is of negligible value. Gorski even paraphrases Churchill in saying that SBM is the worst possible system of medicine, except for all the others that have been tried. If beating up on medical research drives people to less reliable methods, then it is worse than useless; it is actually harmful. Highlighting the flaws of a particular system without mentioning what system you want to use instead is a classic argumentative style. It is not a style that leads to good or reliable answers in any other circumstance, so why should it in the field of medical research?

    Certainly neither Gorski nor anyone who understands his writings well enough to be considered one of his “acolytes” is unaware of many of the flaws of medical research. Focusing only on the flaws and conflating “medical science” with “lies” and “damned lies” may sell a lot of books, but it does not make anything better. It does not help in addressing those flaws. Pretending that it does is disingenuous and is an example of what I would call “cargo cult ethics”. You haven’t said anything that is factually wrong, but by omission you have biased the perceptions of your readers.

  9. Wow. I take a break from being online and a lively discussion ensues!

    @Dr Benway: I love your concept of more self-aware COI disclosures! McKinney actually presented several of his own, and led off the talk with one set (rather than burying it on the final slide).

    @Karl: It’s not just how science and medicine move forward, it’s how any enlightened person should behave.

    So true.

    @pmoran: Yes, the treatments should reduce CV disease long-term but I don’t believe that one would expect any positive benefit on these endpoints within the year or less of each study.

    @JMB: The model of moving drug development to non-profits has largely not proven successful, but I know that several efforts are ongoing. More academic centers now boast drug discovery programs, but time will tell whether these efforts bear fruit. That’s a good analysis for a future post.

    @takoyaki: Indeed – it’s a fun exercise in which to engage. One can come up with reasons to pick any of the comparator drugs. That’s why I enjoyed McKinney’s talk so much.

  10. @David Freedman: Thank you for coming by to comment. I enjoy Dr. Gorski’s writing and consider him a colleague but I would not say that I was his acolyte. I disagree with him on occasion but here I felt that some of your article in The Atlantic was unnecessarily inflammatory and fear-mongering. Actually, I thought that Dr. Gorski’s comments were in part supportive of your writing and certainly respectful of the work of Ioannidis.

    To get on topic with regard to this post, McKinney did raise one point of relevance in your dispute: repeating clinical trials is very expensive, and it is sometimes not possible to recruit patients for subsequent trials after negative data are published. I have the luxury of working with cell culture models on studies I can repeat over and over without spending tens of millions of dollars. With clinical trials, one is making a huge investment in investigating the science as it existed three or five years ago. Therefore, some trials may prove incorrect simply because of the time it takes to complete them!

    @David Gorski: Thanks for responding to Mr. Freedman’s essay in this thread – it was more appropriate for you to respond anyway.

    @daedalus: Many good points. You also speak to non-financial conflicts of interest.

  11. David Gorski says:

    One other thing. Steve and the crew at SGU discussed this article as well on their most recent podcast:

    The discussion of Freedman’s article begins around the 25:20 mark.

  12. JMB says:


    Welcome to the discussion. You have had the chance to spend time with Dr Ioannidis, so you have a good foundation to state Dr Ioannidis’ position. Unless Dr Ioannidis or one of his research associates chooses to respond in this thread, nobody could dispute your version of Dr Ioannidis’ thesis. Of course, Dr Gorski has every right to set the record straight on the interpretation of his anecdote.

    I enjoyed your article in The Atlantic. If Dr Ioannidis thinks science based medicine is worse than damned lies, then he has made a simple error. Dr Ioannidis studied published medical science claims, not the claims that science based medicine selects as valid from among those published.

    Dr Ioannidis may call claims that don’t stand up to scientific scrutiny damned lies; personally, I would call them dead-end hypotheses. A researcher can’t be right with every hypothesis.

    At what point does bias invalidate experimental results? Can you argue that Athina Tatsioni did not have a bias when she was dared to produce the data to prove the hypothesis? Would Dr Ioannidis’ methods indicate a PPV of greater than 50% for her published scientific claim?

  13. weing says:


    I read your article. What I got out of it was that scientists suffer from confirmation bias and the need to publish or perish has led to publishing a lot of crap. I agree with that. I don’t know what message a non-medical person would take away from your article. “Since medical reports are wrong a lot of times, therefore my chosen brand of woo is valid.”? Maybe. As I said, I don’t know. It appears that Dr Ioannidis is aware that he is also susceptible to confirmation bias.

    BTW, after reading the title of your book, do you consider yourself an expert? :)

  14. JMB says:

    I assumed that David Freedman accurately characterized Dr Ioannidis’ interpretation of the mathematics presented in the PLoS Medicine article. It is possible to agree with the mathematics in the article but at the same time see a different interpretation (just as different interpretations of quantum physics all accept the Schrödinger equation as valid but differ in interpretation).

    Dr Ioannidis appears to classify a scientific claim as valid only if the PPV of the relationship is 50% or greater. However, as Dr Ioannidis notes,

    Nevertheless, most new discoveries will continue to stem from hypothesis-generating research with low or very low pre-study odds.

    which means that studies that will result in significant advances in medicine will initially be classified as invalid under the PPV criterion. This mathematical definition of scientific validity has limited usefulness if it consistently classifies all breakthroughs as invalid at the outset. A more complex definition of validity is needed if we hope to consistently identify as scientifically valid the initial studies that may lead to breakthroughs.

    I think the more valid Bayes approach is to model the research process as having multiple levels of evidence reliability, with different criteria at each level as to what constitutes sufficient validity to proceed to the next step. Therefore, achieving a PPV of 0.10 at the lowest level (inspiration from basic science or clinical observation) would be sufficient for the hypothesis to be considered valid enough to be investigated at the next level, culminating (where appropriate) at the top level of multiple large randomized clinical trials, reaching a final PPV of 0.50 after multiple studies of different types. The Bayes approach recognizes that probabilities are constantly updated with new information. Breakthroughs begin with a low a priori probability, but with each phase of the research process the a priori probability becomes higher. Eventually, the a priori probability becomes high enough that we try to determine a reliable risk-benefit ratio and use it in science based medicine.
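    As a purely illustrative sketch of this chained updating (all numbers invented; the bias and multiple-testing terms from the PLoS Medicine paper are omitted), the post-study probability from one level can serve as the pre-study probability for the next:

```python
def ppv(prior_prob, power=0.8, alpha=0.05):
    """Post-study probability that a 'positive' finding is true.

    Simplified Bayes formula, PPV = (1 - beta) * R / (R + alpha - beta * R),
    where R is the pre-study odds and beta = 1 - power; bias and
    multiple-testing terms are omitted.
    """
    R = prior_prob / (1.0 - prior_prob)      # pre-study odds
    return power * R / (power * R + alpha)   # algebraically equal to the formula above

# Chain the levels: each level's posterior becomes the next level's prior.
p = 0.02  # low pre-study probability for an exploratory hypothesis (invented)
for stage in ("basic science / observation", "early clinical study", "large RCT"):
    p = ppv(p)
    print(f"after {stage}: P(true) = {p:.2f}")
```

    With these invented inputs the probability climbs roughly 0.02 → 0.25 → 0.84 → 0.99, clearing a 0.10 threshold after the first level but the 0.50 threshold only after the second, which is the filtering behavior described above.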

  15. rork says:

    The statisticians in my group read Freedman’s article unencumbered by Gorski’s sort-of-like-a-review or knowledge of the author being on anyone’s shit-list and found it pretty good. Dissenters on SBM posts that are fuzzy or not very consequential may not bother commenting, in order to avoid unwinnable (and very unpleasant) arguments about trivia.
    On my own agenda: I think there is a very serious problem in my area of interest, which is basic science with large sets of measurements, where I find fudging almost universal even in the best journals, and review of papers extremely incompetent. It is a crisis, I think. Check out the story about Nevins retracting a paper; it shows that effective review requires the writer to be clear and the reviewer to have serious time (I think everyone can read it this week). Usually, neither is true.

  16. ConspicuousCarl says:

    Everyone is always worried about that demon called “greed”.

    What about pride? Even if we somehow removed the financial incentive to conduct biased research, or increased the financial dis-incentives, we would still have to worry about the core bias present in all scientific study. Individuals and organizations are always more likely to find positive results when testing their own inventions, even if they are charities.

    If there are too many falsely-“successful” trials coming from pharmaceutical manufacturers, then the FDA needs to do what is done in all scientific disciplines: rely less on the inventor’s own experiments and use external evaluations. If a manufacturer tells the FDA it is going to do a phase I, II, or III trial for approval, the FDA could require them to provide blind funding for the same trial to be conducted by someone else at the same time.

    If bad research is going to be exposed by third-party trials, then ALL incentive for sloppy research is gone, financial or otherwise.

  17. JMB says:


    I’m curious, did anybody in your group find it odd that we are talking about validity in a Bayes formulation of the research process? I thought validity was a frequentist concept. I was also under the impression that there is a problem interpreting bounds of error on results when the variance term for the probability density function occurs in the denominator of a (Bayes) model of the experimental design. I am asking, not arguing. Back when I wrote a program for calculating a probit analysis for a pharmacology lab, the confidence intervals were computed in the logarithmic domain, so they would look skewed if the concentration (as opposed to the log concentration) for the LD50 was reported. Of course, Ioannidis is not reporting a confidence interval for his estimate of the percentage of invalid research claims. Maybe I just start to itch when somebody puts a probability density function into a denominator, given the amount of time it once took me to verify that my skewed confidence intervals were correct because of the logarithmic transform.
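    The log-domain point can be sketched concretely. In this hypothetical example (all numbers invented), a probit-style fit returns log10(LD50) = 1.0 with a standard error of 0.2; the 95% confidence interval is symmetric in the log domain but skewed once back-transformed to concentrations:

```python
# Hypothetical probit-style result: the model is linear in log10(dose),
# so the LD50 estimate and its standard error live in the log domain.
log_ld50, se = 1.0, 0.2                     # invented numbers for illustration

# Symmetric 95% confidence interval in the log domain (z = 1.96)
lo_log, hi_log = log_ld50 - 1.96 * se, log_ld50 + 1.96 * se

# Back-transform to the concentration scale
ld50 = 10 ** log_ld50
ci = (10 ** lo_log, 10 ** hi_log)

print(f"LD50 = {ld50:.1f}, 95% CI = ({ci[0]:.1f}, {ci[1]:.1f})")
# The interval is asymmetric on the linear scale: the upper limit lies
# much farther from the point estimate than the lower limit does.
```

    Reporting an interval of roughly (4.1, 24.7) around a point estimate of 10 looks skewed, but it is exactly the symmetric interval (0.608, 1.392) in log10 units.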

    I would also be interested in your opinion about the importance of the model of the research process in the application of a Bayes analysis. That is something I am arguing. The general strategy of medical research is a chain of events in which less risky and less resource-intensive investigations are completed to filter hypotheses before more expensive and potentially risky experiments (RCTs) are applied. Is it fair to flatten out this pyramid scheme and declare that 90% of research is invalid? Oh dear! Or is it better to apply the Bayes analysis to the model of the chain of events (different types of experiments), producing a filter of hypotheses based on successively stricter standards for a priori probability (which is the a posteriori probability after completion of the previous step) for entrance to the next level of research? Then the overall strategy is considered successful because the majority of the experiments that raise ethical concerns do prove successful. At the same time, we do evaluate hypotheses with low a priori probability (albeit still greater than 0.005), so that breakthroughs will gradually filter through the pyramid of medical research (a pyramid that exists to make efficient use of resources and ensure clinical equipoise).

  18. Dr Benway says:

    David Freedman,

    Some style advice to make your writing seem less cranky:

    1. Wall of text. Look it up.

    2. “Acolytes.” All cranks use some version of this, e.g., “followers,” “disciples,” “supporters.” It’s an insult. And it implies that you haven’t considered the possibility that several individuals might agree not because they’re in a cult, but because they’ve looked at the available evidence and formed their own, reasoned opinions.

    3. “Oh noes!!” or “Fundamental problem with entire field of study,” “What if everything we know about X is wrong.” Sweeping criticism of an academic discipline presented without at least one or two practical solutions illustrating how people might improve the status quo invariably invites the loonies to go all, “viva la revolucion!”

  19. antipodean says:

    What a fabulous discussion to have stimulated Dr Kroll.

    I might guess that the disagreement here is based on the message Ioannidis sends to honest scientific investigators and how that can be misinterpreted by those who have financial interests in pulling down standards so they can profit from quackery.

    This looks quite different depending on whether you actually work in clinical trials or, like the many excellent writers on this site, you are keeping the barbarians at bay. For me, as a clinical triallist, reading Ioannidis is like having your brain sharpened. His research group should be listed as a national treasure. The problem is that his thinking is published in high-visibility journals and invariably leaks outside of science to be misused by various quacks.

    Rork (slight rant, but): your comment re basic research sounds about right. Only the ‘best’ (i.e., confirmatory) data are presented. Even those are often modified in Photoshop and carefully selected. The funding for much of this is targeted disease funding, and there is rarely any appreciation that this should actually be used in some way that could be useful to humans. A symptom of this seems to be the continual mistranslation of translational research as meaning “first time in humans” rather than “in all humans who need it.”

Comments are closed.