
Conflict of Interest in Medical Research

The cornerstone of science-based medicine is, of course, scientific research. The integrity and quality of biomedical research are therefore of critical importance and must be thoughtfully and jealously guarded if we care about maintaining an optimal standard of care. There are many threats and hazards to the institutions of medical research – mostly ideological. One that has not been discussed much on this blog, but has been in the news recently, is conflict of interest. Upon close examination this is a more complex issue than it may at first appear.

The most recent controversy over conflicts of interest was sparked by an article published in JAMA in which the authors allege that published studies downplaying the risks of Vioxx (a COX-2 inhibitor marketed as a painkiller and withdrawn from the market for increased cardiac risk) were in fact ghost-written by employees of Merck, the manufacturer of Vioxx. The names of two academic researchers were then attached to the studies to give them legitimacy. If true, this is a damning episode, and no one would reasonably disagree with the contention that companies writing research on their own products represents an unacceptable conflict of interest. For the record, both Merck and the one surviving academic deny the accusations completely.

Many other forms of potential conflict of interest have also been raised in the discussion of medical research. Journals now require that authors disclose any financial ties to the subject of their research. Some of these conflicts are clear – for example, owning stock in the company that manufactures or sells the object of research, or holding a patent on the drug or technology. In any situation in which the outcome of research will affect the value of a researcher’s stock or patent, there is a clear and unambiguous conflict.

Other fairly clear conflicts include researchers or academics who receive large sums in consulting fees or similar payments from a company whose products they then research. If a substantial portion of one’s income could be threatened by an unfavorable outcome of one’s research, the conflict is clear. Regarding regulatory agencies, it is equally clear that regulators should not be rendering decisions about companies from whom they are accepting consulting fees or for whom they expect to work.

Then there are more subtle or questionable forms of conflict, and this is where the great controversy lies. Minor ties to companies are common among academics. For example, small speaking fees are often paid to experts with reputations in their field, and recognized experts are frequently consulted for their expertise – to help decide fruitful avenues of future research, for example, or to design clinical trials.

In response to the recent attention given to cases of conflict of interest, or the appearance of such conflict, many institutions now require full disclosure on the part of their faculty. Typically a threshold of significance – $5,000 to $10,000, for example – is used. This is in addition to the disclosure requirements most journals impose on their authors. Research consortia may also have their own disclosure requirements.

In response to worry over the appearance of conflict, many academics are simply forgoing the occasional paid lecture or consultation. It is easier to avoid future conflicts and accusations.

Not everyone agrees that this trend toward avoiding any appearance of conflict is an unalloyed boon to medical research. Elizabeth Whelan, writing in the Washington Times, points out that frivolous accusations of conflict have a “chilling” effect on the conduct of industry research. We all benefit from a productive and collaborative relationship between industry and the best minds in medical academia. Scaring these experts away from even the most minor relationship with industry could hurt the quality of research.

There is also a growing trend of using the accusation of conflict, even when it is slight and questionable, to dismiss the findings of research inconvenient to one’s own interests or ideology. For example, those who claim, falsely, that there is an association between vaccines and autism have used the slightest appearance of conflict to dismiss the evidence against any role of vaccines in autism.

There also seems to be an unfair asymmetry. While mainstream medicine is wrangling with this thorny issue, those on the fringe may ignore their own conflicts. For example, Andrew Wakefield is accused of receiving consulting fees from attorneys who were suing for vaccine injuries while he was conducting research supporting such a connection.

Conclusion

The issue of conflict of interest in medical research is a serious one. The medical profession needs to resist the urge to “circle the wagons” and instead take a serious and open look at the issue. I think it has largely done so, but more work remains to be done.

However, we should not go so far as to stigmatize even minor connections to industry, or allow the weeding out of genuine conflicts to turn into a thoughtless witch hunt. We should also be critical of those who wield the accusation of conflict, where none genuinely exists, as a weapon against research whose results they dislike.

As in many things, a proper balance is optimal, and I think we are evolving toward a reasonable one. Academics and experts should be able to provide their experience and expertise to industry and be appropriately compensated for their time and effort. But industry should be prevented from putting its thumb on the scale of medical research – from gaining the outcomes it desires through the application of funding and fees.

This should not be an intractable problem as long as we are thoughtful, consider all sides of the equation, and don’t allow bias and ideology to rule over reason.

Posted in: Clinical Trials, Medical Ethics


15 thoughts on “Conflict of Interest in Medical Research”

  1. David Gorski says:

    For example, Andrew Wakefield is accused of receiving consulting fees from attorneys who were suing for vaccine injuries while he was conducting research supporting such a connection.

    Let’s not forget the father-and-son tag team of antivaccinationists, Mark and David Geier, as well. They do dubious research trying to support the contention that the mercury in thimerosal in vaccines causes autism. Meanwhile, Dr. Mark Geier is raking in the bucks hand over fist “treating” autistic children with a pseudoscientific protocol of chelation therapy plus androgenic blockade with Lupron, based on this concept that mercury in vaccines causes autism and his twisting of Simon Baron-Cohen’s research on abnormalities in androgens in autism, as well as taking fees for serving as an “expert” witness in lawsuits against vaccine manufacturers or claims against the National Vaccine Injury Compensation Program. In addition, his son David runs a company whose main purpose is to provide expert testimony support to lawyers suing vaccine manufacturers, and both have been severely reprimanded for improprieties in data collection. Worst of all, the Geiers run an elusive organization known as the Institute for Chronic Illness, which appears to be no more than a front to allow them to set up an ethically challenged institutional review board to rubberstamp their research studies on autistic children, as Kathleen Seidel so ably documented. If the mercury-autism and vaccine-autism hypotheses disappear, so does their livelihood.

    As for Andrew Wakefield, he not only received funding from trial lawyers for his dubious research linking the MMR with “autistic enterocolitis,” but had a rival vaccine to the MMR under development that he hoped to profit from. Brian Deer documented the whole sordid affair here and here. There are more examples of people like Wakefield and the Geiers, who profit off of pushing the vaccine-autism myth.

    Antivaccinationists do indeed have a double standard. They will castigate legitimate vaccine researchers such as Paul Offit and Eric Fombonne, whose research has failed to find any link between mercury and autism or vaccines in general and autism, because they have on occasion accepted funding from vaccine manufacturers to support their research or consulted for such companies, while worshiping Wakefield and the Geiers as martyrs persecuted by the scientific establishment. I’m not saying that Offit’s, Fombonne’s, or anyone else’s funding sources shouldn’t be considered in evaluating their research, but for antivaccinationists, such considerations are always a one-way street.

  2. qetzal says:

    [N]o one would reasonably disagree with the contention that companies writing research on their own products represents an unacceptable conflict of interest.

    I hope that isn’t what you meant to say, because I strongly disagree.

    Company scientists have as much right to publish their research on company products as anyone else. Their affiliation with the company and any other conflicts of interest should certainly be disclosed, and those conflicts should be considered when others evaluate the work, but they should not be a barrier to publication.

    If you meant to say that companies ghost-writing research is unacceptable, then I fully agree.

    I would also note that some of the actions being alleged re Vioxx go beyond questions of simple conflict. In particular, agreeing to lend your name as an author (esp. 1st author!) on a paper you didn’t really contribute to is academic misconduct.

  3. David Gorski says:

    If you meant to say that companies ghost-writing research is unacceptable, then I fully agree.

    I’d have to second what qetzal said. Ghost-writing is absolutely unacceptable. Publication with full disclosure of the funding source and who did the research should not be unacceptable, even if the research was pharma-funded. Disclosure is what’s important.

  4. David Gorski says:

    I would also note that some of the actions being alleged re Vioxx go beyond questions of simple conflict. In particular, agreeing to lend your name as an author (esp. 1st author!) on a paper you didn’t really contribute to is academic misconduct.

    The question is whether this accusation is really true. Some of the authors are denying it. Sadly, I haven’t had time to dig into the issue deeply enough to figure out whether I believe these JAMA papers or not, and to do the issue justice in a post. (It would take a lot of research.) It does give me pause, however, that several of the authors of both papers are either consultants or expert witnesses for the plaintiffs in lawsuits against Merck over Vioxx.

  5. Yes – I meant without full disclosure – thank you.

  6. BlazingDragon says:

    A quick check of Google (“elizabeth whelan” “washington times”) shows that she could be an industry shill, and the Washington Times is a known right-wing, pro-corporate, anti-regulation venue (started and operated at a huge loss by the Reverend Sun Myung Moon).

    Your point is very valid, but Elizabeth Whelan is not the person you should be linking to on this issue. She has the appearance of a right-wing, industry-is-always-right nutjob.

    http://www.sourcewatch.org/index.php?title=Elizabeth_Whelan

  7. overshoot says:

    Elaborating on the question of industry researchers and publication:

    There seems to be a consensus that it would be wrong to block researchers employed in industry from publishing, but the tone of the statements seems to emphasize that this would be unfair to them — which is true, but secondary.

    Far worse is that it deprives the world of their research. This is, after all, supposed to be science, and no research is the Last Word On The Subject [1]. I don’t care if Dr. Evil of Evil, Inc. is the author of a paper for Science on a novel mechanism for triggering nova expansion in G0 stars. Let him publish despite his manifest conflict of interest, else others won’t be able to verify, disprove, or build on his work.

    This whole business of “conflict of interest” is only significant if the paper is being cited as authoritative in itself. In good science, there is no such thing.

    [1] Probably the difference between the psychology of science and the psychology of woo: the latter is still solidly in the epistemology of authority rather than sceptical empiricism. This is consonant with a religious worldview, which is why the term “sectarian” is so apt.

  8. robertoscunha says:

    Have you seen this blog?

    http://medicalevidence.blogspot.com/

    I actually found it when googling for this blog. Most of the posts I read actually deal with conflict of interest, and “Big Pharma” methods. I bookmarked it, but new posts are not that frequent. Anyway, I think you could check it out.

  9. durvit says:

    It is difficult, BlazingDragon. Dr. Thomas Stossel has similar connections and wrote a piece similar to Whelan’s last year: Drugs and Demagogues. Essentially, Dr. Stossel asserts his confidence in his own moral barometer. It’s an enjoyable read if you relish well-deployed classical references (which I do), but disappointing if you were hoping for guidance on how impeccable moral barometers can prevent acknowledged problems such as the disproportionately positive results reported for industry-sponsored trials.

    Back in February, Janet Stemwedel reported a disquieting incident involving a reviewer who may have had a conflict of interest.

    The reviewer, Steven M. Haffner, a professor of internal medicine at the University of Texas Health Science Center at San Antonio, broke the journal’s confidentiality rules by faxing a copy of a review of studies on the diabetes drug Avandia to a colleague at GlaxoSmithKline, the pharmaceutical company. Dr. Haffner has received consulting fees and speaker’s honoraria from the company…

    Dr. Haffner told Nature, “Why I sent it is a mystery. I don’t really understand it. I wasn’t feeling well. It was bad judgment.” [The Chronicle of Higher Education]

    Unhelpfully, yes, I do feel that the search for bias has gone too far but it is difficult to have faith in moral barometers. Nonetheless, it was saddening but perhaps inevitable that a recent piece on doctors who refuse industry pay reported that they felt freer and less under the taint of suspicion, but perhaps also more unwilling to be involved in researching treatments.

  10. BlazingDragon says:

    I’m not saying the original point is not valid: we can (and often do) swing the pendulum too far the other way in situations like these. There is a tendency for humans to believe in “black or white” solutions to problems that have a lot of shades of gray – hence the ease with which straw-man and red-herring arguments work on the public at large. The false-dichotomy argument is particularly galling to me (and damaging to getting useful things done).

    The problem I had is with the source. The “Moonie” Times is a known outlet of industry hack-itude and right-wing propaganda, run by a guy who literally thinks he is an incarnation of Jesus Christ.

    As for trusting people’s moral compass: it’s a sure-fire guarantee of failure. I remember hearing on NPR a long time ago about an expert on protecting businesses from embezzlement and internal fraud. He said something to the effect of: “If you have a single point of failure, especially for a crime that doesn’t directly and visibly hurt people (fraud, as opposed to robbery), a lot of otherwise ‘normal’ people will take the opportunity to commit fraud or embezzlement.”

    Most people will be able to resist the temptation, but a surprising number will not, including many who are otherwise law-abiding, good people. It’s just the way humans are wired. We need to set up systems where the light of exposure and multiple points of control make getting caught a near-certainty before we can avoid most “moral weaknesses.”

  11. overshoot says:

    BlazingDragon:

    It’s just the way humans are wired. We need to set up systems where the light of exposure and multiple points of control make getting caught a near-certainty before we can avoid most “moral weaknesses.”

    Or in other words, Murphy’s Law applies to human endeavors. In the instant case, Murphy’s Law applies to medical research.

    To me, as an electrical engineer, this is one of those things that fall into the “surprise that anyone could have thought otherwise” department. Science, if you want to look at it that way, is a “communications channel” from the Universe-as-it-is to our understanding. It’s a very error-prone channel, and as any good channel design does, it contains error-correction mechanisms – peer review being a rather basic one. There is also a class of error that primarily affects the error-correction functions themselves (no, really?), and a good system takes that into account as well – which is what the “conflict of interest” protocols attempt to do.

    Error correction trades off speed for accuracy. Good error correction avoids trading off too much.
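    To make that trade-off concrete, here is a minimal sketch of the standard toy example – a 3× repetition code with majority-vote decoding. (This is a generic Python illustration of the channel-coding idea; nothing in it is specific to peer review or medical research.)

        import random

        def noisy_channel(bit, p):
            """Transmit one bit over a channel that flips it with probability p."""
            return bit ^ 1 if random.random() < p else bit

        def send_with_repetition(bit, p, n=3):
            """Send the bit n times and decode by majority vote."""
            votes = sum(noisy_channel(bit, p) for _ in range(n))
            return 1 if votes > n // 2 else 0

        p, trials = 0.1, 100_000
        raw = sum(noisy_channel(0, p) for _ in range(trials)) / trials
        coded = sum(send_with_repetition(0, p) for _ in range(trials)) / trials
        # Raw error rate is about p = 0.10; the coded rate is about
        # 3p^2 - 2p^3 = 0.028, but each bit now costs three transmissions:
        # accuracy bought with speed.
        print(raw, coded)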

  12. BlazingDragon says:

    You’re thinking like an engineer, overshoot. Most people believe that “morality” alone can control a great temptation. It’s funny how the people who push “morality alone” usually benefit from the situation where they have a single control point. Error detection and correction are NOT the norm in human affairs (politics, almost anything to do with ethics, etc.). The only time error correction is actually used is when a single mistake causes major, direct problems (like your spacecraft going off course because a spurious signal gave the wrong thruster power). If the problems are put off in time (sometimes even by a day or two), humans like to rely on morality.

    Such a “sunlight is the best disinfectant” policy SHOULD be the norm any time one group has power (whether financial or moral) over another group of people. Yet most of the time, people actually rely on “morality” to be the major check on such potential abuses. People never seem to learn, either. They passed the Sarbanes-Oxley Act after the Arthur Andersen/Enron meltdown, and companies have been complaining about the extra controls ever since (having multiple checkpoints creates needless paperwork and costs us money! Trust us, we’re moral enough that we won’t do that again!). It’s enough to give one a headache.

  13. darwiny says:

    The RSS feed for this blog is not working – XML parse errors. I haven’t been able to read the site via RSS all week – and I *LOVE* this blog!

  14. overshoot says:

    BlazingDragon:

    You’re thinking like an engineer, overshoot. Most people believe that “morality” alone can control a great temptation.

    Damn straight I am. Engineering and medicine (and I’m a volunteer emergency medic) are two of the human endeavors where Murphy manifests most obviously. Much of society has gotten comfortable enough that people can go for years without having the Fickle Finger of Fate remind them that the Universe is ruled by the Second Law of Thermodynamics.

    Medicine and engineering? Good luck going a week without having your face ground in the ugly side of things. Especially where humans are in a position to screw things up from the best of intentions.

    I know people who have a term of art for people who, as you write, ‘believe that “morality” alone can control a great temptation.’ They call them “Johns” and their associates call them “Marks.”

  15. daedalus2u says:

    I wouldn’t call peer review an “error correction” mechanism. It is more an error prevention mechanism.

    There are two classes of errors. There is the Type 1 error, the false positive: wrongly identifying a negative instance as positive. There is also the Type 2 error, the false negative: failing to identify a genuinely positive instance.
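    In the standard notation of hypothesis testing, with null hypothesis $H_0$, these two error rates are conventionally written as:

        \alpha = P(\text{reject } H_0 \mid H_0 \text{ is true}) \qquad \text{(Type 1, false positive)}

        \beta = P(\text{fail to reject } H_0 \mid H_0 \text{ is false}) \qquad \text{(Type 2, false negative)}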

    Much of politics and quackery engenders another type of error. There is no generally accepted definition of a Type 3 error, but one definition is “the error committed by giving the right answer to the wrong problem.”

    In the case of Wakefield and the Geiers, the question they were trying to answer was “How can I make a lot of money from the system set up for compensating victims injured by vaccines?” The “correct” answer Wakefield found was “Fraudulently claim to find measles vaccine virus in the gut of autistic children, claim the MMR caused the autism, then sell snake oil and quackery.” The answer the Geiers found was slightly different: “Claim mercury and testosterone cause autism, give an 18-year-old man enough Lupron to turn him back into a boy, and claim he is ‘better’ because he masturbates less.”

    What constitutes an “error” is very much in the eye of the beholder. For many people, anything that makes them a lot of money can’t be an “error”.

    Peer review, IRBs, and the type of oversight that people enter into voluntarily can only deal with errors of Type 1 and Type 2. They can’t deal with errors of Type 3, where people are lying and trying to commit fraud. People committing fraud will never voluntarily enter into oversight agreements that have the possibility of preventing that fraud. The IRB that Geier used was a complete joke. Wakefield did experiments on children with no consideration of what his ethical responsibilities were.
