Conflicts of interest in science-based medicine

The topic of conflicts of interest among medical researchers has recently bubbled up to the public consciousness more than usual. The catalyst for this most recent round of criticism by the press and navel-gazing by researchers is Senator Charles Grassley’s (R-IA) investigation of nine psychiatric researchers, one of whom held $6 million in stock in a company formed to bring a drug for depression to market but had allegedly concealed this, even though he was an investigator on an NIH grant to study the very drug he was developing. From my perspective, there is more than a little politics in this story, given that for the last decade federal law and policy, specifically the Bayh-Dole Act, have actually encouraged investigators and universities to co-develop drugs and treatments with industry. Even so, the story does bring into focus the issue of conflicts of interest, in particular undisclosed conflicts of interest. Two articles of note recently appeared in the scientific literature discussing this issue: one in Science in July (about the Grassley investigation) and an editorial in the Journal of Psychiatry and Neuroscience by Simon N. Young, PhD, the Co-Editor-in-Chief of the journal and a faculty member at McGill University. I was more interested in the latter article because it takes a much broader view of the issue. Science-based medicine (SBM) depends upon the integrity of the science being done to justify treatments, so it’s useful to discuss how conflicts of interest intersect with medical research.

In most public discussions of conflicts of interest (COIs), Young notes, the primary focus is on payments by pharmaceutical companies to investigators. Make no mistake, this is a big issue, but COIs are not just payments from drug companies. Indeed, I’ve written about just such COIs that have arguably impacted patient care negatively right here on this very blog, for example seeding trials (in which clinical trials are designed by the marketing division of pharmaceutical companies) and a case of fraud that appeared to have been motivated by COIs. What needs to be understood is that every single scientific and medical investigator has COIs of one sort or another, and many are not financial. That’s why I like Young’s introduction to what COIs are:

A COI occurs when individuals’ personal interests are in conflict with their professional obligations. Often this means that someone will profit personally from decisions made in his or her professional role. The personal profit is not necessarily monetary; it could be progress toward the personal goals of the individual or organization, for example the success of a journal for a publisher or editor or the acceptance of ideas for a researcher. The concern is that a COI may bias behaviour, and it is the potential for bias that makes COIs so important. Before getting into the specifics of COIs, I will describe some of the research on the biases we all have, the evidence that we are not always aware of our own biases, how biases can be created by vested interests and how people behave in response to revelations of COIs. The idea that scientists are objective seekers of truth is a pleasing fiction, but counterproductive in so far as it can lessen vigilance against bias.

Oddly enough, financial COIs are probably the easiest to deal with as a practical matter. Here, transparency is key, because it’s not necessarily COIs per se that destroy the credibility of a study. Rather, it’s the COIs that are not disclosed (henceforth referred to as undisclosed COIs, or UCOIs) that cause the most problems. If I read a paper that discloses that the first author received funding from the pharmaceutical company that makes the drug being studied, that in and of itself does not invalidate the study, the simplistically nonsensical arguments of, for example, Dr. Jay Gordon notwithstanding. (Dr. Gordon has of late been claiming that the promotion of H1N1 flu vaccination is due far more to pharmaceutical companies’ drive for profits than to scientific and public health considerations, and he likens vaccine manufacturers to tobacco manufacturers promoting junk science.) Good science trumps COIs, and, in spite of how badly they are often castigated and some of their past misdeeds, pharmaceutical companies do fund a lot of good science. That being said, pharma funding of a study does make me look at it more skeptically than I would if it were funded by another source, mainly because there is a direct financial incentive, often with hundreds of millions of dollars invested in the development of a drug on the line, to have a good result. For these COIs, daylight is the best remedy. If readers don’t know about a financial COI, they can’t judge how important that COI is or how it might impact a study.

On the other hand, Young points out that, while disclosure is the primary means by which scientific organizations and journals deal with COIs, it is not a bullet-proof solution. True, there is evidence that disclosure does have the intended effect, and Young cites a study in which two manuscripts were given to two groups of reviewers to read, one to each group. The manuscripts’ content was identical, but one listed COIs and the other did not. The results showed that those reading the manuscripts with the disclosed COIs found the study reported in the manuscript to be “less interesting and less important.” However, it turns out that disclosure may not always be effective, due to paradoxical effects. First, authors who declare a COI may feel that their declaration “frees” them to be less objective and argue their point more strongly. Also, authors may feel that the weight of the declared COI makes it necessary for them to exaggerate the significance of their results in order to overcome any additional skepticism in the reader provoked by the COI. Finally, readers may actually be influenced by statements and information they know they should ignore, the COI disclosure notwithstanding.

Far more difficult to quantify are non-financial COIs. Let’s start with one COI that each and every researcher can reasonably be assumed to have, and that’s a strong desire to have one’s ideas and hypotheses validated by science and accepted by the scientific community. I originally became a scientist, as well as a physician, in order to study cancer and to develop more effective therapies for cancer than what we currently have. To achieve that goal, I have to develop hypotheses that accurately describe a phenomenon and make useful predictions. I can see how easy it is to become emotionally attached to my preferred hypothesis, especially since I can see that if I’m right I will have a rare chance to significantly improve the care of breast cancer patients. If my hypothesis is wrong, it could well be back to the drawing board for one of the two projects that my lab works on, with my career half over and not a lot of time left to make a mark. (Bringing an idea to fruition in terms of treatment can easily take 10 or 20 years, and I probably have about 20 years left in my career if I stay healthy and productive.) Add to that my desire to be perceived as a good researcher and for the strategy for treating breast cancer that I’m working on (but have not yet published on) to take hold, and it’s easy to see that I have to be very, very careful not to let such considerations sway me. For a scientist, few things are more painful than to see a cherished hypothesis fall under the weight of opposing evidence.

I recently wrote about how a very common procedure for osteoporotic vertebral fractures, vertebroplasty, had just been subjected to two significant clinical trials, one from the U.S. and one from Australia, both of which found vertebroplasty to be no more efficacious at relieving pain than placebo. Indeed, I even likened it to acupuncture and briefly recapped its history, in which small, lower quality studies (unblinded, no controls, or other methodological shortcomings), along with physicians’ anecdotal experience, led to the perception that vertebroplasty is efficacious. Consider this quote I included in my post from one of the early boosters of vertebroplasty, Dr. Joshua A. Hirsch, director of interventional neuroradiology at Massachusetts General Hospital in Boston:

“I adore my patients,” Dr. Hirsch added, “and it hurts me that they suffer, to the point that I come in on my days off to do these procedures.”

Never underestimate the desire of physician-scientists and physician-investigators to help their patients. Trust me, we get a rush when we can dramatically help a patient. Scientists get the same rush when their hypotheses are confirmed. It’s one of the rewards that motivate us. In Dr. Hirsch’s case, he even declared that he “believed in clinical trials” but that he really believed that vertebroplasty worked.

Here’s where basic human psychological considerations start to mix with more tangible COIs. Let me use myself as an example. If my hypothesis is not falsified, then there’s the potential for more publications, more grants, and much more prominence in the breast cancer research community than I currently have, which, admittedly, is relatively modest. More importantly, if my hypothesis is correct, the results of my research might improve the survival and ease the suffering of millions of women with breast cancer. Here, emotional attachment to a hypothesis can merge with other intangible COIs (fame, prominence, the respect of one’s peers) and tangible COIs (more grants, more publications, promotions, and awards). Any researcher who claims that these things don’t sway their decisions and thinking or don’t contribute to bias is either self-deluded or lying. Indeed, this pride of ownership of one’s own research can lead to overselling it or to downplaying its shortcomings:

This presumably is responsible for the fact that when authors were interviewed about their published papers “important weaknesses were often admitted on direct questioning but were not included in the published article.”46 Certainly editors are used to asking authors to mention the limitations of their studies and to be more cautious about the implications of the research. Another related factor is the desire for researchers to advance their careers and get recognition from their peers. Research suggests that social and monetary reward may work through both psychological47 and neuroanatomical processes48 that overlap to some extent. The big difference in relation to COIs is that social rewards, unlike monetary rewards, cannot be disclosed in any meaningful way.

Indeed they can’t, even though scientists are human after all, and therefore subject to the same human needs as anyone else.

Another point that comes up in behavioral research about COIs is that human beings do not know their own minds very well. They think they do, but they really do not, which may account for just how vehemently so many researchers deny that financial support from drug companies or elsewhere affects their decisions. Ask yourself how many times you’ve heard a researcher claim that, yes, he has a COI, but he would never, ever be influenced by that. He can remain objective in spite of that. As Young summarizes:

A recent short review in Science asks how well people know their own minds and concludes the answer is not very well.3 This is because “In real life, people do not realize that their self-knowledge is a construction, and fail to recognize that they possess a vast adaptive unconscious that operates out of their conscious awareness.” Wilson and Brekke4 reviewed some of the unwanted influences on judgments and evaluations. They concluded that people find it difficult to avoid unwanted responses because of mental processing that is unconscious or uncontrollable. Moore and Loewenstein5 argue that “the automatic nature of self-interest gives it a primal power to influence judgment and makes it difficult for people to understand its influence on their judgment, let alone eradicate its influence.” They also point out that in contrast to self-interest, understanding one’s ethical and professional obligations involves a more thoughtful process. The involvement of different cognitive processes may make it difficult to reconcile self-interest and obligations. MacCoun,6 in an extensive review, examined the experimental evidence about bias in the interpretation and use of research results. He also discussed the evidence and theories concerning the cognitive and motivational mechanisms that produce bias. He concluded that people assume that their own views are objective and “that subjectivity (e.g., due to personal ideology) is the most likely explanation for their opponents’ conflicting perceptions.”

In other words, when it comes to COIs, we as human beings are in general very, very poor at judging how much we are being influenced by such considerations, because self-interest is primal and functions in very basic and largely unconscious areas of our minds. Again, this is why so many scientists will deny that they are being influenced by pharma funding and why physicians will vehemently deny that their prescribing choices are influenced by gifts received from drug companies. They really, really believe it, too. They’re not lying. Their self-image is that they are rational and that they can separate these COIs from their scientific and medical decision-making process, but behavioral research would argue otherwise.

So does all of this mean that the “complementary and alternative medicine” (CAM) crowd or the quacks of the world are right? Is science-based medicine so hopelessly compromised by COIs and bias that it can be discounted in favor of fairy dust like homeopathy, reiki, and other magic? Of course not. Reality is reality, science is science, and evidence trumps all, even COIs. Here, I think it’s instructive to contrast how science-based medicine deals with COIs with how CAM advocates deal with them. However haltingly, messily, and at times inadequately science-based medicine deals with COIs, it is as nothing compared to how poorly CAM advocates and pseudoscientists deal with them. Basically, they don’t. To them, COIs only matter if the person with the COI is someone they don’t like. The aforementioned Dr. Gordon likes to state over and over again that if a study is funded by a drug company he doesn’t believe it. I’ve tried for over five years to convince him that this is poppycock. Yet, when it comes to COIs, few can match CAM researchers.

Indeed, as long as the investigator or physician with a COI is on the “right” side, COIs can be completely overlooked. For example, the patron saint of the anti-vaccine and autism quackery movement, Dr. Andrew Wakefield, not only received large amounts of money from trial lawyers before doing his “research” in 1998 that sparked the MMR scare, but, as investigative journalist Brian Deer has reported, he had also filed for a patent on a vaccine to compete with the existing MMR. The result was incompetent and possibly even fraudulent research. Yet the anti-vaccine crank blog Age of Autism circles the wagons around Wakefield whenever he is criticized and even awarded him its “Galileo Award” for 2008. In presenting that award, Mark Blaxill likened the scientific complaints against Wakefield for his gross incompetence and undisclosed COIs to a “religious war,” comparing the “inquisition” against him to the Inquisition that ultimately forced Galileo to recant and writing this gag-inducing bit of desecration of Galileo’s memory:

I wouldn’t in any way diminish the importance of Galileo, but in an interesting way, Wakefield’s steadfastness in the face of adversity outshines the man in whose name we honor him. For, although Galileo finally agreed to recant his support for heliocentrism, Wakefield has never buckled under the pressure. Instead he has stuck to his guns and continued to fight for families with autism.

I apologize for that one and ask for your forgiveness if you now find yourself praying to the porcelain god, disgorging the contents of your upper GI tract as an offering. Meanwhile, at the risk of causing a repeat trip to worship, I will point out that just last month the granddaddy of all anti-vaccine groups, with the wonderfully Orwellian name National Vaccine Information Center, gave Wakefield its Humanitarian Award. What can we conclude from this? Basically, in SBM, UCOIs, once discovered, usually get you castigated, particularly if there is even a whiff of fraud. In quackery land, they get you awards, regardless of whether there is fraud. There are numerous other examples of “alternative medicine” practitioners with nearly as massive COIs.

Clearly, there are at least two huge differences between pseudoscience and quackery versus SBM. In SBM, scientists try very hard to falsify their hypotheses in order to test their validity; in contrast, among pseudoscientists and quacks, rarely do “investigators” actually “test” anything. Rather, they look for confirmatory evidence to support their beliefs and view themselves as virtuous underdogs fighting for their patients, even as they subject them to useless or even harmful quackery. More telling is the reaction to COIs. This is where the hypocrisy of CAM supporters comes into full relief. AoA bloggers will blast, for instance, Paul Offit as Dr. PrOffit, and cranks like Robert F. Kennedy, Jr. will call him a “biostitute” for having received royalties from a drug company for the vaccine his lab invented. They’ll blast scientific journals for accepting too much advertising from drug companies. Yet, on AoA itself are numerous ads for compounding pharmacies, supplement sellers, gluten-free diets, and all manner of other unproven “treatments” for autism. Another example, Dr. Joseph Mercola, routinely castigates SBM for COIs while at the same time selling all manner of supplements and woo on his website. To the likes of J.B. Handley and his merry band of anti-vaccine cranks at Generation Rescue and AoA, or to Dr. Mercola, it’s only a COI if they say it is. Indeed, the right kind of COI can even make you a hero in their eyes, and that’s not even counting the positive reinforcement given someone like Wakefield by legions of adoring anti-vaccinationists.

The bottom line is that COIs do matter. Because science is a human endeavor, it will never be perfectly pristine, because nothing humans do is perfectly pristine. Moreover, SBM hasn’t always understood or handled COIs, both disclosed and undisclosed, real or perceived, very well, to the point where new government regulations may well be necessary. In addition, there are always the more intangible COIs, such as pride of ownership of research, the desire to be proven right, and the respect of one’s peers, which, unlike financial COIs, can’t be quantified. As Young says in his article, the “objective of a literature relatively free of bias remains a pious but distant hope.”

Even so, I’ll paraphrase Winston Churchill’s (in)famous comment about democracy in describing SBM by saying that science-based medicine is the worst form of medicine, except for all those others that have been tried. In particular, that includes dogma-based medicine and anecdote-based medicine, two dominant forms of medicine practiced from the days of ancient Egyptian physician-priests and shaman medicine men, through the days of barber-surgeons using bleeding as a treatment for almost everything, to the physicians of 200 years ago advocating purging and treatments with toxic metals like cadmium and antimony. Progress in medicine was glacial, with few advances over decades or even centuries, until science was applied to medical investigation in a serious and systematic way, beginning in the 19th century and exploding in the 20th. The last 50 years have seen incredible advances in medical care, thanks to science.

True, SBM is not perfect. Financial interests, COIs, and the pride of individual practitioners undermine it, and there are a depressing number of ills that it offers too little for. But it’s so much better than any alternative we have tried before. It works, and, although it does so in fits and starts, sometimes all too slowly, it’s getting better all the time. Dealing more effectively with COIs will only help it to continue to do so.

Posted in: Clinical Trials, Medical Academia, Politics and Regulation, Vaccines

24 thoughts on “Conflicts of interest in science-based medicine”

  1. weing says:

    I don’t trust anyone. Government funded studies also suffer from COI and often presumed not to exist. As the government pays more for health care, it will be trying to save money. Could research showing generic drugs are equivalent or better than branded or newer more expensive meds get favorable funding?

  2. provaxmom says:

    I think much of COI can be determined by payment. As in the case of Dr. Offit, he received his Rotateq money after years of research. Dr. Wakefield, otoh, received his money prior to the study being done.

    Yes, the government will be trying to save money and no one should fault them for that. But weing, which do you think is cheaper–giving a vax for something like bacterial meningitis, or treating it after the fact?

  3. weing says:


    That’s a situation where saving money is good for everyone involved.

  4. Dacks says:

    I wonder how much cultural norms contribute to overselling the strength of the data (“acCENtuate the POSitive”) and unwillingness to admit errors. George Bush, recorded during a 2004 press conference, famously said he could not recall a single mistake he had made since 9/11. I believe he spoke truly – as a culture we spend much more energy denying our mistakes than learning from them.

    Added to this is the subculture of the scientific community, where attempts to falsify a colleague’s work (not maliciously, but based on evidence) are an integral part of the process. A PhD candidate “defends” his/her thesis – surely this can not be done without personal bias towards the correctness of their own conclusions.

    As a bystander to the scientific process, I am amazed at how much good work gets done, despite the human tendencies working against it.

  5. windriven says:

    Conflicts of interest are a fact of life – and not just in medical research. Transparency and vigilance are paramount. But that being said, science is uniquely structured to grind out the truth.

    Science has developed rigorous rules for how studies should be structured. Then results are submitted for peer review before they are published. Then other scientists – some of whom may have different theories – have an opportunity to poke holes in the original work, or to offer competing hypotheses. And finally, results have to be reproducible; other scientists should be able to replicate the experiment or study and obtain similar results. The process is often as slow as it is certain. But there is no known better system of discovering truth.

    Except perhaps CAM. Their method seems to work something like this: Chinese people have long used a concoction including, among other things, tiger penises as a tonic and aphrodisiac (this is true – I have actually tasted the stuff myself). There are a lot of Chinese. Therefore, concoctions made with tiger penises must be a powerful aphrodisiac.

    Who can argue with logic like that?

  6. tacksomfan says:

    I would like to hear a response to Bill Maher’s blog since you’ve been bashing him quite extensively in your blog.

    I think he has a point since diet and a healthy lifestyle usually makes your immune system much stronger and can minimize the effects of any flu. I have never taken a flu shot in my hole life and if I occasionally get one it is not much worse than a really bad cold. But then again, I don’t know anyone else who lives a healthier life than me with rigorous exercise and very a balanced diet without gluten, sugars and any preservatives.

  7. David Gorski says:

    “Has a point”? That post by Maher is so chock full of canards that it will be a project to dismantle them all. Come on! He cites Russell Blaylock (a conspiracy theorist nut), Barbara Loe Fisher (the grande dame of the anti-vaccine movement), and Dr. Jay Gordon as making a lot of sense, fer cryin’ out loud!

    Basically, Maher’s digging himself in a much deeper hole. Responding to him will be very fun indeed, so much so that I may have to do it at my other blog…

  8. Ali771 says:

    “Good science trumps COIs, and, in spite of how badly they are often castigated and some of their misdeeds in the past, pharmaceutical companies do fund a lot of good science.”

    The New York Times recently published an article by Gina Kolata entitled Medicines to Deter Some Cancers Are Not Taken. If the public shies away from potentially helpful pharmaceuticals these days, I fear that one main reason may be the COIs that have come to light in the press over the past few years. Namely financial ones: what comes to mind for me (and I believe many others) is Merck’s Vioxx problem. Of course good science trumps COIs, but what if the wolf in sheep’s clothing is harder to see than we think? While bogus claims on a bottle of some herbal cure-all are at least stamped with a required FDA warning, “This statement has not been evaluated by the food and drug administration…” we are not given any heads-up when a COI is still in its UCOI state, and the consequences can be just as devastating as false claims. The wolf in sheep’s clothing (bias masquerading as good science) becomes equally the boy who cried wolf: as the public’s trust diminishes, with it goes access to helpful medicine.

  9. provaxmom says:

    tacksomfan-I’m not sure what Bill Maher has to do with a COI post. But I’ll bite.

    That’s great that you’ve never gotten a flu shot your whole life and have never gotten the flu (other than perhaps the grammar flu ;) ). I’m a humanist and I believe we’re all in this together. If I get the flu, I would probably survive after a few days’ worth of inconvenience. But to my special needs son, or to my next door neighbor going thru chemo, it’s not “just the flu.” I get vax’d more for them than for me.

    As for posting Maher’s blog, I think you’ve just given SBM a bit more fodder.

  10. Scott says:

    I think he has a point since diet and a healthy lifestyle usually makes your immune system much stronger and can minimize the effects of any flu.

    This statement requires robust evidentiary support. Got any?

  11. weing says:


    Sorry but your diet and exercise will not protect you. How about an experiment? I get the rabies vaccine and you don’t. We then expose ourselves to a rabid dog. Let’s see how we do.

  12. Ali771 says:

    Oops! Ok I realize “boy who cries wolf” is a very poor analogy – since I am referring to a real problem and not a fictional one. Still, hopefully I have gotten my point across: I think that a harmful pharmaceutical that has suffered due to a UCOI can be just as damaging as a bogus claim; because the good science may be harder to detect than we think, and it is a hard thing to earn back public trust when it has been shaken.

  13. provaxmom says:

    Ugh. I can’t stomach it. After he calls people with DS either a “fuck up or a design flaw” I had to quit.

  14. SF Mom and Scientist says:

    I really wish I didn’t read that Bill Maher post. I think I’m going to lose my breakfast.

    My biggest beef is when he says we need to listen to an “alternative”, “medical point of view”. If a “medical point of view” is not supported by data, it is not a real point-of-view.

  15. David Gorski says:

    OK, back to COIs…there are plenty of posts about Bill Maher on SBM where his latest screed can be discussed. :-)

    Or is it :-(?

  16. Robin says:

    This article came out last week about a psychiatrist who was paid to promote Seroquel; it illustrates how complicated both financial and intangible COI, both on the part of the physician AND the drug company, can be…

  17. hairyape68 says:

    Lots of the CAM stuff talks about strengthening the immune system.

    Maybe someone could write a post specifically about the immune system and when it may not be at its best and what actually affects the immune system’s strength, if that even makes any sense.

    My wife has lupus, and it’s clear to me that the immune system can go haywire and cause a lot of trouble.


  18. Harriet, thanks for reminding me that I need to add a link to that post by Mark to my Good Points By other Bloggers list on my blog.

  19. micheleinmichigan says:

    “I don’t trust anyone. Government funded studies also suffer from COI and often presumed not to exist. As the government pays more for health care, it will be trying to save money. Could research showing generic drugs are equivalent or better than branded or newer more expensive meds get favorable funding?”

    My response is not a direct response to your comment, but yours sparked a thought.

    Showing financial cost and benefit is economics (not SBM). There have been some interesting new ideas in economics in the last 15ish years. The idea is that economics has overemphasized components that have obvious monetary value (gas is $120 a barrel) and underemphasized things that don’t have obvious monetary value (say, the value of green spaces in a community), so that a more appropriate cost/gain analysis would find a value for these more intangible things before making a conclusion. Not easy, but many think it can offer a more balanced approach to environmental issues. (I believe this has been primarily a field for environmental economists.)

    I say this because on a smaller scale the generic vs non-generic debate often seems to include points such as health cost savings, or profit for drug companies and not the intangibles of the patient experience.

    For instance, the generics for synthroid (for hypothyroid) can vary slightly from the name brand, such that my endocrinologist told me that if I switched to generic thyroid medication, I should come back in for testing. Which would start a whole new round of testing and negotiations (levels I think I feel best at vs. the doctor’s wider range of acceptable levels). So, in my mind, the intangibles overrule any change that may save me $7 a month. Now if it were $100 a month, I’d consider it.

    I find it completely understandable that someone who has finally found a SSRI that works for them with acceptable side effects is upset when they are asked to change to a generic. The same goes for someone who has stabilized on name brand seizure medication which apparently can vary from the results of generic.

    So I wonder if this idea of putting a value on intangibles could be put into play more in medical economics, or if it is already being used? Not just in generics vs. name brand, but other applications.

  20. hairyape68 says:

    Dr. Hall: I missed that piece on the immune system. Thanks for the reminder. I must have been off somewhere on the 25th of Sept. That’s my grandson’s birthday.

    RE Harris

  21. klh says:

    I’m very glad I found this article. I truly appreciate the candidness with which you discuss the issue of COIs, even using yourself as an example. For me, that kind of disclosure – and the admission of humanity – goes a long way to establish trust in the medical profession. Perhaps ironically, I trust that kind of self-reflection, though it exposes potential sources of weakness. I trust it because perhaps it forces greater attention paid to the matter at hand, and therefore a more critical eye when it comes to external influence. In any case, thank you.

    Having said all of that, I am extremely skeptical when it comes to the influence of industry in clinical trial research. I am your average consumer who has, slowly over the years, developed a healthy skepticism of the process – as I came to learn something about the extent of the relationships. It would concern me, for example, to learn that the president of the American Medical Association was also a paid industry consultant and representative who sat on a committee that made important decisions about disease-state thresholds and crafted clinical guidelines that contributed to physicians’ prescription-writing practices. It just feels as though the reach and influence of the pharmaceutical industry is so vast (not conspiracy theory, just the sense that many have expressed) that the integrity of the science is in question, because the bias seems endemic.

    I’ve not yet seen responses to Ali771 and Robin above. Ali771 touches on this pervasive distrust in consumers, the aftermath of which doctors have to deal with. For the doctors in the group, how do you deal with this in your practice? Does it at all impact how you think about – not science-based medicine itself, but industry influence in the clinical research process?

    I am here to be corrected if I’m wrong (I am not married to my thoughts on the subject and, rather, WANT to be convinced otherwise). Doctors see the results of studies when the results are published in peer-reviewed medical journals, correct? If they could – just for the sake of argument, as I know it’s not practical – could they gain access to the raw data from the trials? If so, how? I know that Contract Research Organizations often execute the trial – as trials are often so large, this makes sense – and that medical writers often pen the papers that the doctors then read. In the trajectory of the clinical trial – with all of these players involved in the process (all with their own interests) – is there not a risk that (beyond bias) the very integrity of the science could become lost along the way? I could also imagine that the further away one is from the human subjects of a study, the more likely it could be that data gets exaggerated or manipulated to show more efficacy, for example.

    Thanks for your response…

  22. Scott says:

    Having said all of that, I am extremely skeptical when it comes to the influence of industry in clinical trial research.

    The problem is, what’s the alternative? The options I see are:

    1. The status quo: the company which stands to profit from a treatment pays for the testing.

    2. Tax-funded testing: leaving aside the sheer sums for a moment, does it really make sense for taxpayers to shoulder the burden of testing when the developer gets the reward?

    3. Full nationalization of the pharmaceutical industry: at least addresses the glaring problem in #2 by ensuring that those who benefit are those who pay, but do we REALLY think that the government can make this work? I certainly don’t.

    4. No testing: um, yeah. I think it’s obvious why this isn’t even worth consideration.

    I kind of have to conclude that the current setup is like democracy or capitalism – it’s the worst way to do clinical testing, except for all the others.

    Skepticism about trial results, of course, is a very good idea. The problem arises when “skepticism” becomes code for “reject anything funded by industry without carefully evaluating its merits”.

    Ultimately, for all the people who complain about the influence of industry, I have yet to hear anyone suggest a better approach.

  23. Zoe237 says:

    Excellent analysis. While I have criticisms of SBM, I certainly agree it’s better than any of the alternatives.

    Oh no, I love Bill Maher! Didn’t realize his position on vaccines. Yikes.

    RE: taxpayers paying for studies. I have no problem paying for that. It could show a lot more benefit than some other things my tax dollars go for. And that’s why I support the NIH.

    Speaking of intangibles, the profit motive in healthcare does make me nervous. Because there are certain things that just don’t work that well in capitalism, as “for profit.” Education is one, health care is another, and that’s why I support universal healthcare. How do you quantify the price of a life? Or of an education?

    I certainly think the global warming fiasco is illustrative of conflicts of interest and scientists as human beings, and of the harm these COIs can have on public perception. John Tierney describes “smug groupthink” in his NY Times article about the stolen global warming emails, and says that scientists “… are so focused on winning the public relations war that they exaggerate their certitude.” The overselling of an idea for political reasons. Frustration with people who have no clue about science.

    Tierney: “Contempt for critics is evident over and over again in the hacked e-mail messages, as if the scientists were a priesthood protecting the temple from barbarians. Yes, some of the skeptics have political agendas, but so do some of the scientists. Sure, the skeptics can be cranks and pests, but they have identified genuine problems in the historical reconstructions of climate, as in the debate they inspired about the ‘hockey stick’ graph of temperatures over the past millennium.”

    The main problem here is one of perception. How is science viewed by the public? How are pharmaceutical companies viewed after debacles like Vioxx? In the case of H1N1, some were relatively unmoved because avian flu and multiple other “scares” never materialized. When people feel like they can’t trust science, where do they turn (I’m sure we can all guess)?

    That’s why it’s so important to make the raw data available, be transparent, and identify COIs. I am also for making journals open access and rapid response (like BMJ). As for freedom of scientific communication, I can’t support limiting that or making private emails public, but they might want to try a prepaid phone next time. ;-)

    Personally, I’m ticked at the scientists who wrote these emails in the first place and tried to deny FOI requests and hide their data – this has possibly irreparably damaged funding for global warming research. I don’t think we’ve realized yet how much damage this has done to public perception of science and conflicts of interest. Obviously it doesn’t “disprove” global warming, but people aren’t going to realize that. A successor to Kyoto probably isn’t going to get passed for ten years now.

    Same thing with evolution – non-scientists who don’t understand the process of science think that any disagreement over the particulars of evolution means that it is “just” a theory and not relevant. That’s why we need science education for non-scientists in school.

    Gorski: “Clearly, there are at least two huge differences between pseudoscience and quackery versus SBM. In SBM, scientists try very hard to falsify their hypotheses in order to test their validity; in contrast, among pseudoscientists and quacks, rarely do “investigators” actually “test” anything.”

    I’ve heard that the line of demarcation between science and pseudoscience was at one time believed to be Popper’s falsification, but that this view was challenged by Kuhn’s paradigm shifts and Lakatos’s research programmes. IOW, science works pretty well with falsification within its paradigm, but falsification doesn’t work well when the entire paradigm is changing (as with continental drift). Falsification works, but only if the theory being tested doesn’t challenge the paradigm itself. Any truth to this?
    Is this a fundamental COI- preserving the current paradigm at all costs?

Comments are closed.