It’s time for true transparency of clinical trials data

What makes a health professional science-based? We advocate for evaluations of treatments, and treatment decisions, based on the best research methods. We compile evidence based on fair trials that minimize the risks of bias. And, importantly, we consider this evidence in the context of the plausibility of the treatment. The fact is, it’s not that hard to get a positive result in a trial, especially when it’s sloppily done or biased. And there are many ways to design a trial to demonstrate positive results in some subgroup, as Kimball Atwood pointed out earlier this week. And even when a trial is well done, there remains the risk of error due to chance alone. So to sort out true treatment effects from spurious ones, two key steps are helpful in reviewing the evidence.

1. Take prior probability into account when assessing data. While a detailed explanation of Bayes’ theorem could take several posts, consider prior probability this way: any test has flaws and limitations. Tests give probabilities based on the test method itself, not on what is being tested. Consequently, in order to evaluate the probability of “x” given a test result, we must incorporate the pre-test probability of “x”. Bayesian analysis uses any existing data, plus the data collected in the test, to give a prediction that factors in prior probabilities. It’s part of the reason why most published research findings are false.

2. Use systematic reviews to evaluate all the evidence. The best way to answer a specific clinical question is to collect all the potentially relevant information in a structured way, consider its quality, analyze it according to predetermined criteria, and then draw conclusions. A systematic review reduces the risk of cherry picking and author bias, compared to non-systematic data-collection or general literature reviews of evidence. A well-conducted systematic review will give us an answer based on the totality of evidence available, and is the best possible answer for a given question.
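
To make the first point concrete, here is a minimal Python sketch of how Bayes’ theorem combines a prior probability with a trial result. The function name and the alpha/power figures are illustrative assumptions, not from the post:

```python
# A minimal sketch (not from the original post) of how prior probability
# changes the interpretation of a "positive" trial. The alpha = 0.05
# false-positive rate and 80% power are illustrative assumptions.

def posterior_true_effect(prior, alpha=0.05, power=0.80):
    """P(real effect | positive trial), via Bayes' theorem."""
    true_positives = power * prior          # real effect, trial positive
    false_positives = alpha * (1 - prior)   # no effect, positive by chance
    return true_positives / (true_positives + false_positives)

# A plausible drug (prior 50%) vs. a highly implausible therapy (prior 1%):
print(round(posterior_true_effect(0.50), 2))  # 0.94
print(round(posterior_true_effect(0.01), 2))  # 0.14
```

With an identical “positive” trial, the plausible treatment comes out about 94% likely to be real, while the implausible one remains more likely false than true – which is why prior probability can’t be ignored.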

These two steps are critically important, and so have been discussed repeatedly by the contributors to this blog. What is obvious, but perhaps not as well understood, is how our reviews can still be significantly flawed, despite best efforts. In order for our evaluation to accurately consider prior probability, and to be systematic, we need all the evidence. Unfortunately, that’s not always possible if clinical trials remain unpublished or are otherwise inaccessible. There is good evidence to show that negative studies are less likely to be published than positive studies. Sometimes called the “file drawer” effect, this is not solely the fault of investigators, as journals seeking positive results may decline to publish negative studies. But unless these studies are found, systematic reviews are more likely to miss negative data, which means there’s a risk of bias in favor of an intervention. How bad is the problem? We really have no complete way to know, for any particular clinical question, just how much is missing or buried. This is a problem that has confounded researchers and authors of systematic reviews for decades.
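
The distortion can be demonstrated with a small simulation – entirely hypothetical numbers, a sketch rather than a model of any real literature. We simulate trials of a treatment with no true effect, then “publish” mostly the positive-looking ones:

```python
import random

# Hypothetical simulation of the "file drawer" effect: 1000 trials of a
# treatment with NO real effect, where positive-looking results are much
# more likely to be "published". All numbers are illustrative.

random.seed(1)

def run_trial(n=100):
    """Mean outcome difference between two arms of a null treatment."""
    treated = [random.gauss(0, 1) for _ in range(n)]
    control = [random.gauss(0, 1) for _ in range(n)]
    return sum(treated) / n - sum(control) / n

results = [run_trial() for _ in range(1000)]

# Positive-looking trials always get published; others only 20% of the time.
published = [r for r in results if r > 0.1 or random.random() < 0.2]

all_mean = sum(results) / len(results)
pub_mean = sum(published) / len(published)
print(f"all trials: {all_mean:+.3f}   published only: {pub_mean:+.3f}")
```

Averaging only what was “published” yields an apparent benefit for a treatment that does nothing; a systematic review can only be as unbiased as the literature it can find.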

Fortunately, it’s difficult to do a clinical trial without leaving any clues at all. You need to write a protocol, seek some sort of ethics board approval, find researchers (often physicians) to run your study, recruit patients, and finally, fund it. Assuming most investigators don’t set out to keep a clinical trial hidden from view, there may be other clues that find their way into the public domain. For example, at medical conferences there may be hundreds of academic “posters” and other presentations of research evidence – sometimes only the interim results of research studies. Only a fraction of this research ends up fully published in the medical literature. Sometimes data emerges in lawsuits, or in drug approval applications to regulators like the FDA, who may make this data available publicly. But searching for evidence shouldn’t rely on serendipity. To conduct the most accurate systematic review, we need a comprehensive system to track down every relevant clinical trial.

The simplest approach to finding relevant studies would be to register every single clinical trial conducted – ideally before the study begins, so that negative trials can’t subsequently be hidden without leaving any clues. Requiring a simplified registration at a central, international resource should not be too onerous a requirement for investigators planning to perform human experiments. Making this information publicly available would then allow researchers to determine how many trials had been performed, and could serve as a starting point for a truly systematic review. It’s a simple and fairly elegant approach, so it should be no surprise that it’s been implemented already – albeit ineffectively.

More than 25 years have passed since the first major call for a registry, yet even today we still don’t have an effective one. The pharmaceutical industry, led by GlaxoWellcome, started one voluntarily, but it was subsequently shuttered. (GlaxoSmithKline has just recently announced plans to return to this level of transparency, to their credit.) Legislation passed in 1997 mandated the registration of “federally or privately funded clinical trials conducted under investigational new drug applications”. The well-known ClinicalTrials.gov registry launched in 2000. In 2007, requirements were expanded to include all clinical trials (with the exception of early “phase 1” studies) for all products subject to the FDA’s regulation. There have been external pushes to require registration as well, including from the International Committee of Medical Journal Editors, who implemented this requirement in 2005 for any study considered for publication. This looked like a good strategy – researchers who avoided registering would find themselves cut out from subsequently publishing in the world’s major medical journals.
There was also the World Health Organization’s mandate to register in 2006, and the World Medical Association’s Declaration of Helsinki, which establishes ethical principles for clinical trials and was amended in 2008 to require that “Every clinical trial must be registered in a publicly accessible database before recruitment of the first subject.”

In light of this consensus throughout the research and clinical community that registries are a Good Thing, how effectively are they working? Not as well as you might think – only one-fifth of eligible trials have actually been registered, based on a 2011 analysis. Don’t pin this all on the pharmaceutical industry. Among the different groups of investigators, industry-sponsored research was more likely to be registered than government- or other-funded research. It seems no group is particularly good at walking the talk of registries, despite their obvious benefits to medical science. In response, a new bill has proposed further penalties, including requiring repayment of any federal (NIH) funding, and forbidding further funding, for any group that does not publish its data fully. Another proposed bill would impose FDA requirements on all international trials used to support any application made to market a drug in the USA.

Still, there continues to be a lack of motivation on the part of some regulators to embed the need for registries into their processes. Just recently in Canada, a “web-based list” of regulator-approved trials was announced by Health Canada, and it was widely criticized as out-of-touch and underwhelming – no mandatory registration, no enforcement, and no intent to make trial results publicly available. Compared to the FDA, it’s circa-1990s transparency and policy thinking.

Access to research information, ideally patient-level data, is what’s necessary to fairly evaluate the risks and benefits of therapies. This allows other researchers to look at the evidence themselves, and potentially validate findings. It’s the ultimate in data transparency. To the credit of one member of Big Pharma, GlaxoSmithKline (GSK) has now promised this level of transparency. And last week, in lauding GSK’s announcement, Fiona Godlee, editor of the British Medical Journal, announced that the BMJ will require this data disclosure – from all researchers, as of January 2013 – for any trial published in their journal:

And amid the plaudits, a moment of doubt. Surely what this apparently brave and benevolent action really serves to highlight is the rank absurdity of the current situation. Why aren’t all clinical trial data routinely available for independent scrutiny once a regulatory decision has been made? How have commercial companies been allowed to evaluate their own products and then to keep large and unknown amounts of the data secret even from the regulators? Why should it be up to the companies to decide who looks at the data and for what purpose? Why should it take legal action (as in the case of GlaxoSmithKline’s paroxetine and rosiglitazone), strong arm tactics by national licensing bodies (Pfizer’s reboxetine), and the exceptional tenacity of individual researchers and investigative journalists (Roche’s oseltamivir) to try to piece together the evidence on individual drugs?

Data secrecy has been the norm for decades. But perhaps it’s worth asking: just who benefits from this data paternalism? After all, who needs that information more than those prescribing these treatments – and the patients already taking the drugs in question? GSK’s move was an important one for a pharmaceutical manufacturer, and the company has also announced its support for the BMJ’s transparency requirements. Whether this will improve the trust that health professionals have in pharmaceutical-industry sponsored research remains to be seen. To be clear, the entire industry isn’t on board yet. In time, a genuine demonstration of transparency could help the pharmaceutical industry recover from the damage it has repeatedly inflicted on its own credibility.

I’m cautiously optimistic, with GSK’s announcement and the BMJ’s action, that there may be some momentum building. In September, the European Medicines Agency, the EU equivalent to the FDA, announced plans to proactively publish clinical trial data, and remarkably, allow access to full data sets. Ben Goldacre’s latest book, Bad Pharma, which provides a comprehensive yet accessible summary of the evidence and the consequences of a lack of transparency, has drawn widespread media attention in the UK and seems to be getting some traction in his calls for a fix. It will be interesting to see the North American reaction to his book, when it appears here in 2013.


We know evidence is missing or inaccessible. We know how to fix it. A lack of data transparency compromises the evidence base, prevents the best science-based care, and ultimately does a disservice to patients. The benefits of open data for the practice of medicine are obvious. Yet the foot-dragging and hand-waving continue, from manufacturers, regulators, and other interest groups. It’s shameful, and it needs to change.


Posted in: Clinical Trials, Politics and Regulation


18 thoughts on “It’s time for true transparency of clinical trials data”

  1. Scott says:

    In addition to mandates of this sort, I believe a proper system requires that the incentives of the people involved be actively aligned with publication of data (bottom-up approaches to complement the top-down mandates, if you like). I see two major incentive problems built into the system as it stands:

    1. Tenure. Tenure committees, by and large, consider positive results more favorably than negative ones. This is not only unfair to candidates for tenure (no matter how capable and careful the scientist, they cannot ensure ahead of time that their hypotheses will be correct), it discourages the publication of negative results – if nothing else because the time invested in getting the negative paper written and published cannot be spent on work the committee will like better.

    This one really calls for a cultural change in science, where it is more broadly accepted that negative results are just as important and valuable as positive. It’ll be hard, but necessary. (Whether the entire tenure concept really makes sense to keep is a different question, but anything which replaces it would likely have the same problem built in without cultural change.)

    2. Journals. Simply put, papers are published in journals if it is profitable to do so. That’s just broken. Initiatives like PLoS attempt to address this, but medicine is still far behind some other disciplines in this regard (e.g. physics and arXiv).

  2. rork says:

    Phew, I was worried that since “cough up the damn data” didn’t get a bold numbered heading it was going to be forgotten.

    Every year I ask groups for data that is supposed to be public, but really isn’t, usually not for clinical trials, but rather big array data measuring mRNA abundances in tumors (and such). Sometimes the data actually is public, but the clinical variables aren’t, or I can’t tell which sample in the repository corresponds to which one in the paper’s supplementary tables so I can’t piece it together, or it’s data so raw I could never reproduce their processed data. Anyway, they often write back, asking if my group would like to collaborate, and are essentially saying they won’t give it up – I should understand how much it cost them to obtain it, blah, blah, blah.

    That is extortion. To agree to the terms is rewarding the extortioner, and will help perpetuate the practice. Never agree to it. The extortioner is harming scientific progress and patients (unless the research has no practical future benefit – not an argument you’ll hear from many).
    As a reviewer, check to see that data is available, complete, and in good order. If not, the paper cannot be adequately reviewed.

    I am always tempted to write the journal and demand the data be made public or else they should retract the paper, and sometimes I do that, with success, but usually I say nothing because the other group will likely know exactly who the provocateur was.

  3. Ed Whitney says:

    ClinicalTrials.gov does a good job of reporting a study’s inclusion/exclusion criteria, intervention groups, and primary/secondary endpoints. Rarely do its protocols commit the investigator to a method of data analysis, leaving the reader to wonder whether the researchers are going to compare the groups on continuous measures (means, medians, variances) or in terms of predefined success/failure (chi squares, odds ratios). Sometimes one analysis will yield the magic p value when the other does not. Committing in advance to how the data will be analyzed would be a good idea.
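
    Ed Whitney’s point can be illustrated with a quick sketch (the sample size, true shift, and “responder” cutoff are all hypothetical): the same simulated trial data, analyzed as a comparison of means versus a comparison of predefined success/failure rates, will generally yield two different p-values.

```python
import math
import random

# Hypothetical illustration: identical trial data analyzed two ways.
# The sample size, true shift, and "responder" cutoff are assumptions.

random.seed(7)
n = 80
treated = [random.gauss(0.3, 1.0) for _ in range(n)]   # small true benefit
control = [random.gauss(0.0, 1.0) for _ in range(n)]

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def p_from_means(a, b):
    """Two-sample z-test comparing group means (normal approximation)."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    z = (ma - mb) / math.sqrt(va / len(a) + vb / len(b))
    return 2 * (1 - phi(abs(z)))

def p_from_responders(a, b, cutoff=0.5):
    """Two-sample z-test comparing predefined success/failure rates."""
    pa = sum(x > cutoff for x in a) / len(a)
    pb = sum(x > cutoff for x in b) / len(b)
    pooled = (pa * len(a) + pb * len(b)) / (len(a) + len(b))
    z = (pa - pb) / math.sqrt(pooled * (1 - pooled) * (1 / len(a) + 1 / len(b)))
    return 2 * (1 - phi(abs(z)))

print(f"compare means:      p = {p_from_means(treated, control):.3f}")
print(f"compare responders: p = {p_from_responders(treated, control):.3f}")
```

    If the analysis isn’t specified in advance, an investigator is free to report whichever of the two p-values looks better – which is exactly why pre-specification matters.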

  4. mho says:


    This New England Journal of Medicine article describes proposed legislation to improve clinical trial reporting.

    “On August 2, 2012, Representative Edward Markey (D-MA) introduced into the U.S. Congress the Trial and Experimental Studies Transparency (TEST) Act (H.R. 6272) to close these loopholes. The TEST Act expands reporting requirements under existing federal law by broadening the scope to include all interventional studies of drugs or devices, regardless of phase (i.e., including phase 1), design (i.e., including single-group trials), or approval status (i.e., making no distinction between trials of approved vs. unapproved products); requiring all foreign trials that are used to support marketing in the United States to be registered; mandating results reporting for all trials within 2 years after study completion (including trials of unapproved drugs or devices); and extending results reporting to include the deposition of consent and protocol documents approved by institutional review boards.”

  5. BobbyG says:

    Nice post. I will be citing it on my blog.

    Profit and transparency are inversely correlated. That whole pesky “efficient markets hypothesis” thing. Yves Smith has pointed out that the most transparent markets are, by definition, the least profitable.

    (“profitable” in the narrow sense of near-term net to the legally constituted private resource “owners” — not to the broader commonwealth)

  6. geo says:

    I was just reading about a publicly funded trial that cost about £7,000,000, yet the researchers are refusing to release data for the outcome measures that were laid out in their published protocol. It’s for biopsychosocial rehabilitative approaches, which have an important political aspect at the moment, as BPS reforms to the UK’s disability benefits system are being pushed through against much opposition from disabled people. These reforms are supported by the private insurance industry, with which all three of the principal researchers involved in this work report conflicts of interest.

    Some of the details are here:

    Requiring researchers to publish data in the manner laid out in their protocol seems such a basic first step…. but it’s still not happening, because it’s not in the interests of a lot of those with power and authority.

  7. stanmrak says:

    Dream on…

    as long as profit is the motive, true transparency and ethical behavior will only exist in Fantasyland. Legislators, government regulators, professional journals and research facilities at all levels are bought and paid for by the likes of Monsanto and Merck. They buy the studies, they write the laws, they bury unfavorable studies, they fund the universities that do the research — they invent the ‘science.’ Their behavior throughout history clearly demonstrates that they don’t care about anything but next quarter’s bottom line.

  8. Chris says:


    Legislators, government regulators, professional journals and research facilities at all levels are bought and paid for by the likes of Monsanto and Merck.

    Argument by blatant assertion is silly.

  9. ConspicuousCarl says:

    The importance of Gavura’s first point is demonstrated in today’s XKCD:

  10. Chris says:

    And here we have a name for stanmrak’s comment: Argumentum Ad Monsantium. ;-)

  11. Jose A Hernandez says:

    Scott is addressing a problem that is hardly ever mentioned: the obstacles to publication of negative results created by the incentives within academia, especially tenure committees. Much is written about the problems created by financial incentives, which are real. But this focus has the deleterious consequence that the academic world goes free of scrutiny while, as Scott mentions, it “discourages the publication of negative results.”

  12. geo says:

    There also seems to be little in the way of accountability or condemnation when results are spun and exaggerated. For many academics, this seems to just be an acceptable part of how academia works, and not something that is understood as being morally problematic.

  13. BillyJoe says:


    Good one. :)
    On top of that, he also has the defeatist’s attitude of giving up because it’s all just too hard.

  14. BillyJoe says:

    How about requiring that clinical trials be evaluated for flaws before they can be registered, so as to have a larger number of high-quality trials available for systematic reviews?

Comments are closed.