
What makes a health professional science-based? We advocate for evaluations of treatments, and treatment decisions, based on the best research methods. We compile evidence based on fair trials that minimize the risks of bias. And, importantly, we consider this evidence in the context of the plausibility of the treatment. The fact is, it's actually not that hard to get a positive result in a trial, especially when it's sloppily done or biased. And there are many ways to design a trial to demonstrate positive results in some subgroup, as Kimball Atwood pointed out earlier this week. And even when a trial is well done, there remains the risk of error simply due to chance alone. So to sort out true treatment effects from fake effects, two key steps are helpful in reviewing the evidence.

1. Take prior probability into account when assessing data. While a detailed explanation of Bayes' theorem could take several posts, consider prior probability this way: any test has flaws and limitations. Tests give probabilities based on the test method itself, not on what is being tested. Consequently, to evaluate the probability of "x" given a test result, we must incorporate the pre-test probability of "x". Bayesian analysis combines any existing data with the data collected in the test to give a prediction that factors in prior probabilities. It's part of the reason why most published research findings are false. (A worked example follows this list.)

2. Use systematic reviews to evaluate all the evidence. The best way to answer a specific clinical question is to collect all the potentially relevant information in a structured way, assess its quality, analyze it according to predetermined criteria, and then draw conclusions. A systematic review reduces the risk of cherry-picking and author bias, compared to non-systematic data collection or general literature reviews. A well-conducted systematic review gives us an answer based on the totality of available evidence – the best possible answer for a given question.
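
To make the first step concrete, here's a toy calculation of my own (the numbers are illustrative assumptions, not drawn from any particular trial). Suppose a trial has 80% power to detect a real effect and uses the conventional 5% false-positive threshold. Bayes' theorem tells us how much a "positive" result is actually worth, depending on how plausible the treatment was before the trial:

```python
# A minimal sketch of Bayes' theorem applied to a "positive" trial result.
# All numbers are illustrative assumptions for this example.

def posterior_probability(prior, power, false_positive_rate):
    """P(effect is real | trial came out positive)."""
    true_positives = prior * power
    false_positives = (1 - prior) * false_positive_rate
    return true_positives / (true_positives + false_positives)

# Implausible treatment: assume only 1 in 100 such treatments really works.
print(posterior_probability(prior=0.01, power=0.80, false_positive_rate=0.05))
# ~0.14 -- a "positive" trial is still probably a false positive

# Plausible treatment: assume a 50/50 chance it works before the trial.
print(posterior_probability(prior=0.50, power=0.80, false_positive_rate=0.05))
# ~0.94 -- the same result is now strong evidence
```

With an implausible treatment, a statistically "positive" trial still leaves roughly a six-in-seven chance that the finding is false; with a plausible treatment, the identical result is about 94% likely to reflect a real effect. That's the sense in which prior probability has to be part of the evaluation.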

These two steps are critically important, and so have been discussed repeatedly by the contributors to this blog. What is obvious, but perhaps not as well understood, is how our reviews can still be significantly flawed, despite best efforts. In order for our evaluation to accurately consider prior probability, and to be systematic, we need all the evidence. Unfortunately, that's not always possible if clinical trials remain unpublished or are otherwise inaccessible. There is good evidence that negative studies are less likely to be published than positive studies. Sometimes called the "file drawer" effect, it's not solely the fault of investigators, as journals seeking positive results may decline to publish negative studies. But unless these studies are found, systematic reviews are more likely to miss negative data, which means there's a risk of bias in favor of an intervention. How bad is the problem? For any particular clinical question, we have no complete way of knowing just how much evidence is missing or buried. This is a problem that has confounded researchers and authors of systematic reviews for decades.
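
A toy simulation shows why the missing studies matter (all numbers here are invented for illustration, and a real meta-analysis would weight each trial by its precision rather than taking a simple average):

```python
# Toy simulation of the "file drawer" effect: a treatment with no real benefit,
# studied in 50 small trials, where only impressive-looking results get published.
# Illustrative assumptions only.
import random
import statistics

random.seed(1)
true_effect = 0.0  # the treatment genuinely does nothing
trial_results = [random.gauss(true_effect, 0.2) for _ in range(50)]

# Journals (and file drawers) filter out the unimpressive results.
published = [result for result in trial_results if result > 0.1]

print(statistics.mean(trial_results))  # near zero: the honest, complete answer
print(statistics.mean(published))      # clearly "positive": the published answer
```

Pool everything and the answer is honest: an effect near zero. Pool only what made it into print, and a useless treatment appears to work. That's the bias a complete accounting of trials is meant to guard against.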

Fortunately, it's difficult to do a clinical trial without leaving any clues at all. You need to write a protocol, seek some sort of ethics board approval, find researchers (often physicians) to run your study, recruit patients, and finally, fund it. Assuming most investigators don't set out to keep a clinical trial hidden from view, there may be other clues that find their way into the public domain. For example, at medical conferences there may be hundreds of academic "posters" and other presentations of research evidence – sometimes only the interim results of research studies. Only a fraction of this research ends up fully published in the medical literature. Sometimes data emerges in lawsuits, or in drug approval applications to regulators like the FDA, who may make this data publicly available. But searching for evidence shouldn't rely on serendipity. To conduct the most accurate systematic review, we need a comprehensive system to track down every relevant clinical trial.

The simplest approach to finding relevant studies would be to register every single clinical trial conducted – ideally before the study begins, so that negative trials can't subsequently be hidden without leaving any clues. Requiring a simplified registration at a central, international resource should not be too onerous a requirement for investigators planning to perform human experiments. Making this information publicly available would then allow researchers to determine how many trials had been performed, and could serve as a starting point for a truly systematic review. It's a simple and fairly elegant approach, so it should be no surprise that it's been implemented already – albeit ineffectively. More than 25 years have passed since the first major call for a registry, yet even today we still don't have an effective one. The pharmaceutical industry, led by GlaxoWellcome, started one voluntarily, but it was subsequently shuttered. (To its credit, GlaxoSmithKline has just recently announced plans to return to this level of transparency.) In the United States, legislation passed in 1997 mandated the registration of "federally or privately funded clinical trials conducted under investigational new drug applications". The well-known clinicaltrials.gov launched in 2000. In 2007, requirements were expanded to include all clinical trials (with the exception of early "phase 1" studies) for all products subject to the FDA's regulation. There have been external pushes to require registration as well, including the International Committee of Medical Journal Editors, which implemented this requirement in 2005 for any study considered for publication. This looked like a good strategy – researchers who avoided registering would find themselves cut out of publishing in the world's major medical journals. There was also the World Health Organization's registration mandate in 2006, and the World Medical Association's Declaration of Helsinki, which establishes ethical principles for clinical trials and was amended in 2008 to require that "Every clinical trial must be registered in a publicly accessible database before recruitment of the first subject."

In light of this consensus throughout the research and clinical community that registries are a Good Thing, how effectively are they working? Not as well as you might think – only one-fifth of eligible trials had actually been registered, according to a 2011 analysis. Don't pin this all on the pharmaceutical industry: among the different groups of investigators, industry-sponsored research was more likely to be registered than government- or other-funded research. It seems no group is particularly good at walking the talk of registries, despite their obvious benefits to medical science. In response, a newly proposed bill would add further penalties, including requiring repayment of any federal (NIH) funding, and forbidding further funding, for any group that does not publish its data fully. Another proposed bill would impose FDA requirements on all international trials used to support any application to market a drug in the USA.

Still, there continues to be a lack of motivation on the part of some regulators to embed registries into their processes. Just recently in Canada, Health Canada announced a "web-based list" of regulator-approved trials that was widely criticized as out-of-touch and underwhelming – no mandatory registration, no enforcement, and no intent to make trial results publicly available. Compared to the FDA, it's circa-1990s transparency and policy thinking.

Access to research information, ideally patient-level data, is what's necessary to fairly evaluate the risks and benefits of therapies. It allows other researchers to look at the evidence themselves, and potentially validate findings. It's the ultimate in data transparency. To the credit of one member of Big Pharma, GlaxoSmithKline (GSK) has now promised this level of transparency. And last week, in lauding GSK's announcement, Fiona Godlee, editor of the British Medical Journal, announced that the BMJ will require this level of data disclosure from all researchers, as of January 2013, for any trial published in the journal:

And amid the plaudits, a moment of doubt. Surely what this apparently brave and benevolent action really serves to highlight is the rank absurdity of the current situation. Why aren’t all clinical trial data routinely available for independent scrutiny once a regulatory decision has been made? How have commercial companies been allowed to evaluate their own products and then to keep large and unknown amounts of the data secret even from the regulators? Why should it be up to the companies to decide who looks at the data and for what purpose? Why should it take legal action (as in the case of GlaxoSmithKline’s paroxetine and rosiglitazone), strong arm tactics by national licensing bodies (Pfizer’s reboxetine), and the exceptional tenacity of individual researchers and investigative journalists (Roche’s oseltamivir) to try to piece together the evidence on individual drugs?

Data secrecy has been the norm for decades. But perhaps it's worth asking: just who benefits from this data paternalism? After all, who needs that information more than those prescribing these treatments – and the patients already taking the drugs in question? GSK's commitment was an important one for a pharmaceutical manufacturer, and it has just announced its support for the BMJ's transparency requirements. Whether this will improve the trust that health professionals have in pharmaceutical-industry sponsored research remains to be seen. To be clear, the entire industry isn't on board yet. In time, a genuine demonstration of transparency could help the pharmaceutical industry recover from the damage it repeatedly inflicts on its own credibility.

I'm cautiously optimistic, with GSK's announcement and the BMJ's action, that there may be some momentum building. In September, the European Medicines Agency, the EU equivalent of the FDA, announced plans to proactively publish clinical trial data and, remarkably, to allow access to full data sets. Ben Goldacre's latest book, Bad Pharma, which provides a comprehensive yet accessible summary of the evidence and the consequences of a lack of transparency, has drawn widespread media attention in the UK, and his calls for a fix seem to be getting some traction. It will be interesting to see the North American reaction when the book appears here in 2013.

Conclusion

We know evidence is missing or inaccessible. We know how to fix it. A lack of data transparency compromises the evidence base, prevents the best science-based care, and ultimately does a disservice to patients. The benefits of open data for how we deliver medicine are obvious. Yet the foot-dragging and hand-waving continue, from manufacturers, regulators, and other interest groups. It's shameful, and it needs to change.

 


Author

  • Scott Gavura, BScPhm, MBA, RPh is committed to improving the way medications are used, and examining the profession of pharmacy through the lens of science-based medicine. He has a professional interest in improving the cost-effective use of drugs at the population level. Scott holds a Bachelor of Science in Pharmacy degree and a Master of Business Administration degree from the University of Toronto, and has completed an Accredited Canadian Hospital Pharmacy Residency Program. His professional background includes pharmacy work in both community and hospital settings. He is a registered pharmacist in Ontario, Canada. Scott has no conflicts of interest to disclose. Disclaimer: All views expressed by Scott are his personal views alone, and do not represent the opinions of any current or former employers, or any organizations that he may be affiliated with. All information is provided for discussion purposes only, and should not be used as a replacement for consultation with a licensed and accredited health professional.

Posted by Scott Gavura
