The Tamiflu Spin

I will start, for those of you who are new to the blog, with two disclaimers.

First, I am an infectious disease doctor. It is a simple job: Me find bug. Me kill bug. Me go home. I spend all day taking care of patients with infections. My income comes from treating and preventing infections. So I must have some sort of bias, the main one being I like to do everything I can to cure my patients.

Second, in 25 years I have, to my knowledge, accepted one thing from a drug company. The Unisin (that’s how I spell it) rep, upon transfer from my hospital, sent me a Fleet enema with a Unisin sticker on it. I show it proudly to all who enter my office. I do not even eat the drug company pizza at conference, and I cannot begin to tell you how painful that is.

As we leave (I hope) the H1N1 season and enter seasonal flu season, there has been a flurry of articles, originating in the British Medical Journal, questioning whether oseltamivir is effective in treating influenza. The specific complaint at issue is whether or not oseltamivir prevents secondary complications of influenza like hospitalization and pneumonia. Although you wouldn’t guess that was at issue from the reporting. As always, there is what the data says, what the abstract says, what the conclusion says, and what other people say it says. Reading the medical literature is all about blind men and elephants.

There is, evidently, going to be an investigation by the Council of Europe into whether or not the H1N1 pandemic was faked to sell more oseltamivir. Sigh.

Is oseltamivir effective against flu? Let’s start with background. What do I want to know about an antiviral to help determine its efficacy?

I would like a mechanism of action that would prevent replication of the virus. I can’t kill a virus, as I can a bacterium, but at least I can stop it from reproducing.

I would like the antiviral to be effective in the test tube at physiologically achievable concentrations.

I would like the antiviral to be effective in an animal model.

I would like the antiviral to be effective clinically: on challenge studies if ethical and in the real world.

There is the virulence of the organism, and there can be strain-to-strain variation in the virulence. Some influenza strains are better at spreading or killing than other strains. Everyone seems to have forgotten that when H1N1 started in Mexico it apparently had a horrific mortality rate in hospitalized patients.

“By 60 days, 24 patients had died (41.4%; 95% confidence interval, 28.9%-55.0%).”
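As a sanity check on numbers like these, a confidence interval for a proportion can be recomputed from the raw counts. The quote gives 24 deaths and a 41.4% point estimate, which implies roughly 58 hospitalized patients (my inference, not a figure from the quote); the exact method the authors used is unknown, but a Wilson score interval lands in the same neighborhood:

```python
# Hedged sketch: recomputing a binomial 95% CI in the style of the quote.
# n = 58 is inferred from 24 deaths / 41.4%; the paper's exact CI method
# is unknown, so small differences from the quoted 28.9%-55.0% are expected.
from math import sqrt

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    margin = (z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))) / denom
    return center - margin, center + margin

lo, hi = wilson_ci(24, 58)
print(f"mortality: {24/58:.1%}, 95% CI {lo:.1%}-{hi:.1%}")
# → mortality: 41.4%, 95% CI 29.6%-54.2%
```

The interval is wide because 58 patients is a small sample; that is the honest message of the early Mexican data — alarming, but imprecise.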

The worry was that there was going to be a repeat of the 1919 influenza pandemic. Fortunately the subsequent mortality rate was much less as it swept the world. H1N1 turned out to be a highly infectious, low virulence strain of flu. This time. But I would still fret that one day there will be another repeat of the 1919 pandemic. The current bird flu has a 66% mortality rate, but is not spread human to human. Maybe someday it will gain that ability. But at the beginning no one knew what the mortality of H1N1 or bird flu was going to be, and they prepared accordingly. Of course public health officials are always in a no-win situation. They are either going to over-prepare or under-prepare, and as a result at best look a fool and at worst look evil. Next time they may err on the side of under-preparing so they are not accused of faking a pandemic.

There is the host’s ability to respond to the infection. Just as there is variability in the virulence of the pathogen, there is variability in the host’s ability to control the infection. It is often the case that whether you live or die from an infection may be due to the immune system with which you are born. PubMed ‘toll-like receptor’, ‘polymorphism’, and ‘infection’ for further information that is beyond the scope of this entry. There may be an inherited predisposition to dying from influenza.

The H1N1 pandemic, mild as it was overall, caused disproportionately high mortality in children

“Between May and July 2009, a total of 251 children were hospitalized with 2009 H1N1 influenza. Rates of hospitalization were double those for seasonal influenza in 2008. Of the children who were hospitalized, 47 (19%) were admitted to an intensive care unit, 42 (17%) required mechanical ventilation, and 13 (5%) died. The overall rate of death was 1.1 per 100,000 children, as compared with 0.1 per 100,000 children for seasonal influenza in 2007. (No pediatric deaths associated with seasonal influenza were reported in 2008.) “

and pregnant women

“Data were reported for 94 pregnant women, 8 postpartum women, and 137 nonpregnant women of reproductive age who were hospitalized with 2009 H1N1 influenza… In all, 18 pregnant women and 4 postpartum women (total, 22 of 102 [22%]) required intensive care, and 8 (8%) died.”

Who gets the disease may be, depending on the organism, more important than the strain of infection.

There is how strong or powerful (meaningless terms to an ID doc, but part of the popular language of medicine) the antibiotic is. Antivirals are not that effective because they do not kill viruses, they only halt their replication. Given the prodigious replicative capacity of viruses, it is going to be impossible to shut down every virus from replicating with an antiviral. With HIV it took the combination of three different anti-retrovirals to (almost?) completely shut down viral replication. As those without immune systems prove with depressing regularity, no antibiotic will work for long if the host’s immune system cannot help control the infection.

Finally, there is promptness of therapy. For serious infections even a day’s delay in appropriate antibiotics can dramatically increase mortality. So the later you begin therapy, the less effect one should expect. This is one of the issues with a disease with a very rapid onset like influenza, and why the challenge studies, where you give the medication right after exposing the subject to influenza, show better effect. Viral replication will often rapidly outstrip the available antiviral.

What would I expect from an influenza medication?

Prevent disease after exposure?
Decrease the severity of the infection?
Prevent death in all cases? Prevent death in the more severe cases? Decrease the odds of dying?
Decrease complications of the disease?
Some combination of the above?

All good endpoints. As a rule, I expect antivirals to lessen the severity of disease if given early in the disease.

So what can we say for oseltamivir?

Does it have a mechanism of action that would interfere with viral replication? Yes. Oseltamivir blocks the activity of the viral neuraminidase enzyme, preventing new viral particles from being released by infected cells. Not a thrilling mechanism of action. The virus can multiply all it wants if it infects a cell, and since no drug is 100% effective, one would expect it would slow down the disease, not stop it.
Sometimes, as we shall see, that may be enough to make the difference between life and death.

Does oseltamivir work in the test tube? Yes. In animal models? Yeah. In human clinical trials? Yes.

It is in human clinical trials and the choice of efficacy endpoints where the current brouhaha starts.

An LA Times headline reads

“British medical journal questions efficacy of Tamiflu for swine flu — or any flu.”

The Atlantic, which has now supplanted the Natural News for the worst medical coverage, has a headline that reads

“The Truth About Tamiflu”

followed by the opening paragraphs.

“Two months ago, we pointed out in our story on flu in The Atlantic that the antiviral drug Tamiflu might not be as effective or safe as many patients, doctors, and governments think. The drug has been widely prescribed since the first cases of H1N1 flu surfaced last spring, and the U.S. government has spent more than $1.5 billion stockpiling it since 2005 as part of the nation’s pandemic preparedness plan.

Now it looks as if our concerns were correct, and the nation may have put more than a billion dollars into the medical equivalent of a mirage. This week, the British medical journal BMJ published a multi-part investigation that confirms that the scientific evidence just isn’t there to show that Tamiflu prevents serious complications, hospitalization, or death in people that have the flu. The BMJ goes further to suggest that Roche, the Swiss company that manufactures and markets Tamiflu, may have misled governments and physicians. In its defense, Roche stated that the company “has never concealed (or had the intention to conceal) any pertinent data.”

The medical equivalent of a mirage. Hmmm. And death? The Cochrane review, around which the whole controversy revolves, does not comment on prevention of death, nor does the BMJ feature or editorial on the topic. Into the second paragraph and already they are apparently making things up. I remember when I trusted the Atlantic. Interestingly, the Atlantic never quotes the results of the Cochrane review. They suggest that the review demonstrates that oseltamivir is worthless.

Does it? Start with the methods section.

They looked for randomized, placebo controlled trials. Cochrane gold.

“We excluded experimental influenza challenge studies as their generalizability and comparability with field studies is uncertain.”

Which is a shame, as the challenge studies always show efficacy. But they are not representative of the real world as they have the best case for treating with an antiviral: you can start the medication right when the disease may start.

They screened 1416 articles and ended up with 29 studies, 10 for effectiveness, the cream of the crop. 1416 articles is a lot of articles. I know they are not all the crème de la crème, which is why I am not a fan of relying on meta-analysis alone. That is a lot of ignored information. And their results?

Here is the conclusion in the abstract:

“Neuraminidase inhibitors have modest effectiveness against the symptoms of influenza in otherwise healthy adults. The drugs are effective postexposure against laboratory confirmed influenza, but this is a small component of influenza-like illness, so for this outcome neuraminidase inhibitors are not effective. Neuraminidase inhibitors might be regarded as optional for reducing the symptoms of seasonal influenza. Paucity of good data has undermined previous findings for oseltamivir’s prevention of complications from influenza. Independent randomized trials to resolve these uncertainties are needed.”

I am already confused.

“The drugs are effective postexposure against laboratory confirmed influenza, but this is a small component of influenza-like illness, so for this outcome neuraminidase inhibitors are not effective.”

They are effective but they are not. But the abstract suggests that in normal people, oseltamivir has modest efficacy.

Later, there is the Take Home message box, which says

“WHAT IS ALREADY KNOWN ON THIS TOPIC

Neuraminidase inhibitors (especially oseltamivir) have become global public health drugs for influenza
They prevent symptoms and shorten the duration of illness by about one day if taken within 48 hours of the onset of symptoms
Toxicity and the effects on complications have been debated

WHAT THIS STUDY ADDS

Neuraminidase inhibitors reduce the symptoms of influenza modestly
Neuraminidase inhibitors reduce the chance of people exposed to influenza developing laboratory confirmed influenza but not influenza-like illness
Evidence for or against their benefit for preventing complications of influenza is insufficient
Evidence for or against serious adverse events is lacking, although oseltamivir causes nausea”

To me the take home message suggests some efficacy. Let’s move on to the results and the discussion. Perhaps it will clarify the situation.

“The data suggest that neuraminidase inhibitors are effective at reducing the symptoms of influenza. The evidence is of modest benefit—reduction of illness by about one day. “

Which is what I would expect in a population of healthy, mostly young people with seasonal flu of low virulence. Note the caveats: young, healthy people and influenza of low virulence. Every season we have new strains of influenza, and the ability of influenza to kill people varies from year to year.

“This benefit has been generalized to assume benefits for very ill people in hospital. This seems reasonable, although it is worth remembering that we have no data to support this, and it is unlikely that ethics committees would allow a trial of no treatment for people with influenza who have life threatening disease.”

Yeah. We will probably not have randomized controlled trials in the seriously ill. While we do not have randomized, controlled, clinical trials, we do have data to support the use of oseltamivir in ill patients. Not no data.

In the Mexican experience with H1N1 mentioned above,

“After adjusting for a reduced opportunity of patients dying early to receive neuraminidase inhibitors, neuraminidase inhibitor treatment (vs no treatment) was associated with improved survival (odds ratio, 8.5; 95% confidence interval, 1.2-62.8).”
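That confidence interval (1.2 to 62.8) looks alarming, but its width is what small numbers do to an odds ratio: the standard error of the log odds ratio is the square root of the sum of the reciprocals of the four cell counts, so a handful of deaths in either arm blows the interval wide open. A sketch with hypothetical 2x2 counts (the quote does not give the study's actual table, so these numbers are illustrative only):

```python
# Hedged sketch of why small studies give cavernous odds-ratio CIs.
# The cell counts below are hypothetical, chosen only to show the mechanics
# of the standard Woolf (log) method; they are NOT the Mexican study's data.
from math import exp, log, sqrt

def odds_ratio_ci(a: int, b: int, c: int, d: int, z: float = 1.96):
    """Odds ratio and 95% CI for a 2x2 table:
         a = treated, survived     b = treated, died
         c = untreated, survived   d = untreated, died
    """
    or_ = (a * d) / (b * c)                 # odds ratio for survival
    se = sqrt(1/a + 1/b + 1/c + 1/d)        # SE of ln(OR): small cells dominate
    lo = exp(log(or_) - z * se)
    hi = exp(log(or_) + z * se)
    return or_, lo, hi

# hypothetical: 40 treated survivors, 5 treated deaths,
#               6 untreated survivors, 7 untreated deaths
or_, lo, hi = odds_ratio_ci(40, 5, 6, 7)
print(f"OR {or_:.1f}, 95% CI {lo:.1f}-{hi:.1f}")
# → OR 9.3, 95% CI 2.2-39.1
```

Even with a strong point estimate, the interval spans more than a 17-fold range, because the 1/5, 1/6, and 1/7 terms dominate the standard error. The study's result is statistically significant (the interval excludes 1), but the wide interval is an honest statement of how imprecise a small observational study is.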

And in pregnant women with H1N1 mentioned above

“As compared with early antiviral treatment (administered 2 days after symptom onset) in pregnant women, later treatment was associated with admission to an intensive care unit (ICU) or death (relative risk, 4.3).”

Or this experience with H1N1

“Of the 268 patients for whom data were available regarding the use of antiviral drugs, such therapy was initiated in 200 patients (75%) at a median of 3 days after the onset of illness. Data suggest that the use of antiviral drugs was beneficial in hospitalized patients, especially when such therapy was initiated early.”

And what about seasonal flu?

“No randomized trials of neuraminidase-inhibitor treatment of hospitalized influenza patients have been conducted. However, three observational studies suggest that oseltamivir treatment of hospitalized patients with seasonal influenza may reduce mortality. In one prospective Canadian study among hospitalized patients with seasonal influenza, (N=327; mean age, 77 years), in which 71% began oseltamivir treatment >48 hours after illness onset, oseltamivir treatment was significantly associated with a reduced risk of death (OR, 0.21; P=0.03) within 15 days after hospitalization as compared with untreated patients.7 In a subanalysis, in a Hong Kong study of hospitalized seasonal influenza patients (N=356; mean age, 70.2 years), oseltamivir treatment initiated within <96 hours after illness onset was independently associated with decreased mortality as compared with untreated patients (OR, 0.26; P=0.001).8 A retrospective chart review of hospitalized seasonal influenza patients in Thailand (N=445; mean age, 22 years), including 35% with radiographically confirmed pneumonia, reported that any oseltamivir treatment was significantly associated with survival (OR, 0.11; 95% CI, 0.04 – 0.30) as compared with untreated patients.”

In the seriously ill, people who are going to die from influenza, taking oseltamivir appears to decrease the odds of dying. Remember that the authors of the Atlantic article said oseltamivir didn’t prevent death. Maybe their firewall blocks access to PubMed and Google. Or maybe they are no good at evaluating elephants.

So what would you do if you were in charge of public health policy and confronted with a new strain of influenza, be it H1N1 or bird flu, with a potentially catastrophic mortality rate? Ignore it? Or get as much oseltamivir and vaccine as you could lay your hands on?

Maybe not the highest quality studies, but we have a biologic mechanism, we have test tube and animal studies, we have challenge studies, and we have retrospective studies that show efficacy. We have a lot of data to support efficacy of antivirals in some patient populations.

The question is not whether oseltamivir is effective, but in what population the medication will be effective and for what strains of influenza. Certainly I am not going to withhold oseltamivir from a hospitalized pregnant female with presumptive H1N1 given an 8% mortality rate. Does a 45 yo with H1N1 or seasonal flu need oseltamivir? No. An 85 year old with multiple medical problems? Probably. Nuance. Subtlety. Understanding the breadth and depth of a topic no longer seems to be part of the Atlantic medical reporting. I originally typed  the last sentence as the Atlantis medical reporting. Pity I have to change it; it seemed much more accurate the first time.

The Cochrane review states, “Because of the moderate effectiveness of neuraminidase inhibitors, we believe they should not be used in routine control of seasonal influenza.” The first half of the statement is a fact, the second half is opinion. The ‘we’ evidently being “Tom Jefferson, researcher, Mark Jones, statistician, Peter Doshi, doctoral student, Chris Del Mar, dean; coordinating editor of Cochrane Acute Respiratory Infections Group.” It is not a ‘we’ that I would use for deciding medical treatment. It is one thing to say that in healthy people from age 14 to 65 treatment is modestly effective, quite another to extrapolate that information to everyone, young and old, healthy and ill, pregnant or not.
If large populations are ill one less day during widespread disease, the positive effects of people returning to school and work can be enormous. Whether oseltamivir is worth the cost and the breeding of resistance requires a complicated cost-effectiveness analysis that I will never understand.

In the Atlantic article they confuse the treatment of seasonal flu in studies of young(er) patients, when there is some vaccine immunity and relatively low virulence, with preparing for a new pandemic strain with no vaccine and what appeared at first to be a fearsome mortality rate. In hindsight, now that we know the virulence of H1N1, we did not need to stockpile the oseltamivir for H1N1. If it had continued with the high mortality rate, then, based on the studies mentioned above, we would be glad they stockpiled. And if avian flu becomes transmissible human to human while maintaining virulence, well, let’s just say I am practicing not inhaling for a year.

Does oseltamivir prevent complications? Maybe. But that is not a primary endpoint in treating infections. You want to cure the patient, shorten the illness and prevent death. Oseltamivir does this. If it prevents complications, so much the better, but if it doesn’t I, and my patients, can live with that.

The Atlantic article partly concludes:

“There are a couple of take-home messages here. One is pretty obvious: Tamiflu may not be doing much good for patients with the flu who take it, and it might be causing harm.”

So disingenuous since this is not the conclusion of the Cochrane review. It depends on who you are treating and what strain. It seems that understanding nuance is not part of the Atlantic’s oeuvre. Or even reading the primary sources.

The other part of the Tamiflu kerfuffle is more difficult to discuss because it concerns information we do not have.

The Cochrane review alluded to the fact that they wanted to get original data about the ability of oseltamivir to prevent secondary complications of influenza, but the company would not release the information. Note: secondary complications. Pneumonia increases the risk of heart attack. Treating the pneumonia with penicillin does not prevent the heart attack, but that does not make the treatment of pneumonia less beneficial. Only living people can get a heart attack anyway.

“Attempts to deal with these shortcomings were unsuccessful: although three of five first authors of studies on oseltamivir treatment responded to our contact, none had original data and referred us to the manufacturer (Roche), which was not able to unconditionally provide the information as quickly as we needed it to update this review.”

But the Atlantic was more, well, descriptive

“The dog ate my homework
But when the Cochrane team, led by Chris Del Mar, from Bond University in Australia, re-examined the studies they had previously used in 2006, they found some discrepancies. It turned out that only two of the ten studies had ever been published in medical journals, and those two showed the drug had very little effect on complications compared to a dummy pill, or placebo. So the Cochrane reviewers decided to look at the data for themselves.

First they went to the lead authors of the published studies—the researchers who were supposed to have access to all of the data. One author said he had lost track of the data when he moved offices and the files appeared to have been discarded. The other said he’d never actually seen the data himself, and directed the Cochrane team to go directly to the company.

Four months and multiple requests later, the Cochrane researchers had a hodgepodge of data from the company, including two studies that showed the drug was ineffective, but which the company had never published. Roche also provided data from a third study, which involved 1,447 adults and adolescents aged 13-80, the largest study of the drug ever conducted. Yet the company never published that one either. (A summary of this and other studies is available at www.roche-trials.com). But with only partial data, the Cochrane team couldn’t even figure out what the study had been intended to measure.”

That is a problem.

One of the things I have learned in blogging and podcasting is how limited and unimpressive meta-analyses and structured reviews are. They do pool the best studies. But it often seems that in the process of choosing the studies, much important and relevant information is not considered. Most of infectious diseases is not based on randomized, placebo controlled trials and does not need to be. After reading the influenza Cochrane reviews I am starting to understand the point behind another BMJ meta-analysis. One would have to be a wackaloon at the most bizarre fringes of medicine to demand that I treat endocarditis or meningitis on the basis of such a trial. Other diseases? Not so much.

There is a bit of irony in the whole process. The first Cochrane review of oseltamivir that suggested benefit from oseltamivir was published in 2005.
One would have thought they knew what they were doing since the lead author is touted by the Atlantic as the master of the influenza literature.
Subsequently, a Japanese physician wanted to know more about the details of the effect of oseltamivir on complications. Part of the data to demonstrate that oseltamivir is effective is from an article entitled “Impact of oseltamivir treatment on influenza-related lower respiratory tract complications and hospitalizations,” which was a summary of 10 unpublished trials done by the makers of oseltamivir. The review suggested that oseltamivir halved hospitalization.

The Cochrane review, which had used the published data in an earlier meta-analysis, decided that they needed to be more rigorous this time. Why they didn’t bother, with their alleged mastery of the literature, to be this rigorous the first time I am uncertain, and it casts a sliver of doubt in my mind as to the rigor of the influenza vaccine meta-analysis, so touted in another Atlantic article discussed on this blog.

Turns out Roche tried their best not to release all the data. First they asked the Cochrane reviewers to sign a non-disclosure contract, then said they were giving the information to another group instead. Why the majority of the studies were unpublished was explained by lame excuses.

“It begged the question: why were so many of the trials still unpublished and not easily accessible?
When the BMJ expressed concern to Roche that eight of 10 treatment trials were unpublished and therefore unverifiable by the general medical community, Roche said that the additional studies “provided little new information and would therefore be unlikely to be accepted for publication by most reputable journals.”
They also added that now it is standard practice for Roche to publish all its clinical trial data, but this was not standard policy within Roche or elsewhere within the industry seven to 10 years ago. “At the time, it was considered that the studies that were published (2 abstracts and 2 full manuscripts) reflected accurately the benefits of the drug,” they said.”

Good questions. Given the history of pharmaceutical companies hiding and obfuscating important data about their drugs, it would be nice if they released the data. Most drug studies are funded by pharmaceutical companies. Does that invalidate the study? No. A drug company study can be just as well done as one funded by an NIH grant. That outcomes will be slightly tilted in favor of the drug if it is funded by the company, as compared to non-drug-company-funded studies, is recognized. Some studies are well done, some studies are little better than infomercials. In the end you have to read the literature, not the reviews. So I can’t comment on the validity of oseltamivir to prevent complications. Neither can the Cochrane review. Just the Atlantic, who concludes that it is all a mirage. So much sound and fury, signifying nothing.

Then, for each study, there is what the results of the study are, there is the spin put in the discussion, and finally the spin that the drug company reps put on the study when they show it to a doctor.

Kind of like a Cochrane review, huh?

The review says Tamiflu is modestly effective.
Yep.
The conclusion says “Because of the moderate effectiveness of neuraminidase inhibitors, we believe they should not be used in routine control of seasonal influenza.”
Fact, then Spin.
And the drug rep, er, I mean Atlantic says it is nearly worthless.
Lots of unjustified spin.
Remember the conclusion of the review, “Evidence for or against their benefit for preventing complications of influenza is insufficient.”
Pretty mild. The BMJ has a nice discussion of the information, such as can be determined, from the studies on prevention. By themselves they showed no efficacy but did (maybe) when combined. But with no access to the data, there remains uncertainty.

Do drug companies hide and spin important information? Do drug companies inject bias into studies? Do drug companies influence doctors’ prescribing habits? Is homeopathy nothing but water?
Yes. It is why you need to consider the entire medical literature.

But to quote the ever helpful Dr. Hill

“All scientific work is incomplete, whether it be observational or experimental. All scientific work is liable to be upset or modified by advancing knowledge. That does not confer upon us a freedom to ignore the knowledge we already have, or to postpone the action that it appears to demand at a given time.”

Like evolution, there are multiple lines of evidence to demonstrate oseltamivir efficacy against influenza. When it should be used to get the most bang for the buck depends on the strain of influenza and the patient being treated. And when (not if) we get a pandemic influenza that is both highly infectious and highly virulent, I hope we will have a drug like oseltamivir. It will not be a panacea, like penicillin for syphilis. But it will keep some people alive who would otherwise die. The Atlantic concludes

“The more important issue, however, involves the need for trust in science and medicine. Governments, public health agencies, and international bodies such as the World Health Organization, have all based their decisions to recommend and stockpile Tamiflu on studies that had seemed independent, but had in fact been funded by the company and were authored almost entirely by Roche employees or paid academic consultants. So did the Cochrane Collaboration, at least in its earlier assessments of Tamiflu. Millions of flu patients have taken the drug as a result.”

We also need the 4th estate to write and publish quality articles on science and medicine, and not go for the easy story of good and bad, and obscure important detail in inflammatory prose. It would be nice if the 4th estate took the time to, oh, maybe actually understand the nuances involved in influenza prevention and treatment. The Atlantic is 0 for 2 with their reporting on influenza. At least I get a topic to write about, and, like their spiritual colleagues Dr. Mercola and The Natural News, the worse the reporting, the better the blog.

Posted in: Pharmaceuticals, Politics and Regulation, Public Health, Science and the Media


36 thoughts on “The Tamiflu Spin”

  1. twaza says:

    Excellent review.

    Thanks very much.

  2. windriven says:

    The American education system seems not to adequately prepare people to analyze nuanced data. Rather, people come to expect certainties: a bullet is either a silver bullet or it is no bullet at all.

    Beneath the turbid waters of debated effectiveness lies a question more easily answered: is tamiflu better than its alternative?

    At least until scramiflu is developed the answer is ‘yes,’ especially when considered in light of the relatively insignificant reported side-effects.

  3. Zoe237 says:

    “After adjusting for a reduced opportunity of patients dying early to receive neuraminidase inhibitors, neuraminidase inhibitor treatment (vs no treatment) was associated with improved survival (odds ratio, 8.5; 95% confidence interval, 1.2-62.8).”

    What’s with the huge confidence intervals? This really seems like a big case of “we don’t know.”

    “The conclusion says “Because of the moderate effectiveness of neuraminidase inhibitors, we believe they should not be used in routine control of seasonal influenza.”

    I take this to mean that every person who has the flu doesn’t need tamiflu. Would you disagree with this? In the cases in the past in which I’ve gotten the flu, I never even go to the doctor. Are there are side effects to Tamiflu? It seems prudent to me to restrict its use to high risk people. If you’re admitted to the hospital, I guess that’d make you high risk.

    The downside of raising the alarm bells wrt public health scares is that people stop believing you after awhile… the little boy who cried wolf. I’m assuming that’s why 40-50% of doctors refused the flu shot… they’d btdt for the past 30 years with one scare after another. At least pork prices went down!

  4. AAPrescott says:

    I found this article very interesting and there is no doubt that the headlines in medical reportage often give a very misleading impression (and even the opposite of the content in some cases). However, I also find a degree of selectivity, and assumptions, in this critical review. And yes, my irony meter went off a few times reading this and other articles on the website, particularly where I read that you were not keen on the results of some of the studies because you found the studies limited in some way – brought a smile to my face.

  5. windriven says:

    @ Zoe237

    “Are there are side effects to Tamiflu?”

    I did a Pubmed search this morning and found only nausea and abdominal pain.

    I don’t think Dr. Crislip was arguing for routine use of Tamiflu. Rather, he seems to argue for its use in those at particular risk.

    Where did you get the statistic of 40-50% of physicians eschewing the flu vaccine? Was this the trivalent vaccine or H1N1? If accurate, that is a truly disturbing statistic.

    Your ‘boy who cried wolf’ metaphor is right on the money. Unfortunately, viral pandemics are very difficult to predict and it seems a certainty that people will fall to complacency regarding regular vaccination – much to our collective peril at some point.

    I lived in New Orleans for 20 years and each year watched local weatherpersons standing on highway overpasses gleefully reporting that ‘when the Big One comes’ water will be up to here (some arbitrarily high point above ground level). New Orleanians heard the same message year in and year out and they and their elected officials became complacent. And the price of their complacency got paid in full.

  6. Windriven: Great points.

    I’m curious, too, to learn more about the 40-50% of doctors not getting the flu shot – many claims were made about local doctors and nurses not getting it in our community when the facts disagreed.

    Your comparison to New Orleans is a great example – it’s just a shame that people don’t understand the risk/reward. The “expense” of being wrong when “the big one” doesn’t come is far less than that of being wrong when “the big one” does come.

    Never mind the fact that having a bigger uptake of the H1N1 shot (at least in our area; I haven’t seen the data for the US) in children would also, logically, limit the rate of transmission and may have actually saved a lot more lives – something we’ll probably never know.

  7. OrganicDen says:

    The problem is that the media created a scare-mongering frenzy that clearly was overblown. The southern hemisphere went through the flu season without the huge problems that were predicted…
    The investigation by the European Union has its merits …

    Tamiflu seems to be a victim in this situation, however – it is better to have a weak drug than none at all.

  8. vegaz says:

    “There is, evidently, going to be an investigation by the European Union into whether or not the H1N1 pandemic was faked to sell more oseltamivir . Sigh.”

    Please pretty please:

    The Council of Europe has nothing to do with the EU!

  9. windriven says:

    @sarniaskeptic

    I don’t want to speak for Zoe but I believe she may have assigned the vaccination rate for HCWs to doctors. There are citations on Pubmed that support that low rate for a cross section of health care workers. But I have been unable to find credible numbers for physicians be they high, low or middling.

  10. Zoe237 says:

    That may very well be, that the 40% was for all HCW and I am wrong. I’ll try to find my original source.

  11. Zoe237 says:

    http://stanford.wellsphere.com/brain-health-article/will-healthcare-workers-refuse-the-swine-flu-vaccine/789277

    Yes, it’s all hcw, but I don’t think it changes my argument that much. I think the BMJ article was a survey of hcw in Hong Kong of H1N1, but another article says that about 35% of HCW in the U.S. choose to vaccinate for the regular flu annually.

    Of course, I believe in personal choice, but HCW are directly putting others at risk whom they are supposed to be helping.

  12. jimpurdy says:

    I seem to be very susceptible to what I call the Friday Afternoon Infection Syndrome.

    I’m usually just fine all week long, but then I often start to get a really nasty infection on a Friday afternoon, when it’s too late to see my primary care doctor. By the time I crawl into the doctor’s office on Monday, I’m half dead.

    Those nasty little bugs must have evolved some kind of internal calendar that lets them attack at the start of a weekend.


  13. BillyJoe says:

    Mark Crislip.

    Maybe I’m just confused by your writing style or perhaps I don’t have sufficient background knowledge, but I have failed to get the meaning of your article in many places:

    “Which is a shame, as the challenge studies always show efficacy. But they are not representative of the real world as they have the best case for treating with an antiviral: you can start the medication right when the disease may start.”

    If they are “not representative of the real world”, why is it “a shame” that challenge studies are not included?

    “They screened 1416 articles and ended up with 29 studies, 10 for effectiveness, the cream of the crop. 1416 articles is a lot of articles.”

    A very confusing paragraph for me: Does “cream of the crop” refer to the 29 studies or the 10? When you say “10 for effectiveness”, what were the other 19 trials testing for? Complications? Side-effects?

    “I know they are not the crème de la crème, and is why I am not a fan on just relying on meta-analysis alone. That is a lot of ignored information.”

    You said that the 29 studies are the “cream of the crop” so what does “they are not the crème de la crème” refer to?
    And why should you not ignore the 1387 studies that do not meet the criteria of methodological robustness and are therefore eliminated from the meta-analysis? Does that not increase the reliability of your results?

    “The Cochrane review states, “Because of the moderate effectiveness of neuraminidase inhibitors, we believe they should not be used in routine control of seasonal influenza.” The first half of statement is a fact, the second half is opinion…It is not a ‘we’ that I would use for deciding medical treatment. It is one thing to say that in healthy people from age 14 to 65 treatment is modestly effective, quite another to extrapolate that information to everyone, young and old, healthy and ill, pregnant or not.”

    When the Cochrane reviewers state that they believe they should not be used in “routine” control of seasonal influenza, the use of the word “routine” means they are not referring to their use in *special* cases – like the ones you mentioned, for example. It seems to me that they are not actually disagreeing with you.

    “Whether oseltamivir is worth the cost and the breeding of resistance requires a complicated cost-effective analysis that I will never understand.”

    Do you mean that it is a failing on your part, or do you mean that a cost/benefit analysis is too meaningless to be of any value?

    “One of things I have learned in blogging and podcasting is how limited and unimpressive meta-analysis and structured reviews are. They do pool the best studies. But often it seems that the process of choosing the studies, much important and relevant information is not considered.”

    I just don’t understand this. They “pool the best studies” but they “leave out important and relevant information”. What can that mean? Are you just disagreeing about the criteria for choosing the “best studies”, or do you think that studies with methodological flaws can still teach us something? If so, which ones and what can they teach us?

    “Most of infectious diseases is not based on randomized, placebo controlled trials and does not need to be.”

    Surely when oseltamivir was first developed, RCTs would have been essential to establish efficacy. The fact that the company hid 8 of the 10 studies should surely raise a red flag.

    “It is why you need to consider the entire medical literature.”

    But surely a meta-analysis/systematic review does just that – it reviews all the trials and appropriately rejects those with methodological flaws sufficient to invalidate them. If you think their criteria are too stringent and that they exclude worthwhile studies, you need to explain what your criteria would be and why you disagree with theirs.

    regards,
    BillyJoe

  14. BillyJoe says:

    jimpurdy,

    “I’m usually just fine all week long, but then I often start to get a really nasty infection on a Friday afternoon, when it’s too late to see my primary care doctor. By the time I crawl into the doctor’s office on Monday, I’m half dead.”

    Same.
    Except that, by Monday, I have usually recovered sufficiently to go back to work, having ruined my week-end :(

  15. BillyJoe says:

    Mark,

    I’ve had time to re-read the article and I’ve read some of the references to try to get some clarification, but all I have is more questions:

    “We will probably not have randomized controlled trials in the seriously ill.”

    This is the problem, isn’t it? Once a drug has come into widespread use and it “seems” to be effective, it becomes almost impossible to conduct an RCT, especially on seriously ill patients. Hence the importance of performing RCTs *before* a drug becomes widely used.

    “While we do not have randomized, controlled, clinical trials, we do have data to support the use of oseltamivir in ill patients. Not no data.”

    The question is how reliable is that data.
    I am reminded of the use of corticosteroids in premature babies to prevent blindness which seemed to be very useful until a RCT was done that showed it was actually harmful.

    “The Cochrane review, which had used the published data in an earlier meta analysis, decided that they needed to be more rigorous this time. Why they didn’t bother, with their alleged mastery of the literature, to be this rigorous the first time I am uncertain”

    Perhaps they learn from their mistakes. Perhaps, with experience, they improve their inclusion criteria to improve the reliability of their results.

    “After reading the influenza Cochrane reviews I am starting to understand the point behind another BMJ meta-analysis.”

    Yet in all the situations they mention in support of their proposition (see link below), there is prior evidence of efficacy from RCTs. I don’t get it.
    http://www.bmj.com/cgi/content/full/333/7570/701
    (Besides, it was not always obvious that parachutes would make much difference, and there was probably a lot of trial and error before a reliable parachute was developed. The first parachute was probably almost completely useless.)

    BJ

  16. Mark Crislip says:

    “Which is a shame, as the challenge studies always show efficacy. But they are not representative of the real world as they have the best case for treating with an antiviral: you can start the medication right when the disease may start.”
    If they are “not representative of the real world”, why is it “a shame” that challenge studies are not included?

    If the question is efficacy, the challenge studies are information in support

    “They screened 1416 articles and ended up with 29 studies, 10 for effectiveness, the cream of the crop. 1416 articles is a lot of articles.”
    A very confusing paragraph for me: Does “cream of the crop” refer to the 29 studies or the 10? When you say “10 for effectiveness”, what were the other 19 trials testing for? Complications? Side-effects?

    yes, complications and side effects

    “I know they are not the crème de la crème, and is why I am not a fan on just relying on meta-analysis alone. That is a lot of ignored information.”
    You said that the 29 studies are the “cream of the crop” so what does “they are not the crème de la crème” refer to?
    And why should you not ignore the 1387 studies that do not meet the criteria of methodological robustness and are therefore eliminated from the meta-analysis? Does that not increase the reliability of your results?

    The point is that there is more data to consider, as a clinician, than the results of a meta, which are wrong 35% of the time.

    “The Cochrane review states, “Because of the moderate effectiveness of neuraminidase inhibitors, we believe they should not be used in routine control of seasonal influenza.” The first half of statement is a fact, the second half is opinion…It is not a ‘we’ that I would use for deciding medical treatment. It is one thing to say that in healthy people from age 14 to 65 treatment is modestly effective, quite another to extrapolate that information to everyone, young and old, healthy and ill, pregnant or not.”
    When the Cochrane reviewers state that they believe they should not be used in “routine” control of seasonal influenza, the use of the word “routine” means they are not referring to their use in *special* cases – like the ones you mentioned, for example. It seems to me that they are not actually disagreeing with you.

    what is routine? A vague recommendation by 4 people.

    “Whether oseltamivir is worth the cost and the breeding of resistance requires a complicated cost-effective analysis that I will never understand.”
    Do you mean that it is a failing on your part, or do you mean that a cost/benefit analysis is too meaningless to be of any value?

    a failing on my part; I do not understand cost-benefit analysis methodologies well enough to be able to critique them

    “One of things I have learned in blogging and podcasting is how limited and unimpressive meta-analysis and structured reviews are. They do pool the best studies. But often it seems that the process of choosing the studies, much important and relevant information is not considered.”

    I just don’t understand this. They “pool the best studies” but they “leave out important and relevant information”. What can that mean? Are you just disagreeing about the criteria for choosing the “best studies”, or do you think that studies with methodological flaws can still teach us something? If so, which ones and what can they teach us?

    the point is that metas are limited in their applicability.

    “Most of infectious diseases is not based on randomized, placebo controlled trials and does not need to be.”
    Surely when oseltamivir was first developed, RCTs would have been essential to establish efficacy. The fact that the company hid 8 of the 10 studies should surely raise a red flag.

    It is nice to have RCTs, and drug companies are weasels

    “It is why you need to consider the entire medical literature.”
    But surely a meta-analysis/systematic review does just that – it reviews all the trials and appropriately rejects those with methodological flaws sufficient to invalidate them. If you think their criteria are too stringent and that they exclude worthwhile studies, you need to explain what your criteria would be and why you disagree with theirs.

    metas are a part of the argument; see the post on Hill’s criteria.

    “We will probably not have randomized controlled trials in the seriously ill.”
    This is the problem, isn’t it? Once a drug has come into widespread use and it “seems” to be effective, it becomes almost impossible to conduct an RCT, especially on seriously ill patients. Hence the importance of performing RCTs *before* a drug becomes widely used.

    no argument there

    “While we do not have randomized, controlled, clinical trials, we do have data to support the use of oseltamivir in ill patients. Not no data.”

    The question is how reliable is that data.
    I am reminded of the use of corticosteroids in premature babies to prevent blindness which seemed to be very useful until a RCT was done that showed it was actually harmful.

    or steroids in sepsis as another example where trials keep giving contradictory results

    Again, see the Hill’s criteria post

    “The Cochrane review, which had used the published data in an earlier meta analysis, decided that they needed to be more rigorous this time. Why they didn’t bother, with their alleged mastery of the literature, to be this rigorous the first time I am uncertain”
    Perhaps they learn from their mistakes. Perhaps, with experience, they improve their inclusion criteria to improve the reliability of their results.

    perhaps. but it brings into question the results of their other metas

    metas are seen by many as a be-all and end-all, esp the Cochrane reviews. They are just another, potentially flawed, mechanism to get at the ever-changing ‘truth’ of how best to treat patients.

  17. wales says:

    Vegaz inaccurately stated that the Council of Europe (COE) has nothing to do with the European Union (EU).

    The COE was founded in 1949 by a group of individuals who also founded the European Coal and Steel Community (ECSC) and the European Economic Community (EEC), precursors to the EU. The COE has 47 members; the EU has 27 members and was formally established in 1993. The COE and EU complement each other and collaborate on a number of joint programs. Among other things, the organizations share the same flag, originated by the COE.

    What’s the relationship of COE to pharmaceuticals and vaccines? The European Directorate for the Quality of Medicines and Healthcare (EDQM) and the European Pharmacopoeia operate under the auspices of the Council of Europe.

    http://www.edqm.eu/en/Homepage-628.html
    http://www.coe.int/

  18. BillyJoe says:

    Mark Crislip.

    “If the question is efficacy, the challenge studies are information in support”

    The drug is going to be used in the general population, so surely we need to see how it performs out there. It could turn out that the drug is effective only if given immediately after exposure which would make it practically useless.

    “The point is that there is more data to consider, as a clinician, than the results of a meta, which are wrong 35% of the time”

    This must mean there is a way to assess a meta study that is more reliable than the meta study itself. I suppose I was asking what that method is.

    “what is routine?”

    Something that is done “routinely”, meaning not just for special groups. At least that is my understanding. What is the “accepted” definition of “routine”?

    “a failing on my part; I do not understand cost-benefit analysis methodologies well enough to be able to critique them”

    Nevertheless, a very important consideration in the context of recommending or not recommending the stockpiling of the drug (I know your article was about effectiveness, not whether it should be stockpiled).

    “the point is that metas are limited in their applicability.”

    I guess I was asking: in what way are meta’s limited in their applicability and, if so, what other method outperforms them?

    “It is nice to have RCTs, and drug companies are weasels”

    Yes, 8 out of the 10 studies were never released, even on repeated request over a period of time by several different professionals. Yet they are supposed to show much better results than the two that were published – they refuse to publish the trials showing the better results! That’s hard to believe. And it was these results on which the decision to stockpile the drug was made.

    “metas are a part of the argument; see the post on Hill’s criteria.”

    I see it is one of yours from a couple of weeks back – must have missed it by a couple of days! I’ll have a look as soon as I get the chance.

    “metas are seen by many as a be-all and end-all, esp the Cochrane reviews. They are just another, potentially flawed, mechanism to get at the ever-changing ‘truth’ of how best to treat patients.”

    Okay I need to read that post, but as a first pass:

    My understanding is that RCTs were originally set up because it was realised that personal experience was far too unreliable a method on which to base decisions about investigations and treatments. Unfortunately, it seems that most who do RCTs actually do not know how to do them properly and, as a result, most RCTs are too methodologically flawed to provide the answer to the question being asked of them. Hence the meta-analysis (or systematic review). Obviously there could just as easily be flawed systematic reviews. For example, they might include methodologically flawed RCTs, or they might have too-rigid exclusion criteria resulting in the exclusion of trials that are sound. I suppose the result would also be affected by the “bottom drawer” effect.

    However, it seems to me that the solution is to improve the quality of the systematic reviews and to *register* clinical trials (not sure if that is happening universally as yet).

    Another solution that I have posted elsewhere, but that has never gotten any response from fellow posters, is that when trials are registered (a trial that is not registered doesn’t get published) they should also be *licensed* by an expert panel. The expert panel’s job would be twofold: to reject any trials with methodological flaws and to educate the researchers on proper methodology. As with registration, any trial that is not *licensed* does not get published. Same for systematic reviews.
    The total number of trials of Tamiflu is over 1400, but only about 30 were fit to be included in the systematic review. That is an enormous waste. Surely the registration and licensing of clinical trials and systematic reviews would be a far better use of limited resources.

    regards,
    BillyJoe

  19. BillyJoe says:

    Mark Crislip,

    I have read your article on Hill’s criteria, but I have not yet read the comments, so you may have already answered the following questions.

    Who can argue with those criteria (except that “Plausibility” and “Coherence” seem to be the same thing)?

    However, regarding the question of the reliability of meta-analyses, you referenced the following study:
    http://nejm.highwire.org/cgi/content/full/337/8/536#F1
    This study compares large RCTs to the results of previous Meta-analyses.

    I have to say I get quite a different message from the one you got.
    If you look at this chart:

    http://nejm.highwire.org/cgi/content/full/337/8/536/F1

    The number of patients in the *large RCTs* mostly exceeds the total number of patients in the *meta-analyses*. And, since a meta-analysis is effectively a large RCT, they are effectively comparing larger *large RCTs* with smaller *large RCTs* (the smaller *meta-analyses*). All things being equal, you would EXPECT the *large RCTs* to be more accurate than the smaller *meta-analyses* (or at least to show significant differences from them).

    Also (and I haven’t read it in detail), I would assume that the *large RCTs* were chosen for methodological soundness. Am I right? Whereas the preceding meta-analyses would very likely contain small RCTs including some with at least some methodological flaws (if only because they were performed longer ago when there was less scrutiny).

    In summary, what I get from this is that RCTs need to be large and methodologically sound. If this is the case, the subsequent meta-analyses that are done on them will inevitably be similarly methodologically sound, but larger by at least an order of magnitude and hence more reliable by virtue of having more reliably eliminated the effect of a chance result.

    In other words, the take home message for me is that we should be doing methodologically sound large RCTs.
    If we do, then the Meta-analyses we subsequently do on them will be even more reliable than the individual large RCTs.

    Hence the suggestion in my last post of registering and licensing all clinical trials, the penalty of not doing so being the inability to get your trial published.
    Maybe I’m living in a fantasy world though.

    regards,
    BillyJoe

  20. Lawrence C. says:

    Thank you for explaining this “controversy” in terms designed to educate instead of merely entertain. I hope someone at the Atlantic reads this post and decides to make you a consulting editor for medical articles. Obviously their need is great.

  21. BillyJoe says:

    Hmmm, what I have written must have been so incredibly ignorant as to be not worthy of a reply. :(

  22. antipodean says:

    BillyJoe

    You are largely correct, except they are talking about whether meta-analyses done *before* the large RCTs are reliable estimates or not. Mega-trials are expensive to run, after all – and you have to wait many years for the outcome. And yes, the subsequent meta-analyses of the mega-trials will be even more accurate. But again you have to spend mega bucks and wait many, many years to know this.

    Secondly clinical trial registration became largely compulsory years ago. There are now a number of these world-wide. You’re not living in a fantasy world but you could google some of this stuff and rest assured that what you’re thinking is already the gold standard of behaviour in trials.

  23. BillyJoe says:

    antipodean,

    “You are largely correct, except they are talking about whether meta-analyses done *before* the large RCTs are reliable estimates or not.”

    Except that Mark Crislip is using that as an argument against meta-analysis. Of course meta-analyses are not going to be reliable if they are based on flawed RCTs. The solution is to perform sound RCTs. Those, and the meta-analyses based on them, are really the only reliable way to obtain evidence for the effectiveness of medical treatments.
    All the other Hill’s criteria are merely a way of assessing “prior probability” and whether or not a treatment is worth the time, manpower, and expense of subjecting to a clinical trial.

    “Mega-trials are expensive to run, after all – and you have to wait many years for the outcome. And yes, the subsequent meta-analyses of the mega-trials will be even more accurate. But again you have to spend mega bucks and wait many, many years to know this.”

    Yes, I understand: the bigger the trial, the more reliable the result, and the longer it takes to run. On the other hand, a methodologically sound trial is no more expensive to run than a flawed trial. And a meta-analysis of a series of small but sound clinical trials would be equivalent to one large RCT.

    “Secondly clinical trial registration became largely compulsory years ago. There are now a number of these world-wide. You’re not living in a fantasy world but you could google some of this stuff and rest assured that what you’re thinking is already the gold standard of behaviour in trials.”

    I was referring to the *licensing* of trials by a panel of experts who would not grant the licence unless the submitted trial protocol was methodologically sound (and, if the trial isn’t licensed, it doesn’t get published). They would also have an educational role. This has never been done as far as I know.
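BillyJoe’s claim that several small but methodologically sound trials pool into the equivalent of one large RCT can be sketched with a fixed-effect, inverse-variance meta-analysis. This is a minimal illustration with made-up numbers, not anything computed in the thread:

```python
import math

def pool_fixed_effect(estimates, std_errors):
    """Fixed-effect (inverse-variance) meta-analysis.

    Each trial is weighted by 1/SE^2, so k small trials of n patients
    each carry the same total weight as one trial of k*n patients.
    """
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

# Five small trials, each estimating the same treatment effect (0.30)
# with SE = 0.10; the pooled SE shrinks by a factor of sqrt(5),
# exactly as if one trial five times the size had been run.
est, se = pool_fixed_effect([0.30] * 5, [0.10] * 5)
print(round(est, 3), round(se, 4))  # → 0.3 0.0447
```

The equivalence only holds when the small trials are unbiased, which is the GIGO point antipodean makes: pooling flawed trials just produces a precise-looking wrong answer.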

  24. anandamide says:

    Thank you for a challenging post and intriguing discussion. I’m not sure I agree with all your reasoning and assumptions, which of course makes it all the more interesting. BillyJoe, however, is doing a fine job of airing my reservations.

    Just one point which is sticking out like a sore thumb for me, re: the reliability of Cochrane reviews. You say of changing their conclusions;

    ‘perhaps. but it brings into question the results of their other metas’

    This sounds very close to arguments along the lines of ‘science has been wrong before’.

  25. antipodean says:

    BillyJoe

    In order to get the money in the first place, all such trials undergo extensive consultation and review. Much the same as anything humans do, this sometimes doesn’t work out as well as it should. It’s not like we want to waste years of our lives on screwing things up.

    Meta-analysis is always going to be garbage in garbage out (GIGO).

    So basically you’re telling clinical triallists to do a better job. We’re trying – this stuff is actually difficult, and sometimes the science can be less than ideal because of the logistics and reality intruding. We’re not running mice around a cage. It’s real people, in the real world, running wild, and it gets messy.

    The mindset changes when you’re like Dr Crislip. You have a patient in front of you and you don’t necessarily have the perfect science to back up your treatment decisions. You have to make the best of what you’ve got, for that patient, at that time-point. This is the art and science of medicine and it’s not necessarily a ‘bad thing’.

    In the meantime guys like me keep trying to provide guys like Mark with better info.

  26. BillyJoe says:

    antipodean,

    “In order to get the money in the first place, all such trials undergo extensive consultation and review. Much the same as anything humans do, this sometimes doesn’t work out as well as it should. It’s not like we want to waste years of our lives on screwing things up.”

    Out of 1416 trials only 29 were found to be suitable for inclusion in the meta-analysis. I’m not sure how much “consultation and review” went into those 1416 trials, but it was definitely an enormous waste in anyone’s language.

    “So basically you’re telling clinical triallists to do a better job. We’re trying – this stuff is actually difficult, and sometimes the science can be less than ideal because of the logistics and reality intruding.”

    According to Ben Goldacre, by any measure, it does not cost any more to do a trial properly. Do you disagree? It should be pretty straightforward for medical treatments, surely.

    “The mindset changes when you’re like Dr Crislip. You have a patient in front of you and you don’t necessarily have the perfect science to back up your treatment decisions. You have to make the best of what you’ve got, for that patient, at that time-point. This is the art and science of medicine and it’s not necessarily a ‘bad thing’.”

    That may be the case, but that is not the way he has portrayed meta-analysis. He simply stated that they are one of many ways to assess the usefulness of a therapy, and that 40% have been found to be unreliable. I had to go to the source to discover that these were meta-analyses of (most likely) methodologically flawed RCTs. I still don’t know whether he distrusts meta-analyses as a concept or whether he accepts that those he referred to were flawed and hence not a true reflection of the concept of meta-analysis.

    In fact, meta-analysis stands at the pinnacle of SBM, with RCTs close behind. But you have to hit the ball properly, otherwise it will end up in the bunker instead of on the green. All the other criteria he mentions are merely assessments of “prior probability”, in my opinion.

    regards,
    BillyJoe

  27. emcsun says:

    Tamiflu sucks. Just take the herb, Star Anise. Case closed.

  28. antipodean says:

    Dear BillyJoe

    There were not 1416 trials. 1416 articles were identified by the systematic search. The point of these searches is to make them sensitive enough to pick up every one of the actual trials. What they mostly pick up is junk that’s not relevant and you have to sift out the good ones. When you do this stuff in the real world you get lots of editorials, case reports, news reports, reviews, opinion pieces, cohorts etc etc. You then pull the relevant RCTs out and meta-analyse them.

    “According to Ben Goldacre, by any measure, it does not cost any more to do a trial properly. Do you disagree?”
    A crappy trial is often less expensive than a really good trial, so Ben is sort of right. As much as I respect Ben’s opinions, I’m not sure he has ever run a trial. But once again, it’s not like these are getting screwed up on purpose. Hindsight being 20/20 and all that…

    “It should be pretty straight forward for medical treatments surely. ”
    Like I said previously. No, it’s not. The real world is messy. This stuff is actually hard to do. I imagine it’s much harder to do in an infectious disease setting given natural history and diagnostic issues.

    And I think Mark and you are actually agreeing with one another. As with all analytic techniques meta-analysis is GIGO. You have to be able to critically evaluate your evidence when you are applying it to an individual patient right now. If the MA is based on crappy RCTs or isn’t quite relevant to your patient then it is not at the top of the evidence hierarchy and you’ll need to look for other flavours of evidence.

    And what’s wrong with using prior probability in medicine?

  29. BillyJoe says:

    You said there were not 1416 trials and that most of it is just junk. I accept that may be the case, but then why does Mark say:

    “They screened 1416 articles and ended up with 29 studies, 10 for effectiveness, the cream of the crop. 1416 articles is a lot of articles. I know they are not the crème de la crème, and is why I am not a fan on just relying on meta-analysis alone. That is a lot of ignored information.”

    Why would you not ignore junk?
    (And his third sentence confuses me no end! In fact that whole paragraph confuses me :( )

    I suspect that maybe Mark is saying the same as I am, but he has not said so and in the article he comes across as demoting MA. I think he should have qualified his commentary somewhat to lessen this impression.

    Okay, I understand it can be messy, but is there really any excuse any more for neglecting basics such as random allocation, double blinding, using a proper placebo control, using an N that has at least a chance of producing a reliable result, etc.?
    Do you see any role at all for *licensing* trials?
    Do you see any role at all for *licencing* trials?

    Nothing is wrong with “prior probability”. Sorry for giving that impression. Zero PP probably means you shouldn’t bother subjecting the treatment to a clinical trial (unless the treatment is already in widespread use). A low PP means the evidence from your clinical trials must be correspondingly strong (you’re not going to overturn science with a marginal result).
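
    The prior-probability point is essentially Bayes’ theorem, treating a positive trial like a diagnostic test. A rough sketch — the 0.80 “power” and 0.05 false-positive rate below are assumed round numbers, not figures from this thread:

    ```python
    def posterior(prior, sensitivity=0.80, false_positive=0.05):
        """P(treatment really works | one positive trial), via Bayes' theorem.
        sensitivity plays the role of power; false_positive the role of alpha."""
        p_positive = sensitivity * prior + false_positive * (1 - prior)
        return sensitivity * prior / p_positive

    # Plausible prior: one positive trial is fairly convincing.
    print(posterior(0.50))   # ~0.94
    # Near-zero prior (e.g. homeopathy): the same trial barely moves the needle.
    print(posterior(0.001))  # ~0.016
    ```

    With a near-zero prior, even a textbook-clean positive trial leaves the treatment almost certainly ineffective — which is why a low PP demands much stronger evidence than a marginal result.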

  30. antipodean says:

    I’m going to cut and paste my previous explanation of what a practicing clinician, such as Mark, is probably thinking (not wanting to put words in his mouth) when he says that MAs are not automatically the be-all and end-all of making good clinical decisions.

    “You have to be able to critically evaluate your evidence when you are applying it to an individual patient right now. If the MA is based on crappy RCTs or isn’t quite relevant to your patient then it is not at the top of the evidence hierarchy and you’ll need to look for other flavours of evidence.”

    It’s 1416 articles. Not necessarily studies. Not necessarily about this drug or about this disease. Not necessarily studies of effectiveness of any type. Not necessarily studies of humans even. Many are studies of the pharmacodynamics etc. Not that many actual clinical trials. This is completely normal when doing a systematic review and subsequent MA. But there are studies that are ignored by MA that are of immediate clinical utility and shouldn’t be ignored by a practicing clinician.

    Cohort studies in humans exposed to the drug are very important for clinicians to gauge side-effects, for instance. They won’t be in an MA but they are still valuable decision-making aids. Cohort studies in untreated patients give you the natural history (i.e. the harm in not treating) — also important, also not in an MA.

    To answer your question I don’t see a role for licensing of trials as this simply introduces a fourth or fifth layer of peer-review. The marginal cost-benefit of this is unclear to me. Registration, on the other hand, I am a staunch defender of.

    The FDA in some cases may already do something similar, when it wants additional info from a company before licensing a drug or implantable device.

  31. BillyJoe says:

    antipodean,

    Okay, at this stage I’m just thinking out loud and don’t expect a reply to this post. Maybe I need to keep my views on the back burner and see how it pans out as further information comes my way.

    “You have to be able to critically evaluate your evidence when you are applying it to an individual patient right now. If the MA is based on crappy RCTs or isn’t quite relevant to your patient then it is not at the top of the evidence hierarchy and you’ll need to look for other flavours of evidence.”

    Okay, I understand that he is probably criticising the quality of many MAs extending back into the past (hopefully, as you imply, this is no longer the case) rather than criticising MAs as a concept.

    “But there are studies that are ignored by MA that are of immediate clinical utility and shouldn’t be ignored by a practicing clinician.”

    So, what’s going on here? Are there no guidelines for what should be included? Is it just arbitrary, up to the person performing the MA to decide what goes in? Surely the MA should include all RCTs without methodological flaws sufficient to invalidate the conclusions.

    “Cohort studies [of treated and untreated patients are] also important”

    But as accurate as a methodologically sound RCT?

    “To answer your question I don’t see a role for licensing of trials as this simply introduces a fourth or fifth layer of peer-review. The marginal cost-benefit of this is unclear to me.”

    It would be the first layer, possibly making many of the other layers unnecessary. Surely it must be a good thing to start with methodologically sound (as far as that is possible) RCTs and go from there, rather than have to trawl through and evaluate each and every one of them for inclusion in your MA.

    “FDA in some cases may already do something similar when it wants additiona info from a company before licensing a drug/implantable device.”

    But it would already have the answer if the preceding trials were licensed.

  32. antipodean says:

    BillyJoe

    I’m starting to wonder whether you are arguing in good faith here, because you either didn’t read or have ignored my comment, quote-mined the hell out of it, and then thrown it up as a strawman.

    There are very important pieces of info in studies that do not qualify for MAs. They are not RCTs. But then RCTs are not able to answer all clinically relevant questions (go back and read the bit about natural history). There are different EBM hierarchies for different clinical questions. In terms of treatment effectiveness the RCT and subsequent MAs are supreme, as you keep correctly asserting in answer to the wrong questions. But if the patients in the MA don’t match your patient then you still have to apply a wider outlook.

    Licensing would add a 4th or 5th layer of peer review, not the first layer. It would be about the 3rd layer, after initial design and possibly funding. And it is already what the FDA sometimes do. And you won’t explain what the marginal benefit of this would be. Plus the FDA can’t have the answer to a question that has just been posed by having a trial ‘licensed’ previously to answer a different question through your hypothetical 20/20-foresight licensing program.

  33. antipodean says:

    P.S. The parachute MA is a great example of when you don’t need RCTs.

  34. BillyJoe says:

    antipodean,

    I am presently reading through some interesting articles on the SBM site which seem to me to be relevant to this topic. There seems to be something I do not understand or am confused about, which is probably why I seem to be ignoring some of your questions.
    Which is why I said in my last post:

    “Okay, at this stage I’m just thinking out loud and don’t expect a reply to this post. Maybe I need to keep my views on the back burner and see how it pans out as further information comes my way.”

    I will come back later to see if I can make sense of all this though I probably won’t add a comment because old posts quickly go off the radar around here.
    (It’s a pity we can’t get email notifications that bring us back to where we left off :()
