Articles

Archive for Clinical Trials

Isagenix Study Is Not Convincing

Isagenix is a wellness system sold by multilevel marketing. It consists of a suite of products to be used in various combinations for “nutritional cleansing,” detoxification, and supplementation to aid in weight loss, improve energy and performance, and support healthy aging. It allegedly burns fat while supporting lean muscle, maintains healthy cholesterol levels, supports telomeres, improves resistance to illness, reduces cravings, improves body composition, and slows the aging process. And makes millions for distributors who got on the bandwagon early and are high on the pyramid.

I have written about it before and have been roundly criticized by its proponents. It generated my all-time favorite insult: “Dr Harriet Hall is a refrigerator with a head.”

My biggest concern with Isagenix was that it had not been clinically tested. They claimed that clinical tests were in progress (funded by Isagenix). An e-mail correspondent recently told me I should take another look at Isagenix, since a clinical study had been completed. It had not yet been published, and I asked her to get back to me when it was. Ask and you shall receive (but you may be sorry!). She contacted me when the study by Kroeger et al. was published in the journal Nutrition and Metabolism. The full study is available online and I urge readers to click on the link and look at Table 2, which I will be referring to later. The journal is peer-reviewed but, as will become painfully obvious, the peer reviewers did not do a competent job. It is an open-access online journal with a low impact factor. The authors had to pay to get their article published: it cost them $1,805.

(more…)

Posted in: Clinical Trials, Herbs & Supplements

Leave a Comment (17) →

The NIH funding process: “Conformity” and “mediocrity”?

When we refer to “science-based medicine” (SBM), it is a very conscious choice to emphasize that good medicine should be based on a solid foundation of science. The name was coined to contrast the current evidence-based medicine (EBM) paradigm, which fetishizes randomized clinical trial evidence above all else and frequently ignores prior plausibility based on well-established basic science, with the SBM paradigm, which takes prior plausibility into account. The purpose of this post will not be to resurrect old discussions on these differences, but before I attend to the study at hand I bring this up to emphasize that progress in science-based medicine requires progress in science. That means all levels of biological (and even non-biological) basic science, which forms the foundation upon which translational science and clinical trials can be built. Without a robust pipeline of basic science progress upon which to base translational research and clinical trials, progress in SBM will slow and even grind to a halt.

That’s why, in the U.S., the National Institutes of Health (NIH) is so critical. The NIH funds large amounts of biomedical research each year, which means that what the NIH will and will not fund can’t help but have a profound effect shaping the pipeline of the basic and preclinical research that ultimately leads to new treatments and cures. Moreover, NIH funding has a profound effect on the careers of biomedical researchers and clinician-scientists, as having the “gold standard” NIH grant known as the R01 is viewed as a prerequisite for tenure and promotion in many universities and academic medical centers. Certainly this is the case for basic scientists; for clinician-scientists, an R01 is still highly prestigious, but failing to secure one is less of a career-killer. That’s why NIH funding levels and how hard (or easy) it is to secure an NIH grant, particularly an R01, are perennial obsessions among those of us in the biomedical research field. It can’t be otherwise, given the centrality of the NIH to research in the U.S.
(more…)

Posted in: Basic Science, Clinical Trials, Politics and Regulation

Leave a Comment (7) →

What does a new drug cost? Part II: The productivity problem

A few weeks ago I reviewed Ben Goldacre’s new book, Bad Pharma, an examination of the pharmaceutical industry, and more broadly, of the way new drugs are discovered, developed and brought to market. As I have noted before, despite the very different health systems that exist around the world, we all rely on private, for-profit pharmaceutical companies to supply drug products and also to bring newer, better therapies to market. It’s great when there are lots of new drugs appearing, and they’re affordable for consumers and health systems. But that doesn’t seem to be the case. Pipelines seem to be drying up, and the cost of new drugs is climbing. Manufacturers refer to the costs of drug development when explaining high drug prices: New drugs are expensive, we’re told, because developing drugs is a risky, costly, time-consuming endeavor. High prices, the argument goes, are what pay for innovative new treatments, both now and in the future. Research and development (R&D) costs are used to argue against strategies that could reduce company profitability (and presumably, future R&D), be it hospitals refusing to pay high drug costs, or changes to patent laws that determine when a generic drug can be marketed.

The overall costs of R&D are not the focus in Goldacre’s book, receiving only a short mention in the afterword, where he refers to the estimate of £500 million to bring a drug to market as “mythical and overstated.” He’s not alone in his skepticism. There’s a fair number of papers and analyses that have attempted to come up with a “true” estimate, and some authors argue the industry does not describe the true costs accurately or transparently enough to allow for objective evaluations. Some develop models independently, based on publicly available data. All models, however, must incorporate a range of assumptions that can influence the output. Over a year ago I reviewed a study by Light and Warburton, entitled Demythologizing the high costs of pharmaceutical research, which estimated R&D costs at a tiny $43.4 million per drug – not £500 million, or the $1 billion you may see quoted. Their estimates, however, were based on a sequence of highly implausible assumptions, meaning the “average” drug development costs are almost certainly higher in the real world. But how much higher isn’t clear. There have been at least eleven different studies published that estimate costs. Methods used range from direct data collection to aggregate industry estimates. Given the higher costs of new drugs, having an understanding of the drivers of development costs can help us understand just how efficiently this industry is performing. There are good reasons to be critical of the pharmaceutical industry. Are R&D costs one of them?
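Because the output of any such model is dominated by its assumptions, a toy calculation makes the spread of estimates less mysterious. The sketch below is my own illustration, not the method of any published study, and every number in it is invented: capitalize the out-of-pocket spending per candidate over the development time, then divide by the success rate.

```python
# Illustrative only: a toy model of capitalized R&D cost per approved drug.
# All inputs below are made-up assumptions, not data from any study.

def cost_per_approval(out_of_pocket_m, success_rate, years, cost_of_capital):
    """Capitalized cost per approved drug, in millions of dollars.

    out_of_pocket_m: cash spent per drug candidate (millions)
    success_rate:    fraction of candidates that reach approval
    years:           average development time, over which capital is tied up
    cost_of_capital: annual rate used to capitalize that tied-up money
    """
    capitalized = out_of_pocket_m * (1 + cost_of_capital) ** years
    return capitalized / success_rate

# Generous assumptions give a Light/Warburton-sized figure...
print(cost_per_approval(20, 0.5, 3, 0.0))     # 40.0 (million)
# ...while pessimistic ones approach the billion-dollar industry estimates.
print(cost_per_approval(100, 0.2, 8, 0.11))   # ~1152 (million)
```

The same out-of-pocket arithmetic can thus land anywhere from tens of millions to over a billion dollars, depending entirely on the assumed success rate, timeline, and cost of capital.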

(more…)

Posted in: Clinical Trials, Pharmaceuticals, Politics and Regulation

Leave a Comment (20) →

Now that Burzynski has gotten off in 2012, Burzynski The Movie will spawn a sequel in 2013

About a year ago, I became interested in a physician named Stanislaw Burzynski who has been treating cancer with compounds that he calls “antineoplastons” for over three decades without, in my opinion, ever having produced any compelling evidence that antineoplastons have significant anticancer activity. Although I had been vaguely aware of Burzynski and his activities, it was the first time that I had looked into them in a big way.

Having found very few skeptical, science-based takes on Burzynski and having noted that the Quackwatch entries on Burzynski (1, 2, 3) were hopelessly out of date, I wrote a trilogy of posts about him, starting with a review of an execrably bad movie made by a simultaneously credulous yet cynical independent writer, producer, and director named Eric Merola, whose primary business, appropriately enough, is mainly marketing. The movie was Burzynski The Movie: Cancer Is A Serious Business, a “documentary” (and I’m being polite here) that I characterized at the time as a bad movie and bad P.R. In brief, I saw this movie as a hagiography, a propaganda film so ham-fisted that, if she were still alive, it would easily make Leni Riefenstahl simultaneously blush at its blatantness and feel nauseated at how truly awful it was from a strictly filmmaking standpoint. It was also chock full of highly dubious science, in particular Burzynski’s latest venture, selling “personalized gene-targeted cancer therapy” so lacking in oncological insight that I observed at the time that it was as though Dr. Burzynski had read a book called Personalized Cancer Therapy for Dummies and decided he was an expert in genomics-based tailoring of targeted therapies to individual cancer patients. Finally, I completed the trilogy by pointing out that lately Burzynski has been rebranding an orphan drug that showed mild to moderate promise as an anticancer therapy.
(more…)

Posted in: Book & movie reviews, Cancer, Clinical Trials

Leave a Comment (19) →

Journal of Clinical Oncology editorial: “Compelling” evidence acupuncture “may be” effective for cancer related fatigue

Journal of Clinical Oncology (JCO) is a high impact journal (JIF > 16) that advertises itself as a “must read” for oncologists. Some cutting-edge RCTs evaluating chemo and hormonal therapies have appeared there. But a past blog post gave dramatic examples of pseudoscience and plain nonsense to be found in JCO concerning psychoneuroimmunology (PNI) and, increasingly, integrative medicine and even integrations of integrative medicine and PNI. The prestige of JCO has made it a major focus for efforts to secure respectability and third-party payments for CAM treatments by promoting their scientific status and effectiveness.

Once articles are published in JCO, authors can escape critical commentary by simply refusing to respond, taking advantage of an editorial policy that requires a response in order for critical commentaries to be published. An author’s refusal to respond means criticism cannot be published.

Some of the most outrageous incursions of woo science into JCO are accompanied by editorials that enjoy further relaxation of any editorial restraint and peer review. Accompanying editorials are a form of privileged-access publishing, often written by reviewers who have strongly recommended the article for publication and who have their own PNI and CAM studies to promote with citations in JCO.

Because of strict space limitations, controversial statements can simply be declared, rather than elaborated in arguments in which holes could be poked. A faux authority is created. Once claims make it into JCO, their sources are forgotten and only the appearance in a “must read,” high impact journal is remembered. A shoddy form of scholarship becomes possible in which JCO can be cited for statements that would be recognized as ridiculous if accompanied by a citation of their origin in a CAM journal. And what reader tracks down and examines the original sources for numbered citations, anyway?
(more…)

Posted in: Acupuncture, Cancer, Clinical Trials, Energy Medicine, Neuroscience/Mental Health, Traditional Chinese Medicine

Leave a Comment (13) →

Ecstasy for PTSD: Not Ready for Prime Time

Hundreds of desperate combat veterans with Post-Traumatic Stress Disorder (PTSD) are reportedly seeking experimental treatment with an illegal drug from a husband-wife team in South Carolina. The Mithoefers recently published a study showing that adding MDMA (ecstasy, the party drug) to psychotherapy was effective in eliminating or greatly reducing the symptoms of refractory PTSD. It was widely covered in the media, for instance in this article in the NY Times. It was only a small preliminary study, and the treatment is not yet ready for prime time; but media reports have sparked enthusiasm not justified by the evidence. (more…)

Posted in: Clinical Trials, Neuroscience/Mental Health

Leave a Comment (24) →

Anecdotes: Cheaper by the Dozen

A loan officer sets up a meeting with an aspiring entrepreneur to inform him that his application has been denied. “Mr. Smith, we have reviewed your application and found a fatal flaw in your business plan. You say that you will be selling your donuts for 60 cents apiece.” “Yes,” says Mr. Smith, “that is significantly less than any other baker in town. This will give my business a significant competitive advantage!” The loan officer replies, “According to your budget, at peak efficiency the cost of supplies to make each donut is 75 cents; you will lose 15 cents on every donut you sell.” A look of relief comes over Mr. Smith’s face as he realizes the loan officer’s misunderstanding. He leans in closer, and whispers to the loan officer, “But don’t you see, I’ll make it up in volume.”

If you find this narrative at all amusing, it is likely because Mr. Smith is oblivious to what seems like an obvious flaw in his logic.

A similar error in logic is made by those who rely on anecdote and other intrinsically biased information to understand the natural world. If one anecdote is biased, a collection of 12 or 1000 anecdotes multiplies the bias, and will likely reinforce an errant conclusion. When it comes to bias, you can’t make it up in volume. Volume makes it worse!

Unfortunately, human beings are intrinsically vulnerable to bias. In most day-to-day decisions, like choosing which brand of toothpaste to buy, or which route to drive to work, these biases are of little importance. In making critical decisions, like assessing the effectiveness of a new treatment for cancer, these biases may make the difference between life and death. The scientific method is defined by a system of practices that aim to minimize bias in the assessment of a problem.

Bias, in general, is a tendency that prevents unprejudiced consideration of a question (paraphrased from dictionary.com). Researchers describe sources of bias as systematic errors. A few words about random and systematic errors will make this description clearer.
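To make the point concrete, here is a quick simulation (a hypothetical sketch of my own, not part of the formal definition): as the number of observations grows, random error washes out, but a systematic error survives untouched.

```python
import random

random.seed(1)

TRUE_EFFECT = 0.0   # the remedy actually does nothing
BIAS = 0.5          # a constant systematic error (e.g., placebo response)

def estimate(n):
    """Average of n observations, each = truth + bias + random noise."""
    return sum(TRUE_EFFECT + BIAS + random.gauss(0, 1) for _ in range(n)) / n

for n in (12, 1_000, 100_000):
    print(f"{n:>7} anecdotes -> estimated effect {estimate(n):+.3f}")

# Random error shrinks as n grows, so the estimate converges -- but to
# BIAS (0.5), not to the true effect (0). Volume cannot fix bias.
```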
(more…)

Posted in: Clinical Trials, Science and Medicine

Leave a Comment (31) →

Getting NCCAM’s money’s worth: Some results of NCCAM-funded studies of homeopathy

As hard as it is to believe, the Science-Based Medicine blog that you’re so eagerly reading is fast approaching its fifth anniversary of existence. The very first post here was a statement of purpose by Steve Novella on January 1, 2008, and my very first post was a somewhat rambling introduction that in retrospect is mildly embarrassing to me. It is what it is, however. The reason I mention this is because I want to take a trip down memory lane in order to follow up on one of my earliest posts for SBM, which was entitled The National Center for Complementary and Alternative Medicine (NCCAM): Your tax dollars hard at work. Specifically, I want to follow up on one specific study I mentioned that was funded by NCCAM.

Even though I not-so-humbly think that, even nearly five years later, my original post is worth reading in its entirety (weighing in at only 3,394 words, it’s even rather short—for me, at least), I’ll spare you that and cut straight to the chase, the better to discuss the study. It is a study of homeopathy. Yes, in contrast to the protestations of Dr. Josephine Briggs, the current director of NCCAM, that NCCAM doesn’t fund studies of such pure pseudoscience as homeopathy anymore (although she does apparently meet with homeopaths for “balance”), prior to Dr. Briggs’ tenure NCCAM actually did fund studies of the magic water with mystical memory known as homeopathy. Two grants in particular I singled out for scorn. The principal investigator for both grants was Iris Bell, who is faculty at Andrew Weil’s center of woo at the University of Arizona. The first was an R21 grant for a project entitled Polysomnography in homeopathic remedy effects (NIH grant 1 R21 AT000388).
(more…)

Posted in: Basic Science, Clinical Trials, Homeopathy

Leave a Comment (11) →

“Moneyball,” the 2012 election, and science- and evidence-based medicine

Regular readers of my other blog probably know that I’m into more than just science, skepticism, and promoting science-based medicine (SBM). I’m also into science fiction, computers, and baseball, not to mention politics (at least more than average). That’s why our recent election, coming as it did hot on the heels of the World Series in which my beloved Detroit Tigers utterly choked, got me to thinking. Actually, it was more than just that. It was also an article that appeared a couple of weeks before the election in the New England Journal of Medicine entitled Moneyball and Medicine, by Christopher J. Phillips, PhD, Jeremy A. Greene, MD, PhD, and Scott H. Podolsky, MD. In it, they compare what they call “evidence-based” baseball to “evidence-based medicine,” something that is not as far-fetched as one might think.

“Moneyball,” as baseball fans know, refers to a book by Michael Lewis entitled Moneyball: The Art of Winning an Unfair Game. Published in 2003, Moneyball is the story of the Oakland Athletics and their general manager Billy Beane and how the A’s managed to field a competitive team even though the organization was—shall we say?—”revenue challenged” compared to big market teams like the New York Yankees. The central premise of the book was that the collective wisdom of baseball leaders, such as managers, coaches, scouts, owners, and general managers, was flawed and too subjective. Using rigorous statistical analysis, the A’s front office determined various metrics that were better predictors of offensive success than previously used indicators. For example, conventional wisdom at the time valued stolen bases, runs batted in, and batting average, but the A’s determined that on-base percentage and slugging percentage were better predictors, and cheaper to obtain on the free market, to boot. As a result, the 2002 Athletics, with a payroll of $41 million (the third lowest in baseball), were able to compete in the market against teams like the Yankees, which had a payroll of $125 million. The book also discussed the A’s farm system and how it determined which players were more likely to develop into solid major league players, as well as the history of sabermetric analysis, a term coined by one of its pioneers, Bill James, after SABR, the Society for American Baseball Research. Sabermetrics is basically concerned with determining the value of a player or team in current or past seasons and with predicting the value of a player or team in the future.
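For readers unfamiliar with the statistics, the two favored metrics are simple arithmetic. A quick snippet (illustrative only; the season line below is invented):

```python
def obp(hits, walks, hbp, at_bats, sac_flies):
    """On-base percentage: times on base per plate appearance that counts."""
    return (hits + walks + hbp) / (at_bats + walks + hbp + sac_flies)

def slg(singles, doubles, triples, home_runs, at_bats):
    """Slugging percentage: total bases per at bat."""
    total_bases = singles + 2 * doubles + 3 * triples + 4 * home_runs
    return total_bases / at_bats

# An invented season line: 150 hits (100 singles, 30 doubles, 5 triples,
# 15 home runs), 70 walks, 5 hit-by-pitch, 500 at bats, 5 sacrifice flies.
print(f"OBP: {obp(150, 70, 5, 500, 5):.3f}")    # 0.388
print(f"SLG: {slg(100, 30, 5, 15, 500):.3f}")   # 0.470
```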
(more…)

Posted in: Clinical Trials, Politics and Regulation, Science and Medicine, Science and the Media

Leave a Comment (47) →

It’s time for true transparency of clinical trials data

What makes a health professional science-based? We advocate for evaluations of treatments, and treatment decisions, based on the best research methods. We compile evidence based on fair trials that minimize the risks of bias. And, importantly, we consider this evidence in the context of the plausibility of the treatment. The fact is, it’s actually not that hard to get a positive result in a trial, especially when it’s sloppily done or biased. And there are many ways to design a trial to demonstrate positive results in some subgroup, as Kimball Atwood pointed out earlier this week. And even when a trial is well done, there remains the risk of error simply due to chance alone. So to sort out true treatment effects from fake effects, two key steps are helpful in reviewing the evidence.

1. Take prior probability into account when assessing data. While a detailed explanation of Bayes’ Theorem could take several posts, consider prior probability this way: Any test has flaws and limitations. Tests give probabilities based on the test method itself, not on what is being tested. Consequently, in order to evaluate the probability of “x” given a test result, we must incorporate the pre-test probability of “x”. Bayesian analysis uses any existing data, plus the data collected in the test, to give a prediction that factors in prior probabilities; a short numerical sketch follows this list. It’s part of the reason why most published research findings are false.

2. Use systematic reviews to evaluate all the evidence. The best way to answer a specific clinical question is to collect all the potentially relevant information in a structured way, consider its quality, analyze it according to predetermined criteria, and then draw conclusions. A systematic review reduces the risk of cherry picking and author bias, compared to non-systematic data-collection or general literature reviews of evidence. A well-conducted systematic review will give us an answer based on the totality of evidence available, and is the best possible answer for a given question.
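Here is that sketch for point 1: a small hypothetical calculation of my own (the priors, power, and false-positive rate are all invented for illustration), showing how the same positive trial supports very different conclusions depending on prior plausibility.

```python
def posterior(prior, power=0.80, false_positive=0.05):
    """P(treatment works | positive trial), via Bayes' theorem."""
    true_positives = prior * power
    false_positives = (1 - prior) * false_positive
    return true_positives / (true_positives + false_positives)

# A low-plausibility treatment (say 1 in 100 actually works):
print(f"{posterior(0.01):.2f}")  # 0.14 -- a positive trial means little
# A biologically plausible one (say 1 in 2):
print(f"{posterior(0.50):.2f}")  # 0.94 -- the same result is convincing
```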

These two steps are critically important, and so have been discussed repeatedly by the contributors to this blog. What is obvious, but perhaps not as well understood, is how our reviews can still be significantly flawed, despite best efforts. In order for our evaluation to accurately consider prior probability, and to be systematic, we need all the evidence. Unfortunately, that’s not always possible if clinical trials remain unpublished or are otherwise inaccessible. There is good evidence to show that negative studies are less likely to be published than positive studies. Sometimes called the “file drawer” effect, it’s not solely the fault of investigators, as journals seeking positive results may decline to publish negative studies. But unless these studies are found, systematic reviews are more likely to miss negative data, which means there’s the risk of bias in favor of an intervention. How bad is the problem? We really have no complete way to know, for any particular clinical question, just how much is missing or buried. This is a problem that has confounded researchers and authors of systematic reviews for decades.
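A toy simulation suggests how much damage the file drawer can do (my own sketch; all numbers invented). Give a useless treatment many small trials, publish the nominally positive ones plus only a fraction of the rest, and the published record shows a benefit that isn’t there:

```python
import random, statistics

random.seed(42)

# 200 small trials of a treatment with zero true effect; each trial's
# estimated effect (as a z-score) is pure noise around 0.
trials = [random.gauss(0, 1) for _ in range(200)]

# Journals publish the nominally "significant" positive results (z > 1.96)
# plus, say, only 20% of everything else.
published = [z for z in trials if z > 1.96 or random.random() < 0.20]

print(f"Mean effect, all trials:       {statistics.mean(trials):+.2f}")
print(f"Mean effect, published trials: {statistics.mean(published):+.2f}")
# The published literature alone suggests a benefit that does not exist.
```

(more…)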

Posted in: Clinical Trials, Politics and Regulation

Leave a Comment (18) →