Archive for Clinical Trials

Now that Burzynski has gotten off in 2012, Burzynski The Movie will spawn a sequel in 2013

About a year ago, I became interested in a physician named Stanislaw Burzynski who has been treating cancer with compounds that he calls “antineoplastons” for over three decades without, in my opinion, ever having produced any compelling evidence that antineoplastons have significant anticancer activity. Although I had been vaguely aware of Burzynski and his activities, it was the first time that I had looked into them in a big way.

Having found very few skeptical, science-based takes on Burzynski, and having noted that the Quackwatch entries on Burzynski (1, 2, 3) were hopelessly out of date, I wrote a trilogy of posts about him, starting with a review of an execrably bad movie made by a simultaneously credulous yet cynical independent writer, producer, and director named Eric Merola, whose primary business, appropriately enough, is marketing. The movie was Burzynski The Movie: Cancer Is A Serious Business, a “documentary” (and I’m being polite here) that I characterized at the time as a bad movie and bad P.R. In brief, I saw this movie as a hagiography, a propaganda film so ham-fisted that, were she still alive, it would simultaneously make Leni Riefenstahl blush at its blatantness and feel nauseated at how truly awful it was from a strictly filmmaking standpoint. It was also chock full of highly dubious science, in particular Burzynski’s latest venture, selling “personalized gene-targeted cancer therapy” similarly lacking in oncological insight, so much so that I observed at the time that it was as though Dr. Burzynski had read a book called Personalized Cancer Therapy for Dummies and decided he was an expert in genomics-based tailoring of targeted therapies to individual cancer patients. Finally, I completed the trilogy by pointing out that lately Burzynski has been rebranding an orphan drug that showed mild to moderate promise as an anticancer therapy.

Posted in: Book & movie reviews, Cancer, Clinical Trials

Leave a Comment (19) →

Journal of Clinical Oncology editorial: “Compelling” evidence acupuncture “may be” effective for cancer-related fatigue

Journal of Clinical Oncology (JCO) is a high-impact journal (JIF > 16) that advertises itself as a “must read” for oncologists. Some cutting-edge RCTs evaluating chemo and hormonal therapies have appeared there. But a past blog post gave dramatic examples of pseudoscience and plain nonsense to be found in JCO concerning psychoneuroimmunology (PNI) and, increasingly, integrative medicine, and even integrations of integrative medicine and PNI. The prestige of JCO has made it a major focus for efforts to secure respectability and third-party payments for CAM treatments by promoting their scientific status and effectiveness.

Once articles are published in JCO, authors can escape critical commentary by simply refusing to respond, taking advantage of an editorial policy that requires a response in order for critical commentaries to be published. An author’s refusal to respond means criticism cannot be published.

Some of the most outrageous incursions of woo science into JCO are accompanied by editorials that enjoy a further relaxation of editorial restraint and peer review. Accompanying editorials are a form of privileged-access publishing, often written by reviewers who strongly recommended the article for publication and who have their own PNI and CAM studies to promote with a citation in JCO.

Because of strict space limitations, controversial statements can simply be declared, rather than elaborated in arguments in which holes could be poked. A faux authority is created. Once claims make it into JCO, their sources are forgotten and only the appearance in a “must read,” high-impact journal is remembered. A shoddy form of scholarship becomes possible in which JCO can be cited for statements that would be recognized as ridiculous if accompanied by a citation of their origin in a CAM journal. And how many readers track down and examine original sources for numbered citations, anyway?

Posted in: Acupuncture, Cancer, Clinical Trials, Energy Medicine, Neuroscience/Mental Health, Traditional Chinese Medicine

Leave a Comment (13) →

Ecstasy for PTSD: Not Ready for Prime Time

Hundreds of desperate combat veterans with Post-Traumatic Stress Disorder (PTSD) are reportedly seeking experimental treatment with an illegal drug from a husband-and-wife team in South Carolina. The Bonhoefers recently published a study showing that adding MDMA (ecstasy, the party drug) to psychotherapy was effective in eliminating or greatly reducing the symptoms of refractory PTSD. It was widely covered in the media, for instance in this article in the NY Times. It was only a small preliminary study, and the treatment is not yet ready for prime time; but media reports have sparked enthusiasm not justified by the evidence. (more…)

Posted in: Clinical Trials, Neuroscience/Mental Health

Leave a Comment (24) →

Anecdotes: Cheaper by the Dozen

A loan officer sets up a meeting with an aspiring entrepreneur to inform him that his application has been denied. “Mr. Smith, we have reviewed your application and found a fatal flaw in your business plan. You say that you will be selling your donuts for 60 cents apiece.” “Yes,” says Mr. Smith, “that is significantly less than any other baker in town. This will give my business a significant competitive advantage!” The loan officer replies, “According to your budget, at peak efficiency the cost of supplies to make each donut is 75 cents; you will lose 15 cents on every donut you sell.” A look of relief comes over Mr. Smith’s face as he realizes the loan officer’s misunderstanding. He leans in closer and whispers to the loan officer, “But don’t you see, I’ll make it up in volume.”

If you find this narrative at all amusing, it is likely because Mr. Smith is oblivious to what seems like an obvious flaw in his logic.

A similar error in logic is made by those who rely on anecdotes and other intrinsically biased information to understand the natural world. If one anecdote is biased, a collection of 12 or 1,000 anecdotes multiplies the bias and will likely reinforce an errant conclusion. When it comes to bias, you can’t make it up in volume. Volume makes it worse!
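The point that volume amplifies rather than cancels bias can be shown with a quick simulation (the numbers here are invented purely for illustration, not drawn from any study): averaging more and more systematically biased anecdotes converges ever more precisely on the wrong answer.

```python
import random

random.seed(42)

TRUE_EFFECT = 0.0   # the treatment actually does nothing
BIAS = 0.3          # systematic bias: only satisfied patients report back

def biased_anecdote():
    """One anecdote: random noise plus a fixed systematic bias."""
    return TRUE_EFFECT + BIAS + random.gauss(0, 0.5)

for n in (12, 1000, 100000):
    apparent = sum(biased_anecdote() for _ in range(n)) / n
    print(f"{n:>6} anecdotes -> apparent effect {apparent:+.3f}")
```

The average homes in on TRUE_EFFECT + BIAS (0.3), not the true effect (0.0); more anecdotes buy only false precision around the wrong answer. Random error shrinks with volume, systematic error does not.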

Unfortunately, human beings are intrinsically vulnerable to bias. In most day-to-day decisions, like choosing which brand of toothpaste to buy or which route to drive to work, these biases are of little importance. In making critical decisions, like assessing the effectiveness of a new treatment for cancer, these biases may make the difference between life and death. The scientific method is defined by a system of practices that aim to minimize bias in the assessment of a problem.

Bias, in general, is a tendency that prevents unprejudiced consideration of a question. Researchers describe sources of bias as systematic errors. A few words about random and systematic errors will make this description clearer.

Posted in: Clinical Trials, Science and Medicine

Leave a Comment (31) →

Getting NCCAM’s money’s worth: Some results of NCCAM-funded studies of homeopathy

As hard as it is to believe, the Science-Based Medicine blog that you’re so eagerly reading is fast approaching its fifth anniversary of existence. The very first post here was a statement of purpose by Steve Novella on January 1, 2008, and my very first post was a somewhat rambling introduction that in retrospect is mildly embarrassing to me. It is what it is, however. The reason I mention this is because I want to take a trip down memory lane in order to follow up on one of my earliest posts for SBM, which was entitled The National Center for Complementary and Alternative Medicine (NCCAM): Your tax dollars hard at work. Specifically, I want to follow up on one specific study I mentioned that was funded by NCCAM.

Even though I not-so-humbly think that, even nearly five years later, my original post is worth reading in its entirety (weighing in at only 3,394 words, it’s even rather short—for me, at least), I’ll spare you that and cut straight to the chase, the better to discuss the study. It is a study of homeopathy. Yes, in contrast to the protestations of Dr. Josephine Briggs, the current director of NCCAM, that NCCAM doesn’t fund studies of such pure pseudoscience as homeopathy anymore (although she does apparently meet with homeopaths for “balance”), prior to Dr. Briggs’ tenure NCCAM actually did fund studies of the magic water with mystical memory known as homeopathy. Two grants in particular I singled out for scorn. The principal investigator for both grants was Iris Bell, who is faculty at Andrew Weil’s center of woo at the University of Arizona. The first was an R21 grant for a project entitled Polysomnography in homeopathic remedy effects (NIH grant 1 R21 AT000388).

Posted in: Basic Science, Clinical Trials, Homeopathy

Leave a Comment (11) →

“Moneyball,” the 2012 election, and science- and evidence-based medicine

Regular readers of my other blog probably know that I’m into more than just science, skepticism, and promoting science-based medicine (SBM). I’m also into science fiction, computers, and baseball, not to mention politics (at least more than average). That’s why our recent election, coming as it did hot on the heels of the World Series, in which my beloved Detroit Tigers utterly choked, got me to thinking. Actually, it was more than just that. It was also an article that appeared a couple of weeks before the election in the New England Journal of Medicine entitled Moneyball and Medicine, by Christopher J. Phillips, PhD, Jeremy A. Greene, MD, PhD, and Scott H. Podolsky, MD. In it, they compare what they call “evidence-based” baseball to “evidence-based medicine,” something that is not as far-fetched as one might think.

“Moneyball,” as baseball fans know, refers to a book by Michael Lewis entitled Moneyball: The Art of Winning an Unfair Game. Published in 2003, Moneyball is the story of the Oakland Athletics and their general manager Billy Beane and how the A’s managed to field a competitive team even though the organization was—shall we say?—”revenue challenged” compared to big market teams like the New York Yankees. The central premise of the book was that the collective wisdom of baseball leaders, such as managers, coaches, scouts, owners, and general managers, was flawed and too subjective. Using rigorous statistical analysis, the A’s front office determined various metrics that were better predictors of offensive success than previously used indicators. For example, conventional wisdom at the time valued stolen bases, runs batted in, and batting average, but the A’s determined that on-base percentage and slugging percentage were better predictors, and cheaper to obtain on the free market, to boot. As a result, the 2002 Athletics, with a payroll of $41 million (the third lowest in baseball), were able to compete in the market against teams like the Yankees, which had a payroll of $125 million. The book also discussed the A’s farm system and how it determined which players were more likely to develop into solid major league players, as well as the history of sabermetric analysis, a term coined by one of its pioneers, Bill James, after SABR, the Society for American Baseball Research. Sabermetrics is basically concerned with determining the value of a player or team in current or past seasons and with predicting the value of a player or team in the future.
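For the curious, the two metrics the A’s front office favored are simple ratios. Here is a minimal sketch using the standard definitions (the player’s stat line below is made up purely for illustration):

```python
def on_base_pct(h, bb, hbp, ab, sf):
    """OBP: times reaching base per plate appearance (standard definition)."""
    return (h + bb + hbp) / (ab + bb + hbp + sf)

def slugging_pct(singles, doubles, triples, hr, ab):
    """SLG: total bases per at-bat."""
    total_bases = singles + 2 * doubles + 3 * triples + 4 * hr
    return total_bases / ab

# Hypothetical season line: 500 AB, 150 H (100 singles, 30 2B, 5 3B, 15 HR),
# 60 walks, 5 hit-by-pitch, 5 sacrifice flies.
obp = on_base_pct(h=150, bb=60, hbp=5, ab=500, sf=5)
slg = slugging_pct(singles=100, doubles=30, triples=5, hr=15, ab=500)
print(f"OBP {obp:.3f}  SLG {slg:.3f}")  # OBP 0.377  SLG 0.470
```

Note that a .300 batting average tells you nothing about the 60 walks; OBP and SLG capture contributions that batting average throws away, which is why they predict run production better.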

Posted in: Clinical Trials, Politics and Regulation, Science and Medicine, Science and the Media

Leave a Comment (47) →

It’s time for true transparency of clinical trials data

What makes a health professional science-based? We advocate for evaluations of treatments, and treatment decisions, based on the best research methods. We compile evidence based on fair trials that minimize the risks of bias. And, importantly, we consider this evidence in the context of the plausibility of the treatment. The fact is, it’s actually not that hard to get a positive result in a trial, especially when it’s sloppily done or biased. And there are many ways to design a trial to demonstrate positive results in some subgroup, as Kimball Atwood pointed out earlier this week. And even when a trial is well done, there remains the risk of error simply due to chance alone. So to sort out true treatment effects from false effects, two key steps are helpful in reviewing the evidence.

1. Take prior probability into account when assessing data. While a detailed explanation of Bayes Theorem could take several posts, consider prior probability this way: Any test has flaws and limitations. Tests give probabilities based on the test method itself, not on what is being tested. Consequently, in order to evaluate the probability of “x” given a test result, we must incorporate the pre-test probability of “x”. Bayesian analysis uses any existing data, plus the data collected in the test, to give a prediction that factors in prior probabilities. It’s part of the reason why most published research findings are false.

2. Use systematic reviews to evaluate all the evidence. The best way to answer a specific clinical question is to collect all the potentially relevant information in a structured way, consider its quality, analyze it according to predetermined criteria, and then draw conclusions. A systematic review reduces the risk of cherry picking and author bias, compared to non-systematic data-collection or general literature reviews of evidence. A well-conducted systematic review will give us an answer based on the totality of evidence available, and is the best possible answer for a given question.
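A toy calculation shows how step 1 plays out. The priors and trial characteristics below are hypothetical, chosen only to illustrate the arithmetic of Bayes’ theorem, not taken from any particular study:

```python
def posterior(prior, power=0.8, alpha=0.05):
    """P(effect is real | trial is 'positive'), by Bayes' theorem.

    power: P(positive trial | real effect)
    alpha: P(positive trial | no real effect), the false-positive rate
    """
    p_positive = power * prior + alpha * (1 - prior)
    return power * prior / p_positive

# Hypothetical prior probabilities: a plausible drug, a long shot,
# and a homeopathy-grade implausibility.
for prior in (0.5, 0.1, 0.001):
    print(f"prior {prior:>5}: P(real | positive trial) = {posterior(prior):.3f}")
```

With a prior of 0.001, even a statistically “significant” trial leaves better than 98% odds that the result is a false positive; that is the arithmetic behind the claim that most published research findings are false.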

These two steps are critically important, and so have been discussed repeatedly by the contributors to this blog. What is obvious, but perhaps not as well understood, is how our reviews can still be significantly flawed, despite best efforts. In order for our evaluation to accurately consider prior probability, and to be systematic, we need all the evidence. Unfortunately, that’s not always possible if clinical trials remain unpublished or are otherwise inaccessible. There is good evidence to show that negative studies are less likely to be published than positive studies. Sometimes called the “file drawer” effect, it’s not solely the fault of investigators, as journals seeking positive results may decline to publish negative studies. But unless these studies are found, systematic reviews are more likely to miss negative data, which means there’s the risk of bias in favor of an intervention. How bad is the problem? We really have no complete way to know, for any particular clinical question, just how much is missing or buried. This is a problem that has confounded researchers and authors of systematic reviews for decades. (more…)
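The file-drawer effect can be sketched in a few lines of simulation (the effect size and publication rates here are invented for illustration): even when a treatment does nothing, preferentially publishing “positive” trials leaves the published literature showing an apparent benefit.

```python
import random
from statistics import mean

random.seed(1)

TRUE_EFFECT = 0.0   # the treatment does nothing
SE = 1.0            # standard error of each trial's effect estimate

def run_trial():
    """One trial's observed effect: the truth plus sampling noise."""
    return random.gauss(TRUE_EFFECT, SE)

trials = [run_trial() for _ in range(10000)]

# File-drawer effect: 'positive' trials (z > 1.96) always get published;
# everything else escapes the drawer only 20% of the time.
published = [t for t in trials if t / SE > 1.96 or random.random() < 0.20]

print(f"all trials:       mean effect {mean(trials):+.3f}")
print(f"published trials: mean effect {mean(published):+.3f}")
```

The full set of trials averages out near the true effect of zero, while the published subset shows a spurious positive effect of roughly +0.2 standard errors; a systematic review of the published literature alone would inherit that bias.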

Posted in: Clinical Trials, Politics and Regulation

Leave a Comment (18) →

The result of the Trial to Assess Chelation Therapy (TACT): As underwhelming as expected

Chelation therapy.

It’s one of the most common quackeries out there, used by a wide variety of practitioners for a wide variety of ailments blamed on “heavy metal toxicity.” Chelation therapy, which involves using chemicals that bind metal ions and allow them to be excreted by the kidneys, is actually standard therapy for certain types of heavy metal poisoning and overload, such as iron overload due to transfusion, aluminum overload due to hemodialysis, copper toxicity due to Wilson’s disease, and a handful of other indications.

My personal interest in chelation therapy developed out of its use by unscrupulous practitioners who blamed autism on the mercury-containing thimerosal preservative that used to be in many childhood vaccines until 2001 but has since all but disappeared from such vaccines except for one vaccine (the flu vaccine, for which a thimerosal-free alternative is available) and in trace amounts in some other vaccines. Mercury became a convenient bogeyman to add to the list of “toxins” antivaccinationists hype in vaccines. In fact, my very first post after I introduced myself on this very blog discussed the idea that mercury in vaccines was a significant cause of autism and autism spectrum disorders, and I’ve periodically written about such things ever since, in particular the bad science of Mark and David Geier, whose idea that chemical castration of children with Lupron “works” against “mercury-induced” autism is based on a chemically ridiculous idea that somehow testosterone binds mercury and makes it harder to chelate. Unfortunately, this particular autism quackery has real consequences and has been responsible for the death of a child.

Chelation isn’t just for autism, however. Despite many practitioners advertising it for autism, cancer (often with dubious studies that I might have to take a look at), Alzheimer’s disease (which Hugh Fudenberg has blamed on the flu vaccine, a claim parroted by Bill Maher, of course!), and just about every ailment under the sun, it’s easy to forget that the original use for chelation therapy promoted by “alternative medicine” practitioners was for cardiovascular disease. When it is used for coronary artery disease or autism, on a strictly stoichiometric and pharmacological basis, it is extremely implausible. Moreover, it is not without potential complications, including renal damage and cardiac arrhythmias due to sudden drops in calcium levels. Such arrhythmias can lead, and have led, to death in children; in adults, complications have included renal failure and death.

Posted in: Clinical Trials

Leave a Comment (29) →

The Trial to Assess Chelation Therapy: Equivocal as Predicted

The ill-advised, NIH-sponsored Trial to Assess Chelation Therapy (TACT) is finally over. A total of 839 human subjects were randomized to receive Na2EDTA infusions; 869 were randomized to receive placebo infusions. The results were announced at this weekend’s American Heart Association meeting in Los Angeles. In summary, the TACT authors report a slight advantage for chelation over placebo in the “primary composite endpoint,” a combination of five separate outcomes: death, myocardial infarction, stroke, coronary revascularization, and hospitalization for angina.


Although that result may seem intriguing, it becomes less so when the data are examined more carefully. First, it barely achieved the pre-ordained level of statistical significance, which was P=.036. Second, none of the individual components of the composite endpoint achieved statistical significance, and most of the absolute difference was in coronary revascularization–which is puzzling.


Posted in: Clinical Trials, Health Fraud, Medical Ethics, Politics and Regulation, Science and Medicine

Leave a Comment (8) →

NIH funds training in behavioral intervention to slow progression of cancer by improving the immune system

Editor’s note: Because of Dr. Gorski’s appearance at CSICon over the weekend, he will be taking this Monday off. Fortunately, Dr. Coyne will more than ably substitute. Enjoy!




NIH is funding free training in the delivery of the Cancer to Health (C2H) intervention package, billed as “the first evidence-based behavioral intervention designed to patients newly diagnosed with cancer that is available for specialty training.” The announcement for the training claims that C2H “yielded robust and enduring gains, including reductions in patients’ emotional distress, improvements in social support, treatment adherence (chemotherapy), health behaviors (diet, smoking), and symptoms and functional status, and reduced risk for cancer recurrence.” Is this really an “empirically supported treatment” and does it reduce risk of cancer recurrence?

Apparently the NIH peer review committee thought there was sufficient evidence to fund this R25 training grant. Let’s look at the level of evidence for this intervention, an exercise that will highlight some of the pseudoscience and heavy-handed professional politics in promoting psychoneuroimmunological (PNI) interventions.


Posted in: Clinical Trials, Science and Medicine

Leave a Comment (19) →