Articles

Archive for Clinical Trials

Anecdotes: Cheaper by the Dozen

A loan officer sets up a meeting with an aspiring entrepreneur to inform him that his application has been denied. “Mr. Smith, we have reviewed your application and found a fatal flaw in your business plan. You say that you will be selling your donuts for 60 cents apiece.” “Yes,” says Mr. Smith, “that is significantly less than any other baker in town. This will give my business a significant competitive advantage!” The loan officer replies, “According to your budget, at peak efficiency the cost of supplies to make each donut is 75 cents, so you will lose 15 cents on every donut you sell.” A look of relief comes over Mr. Smith’s face as he realizes the loan officer’s misunderstanding. He leans in closer and whispers to the loan officer, “But don’t you see, I’ll make it up in volume.”

If you find this narrative at all amusing, it is likely because Mr. Smith is oblivious to what seems like an obvious flaw in his logic.

A similar error in logic is made by those who rely on anecdote and other intrinsically biased information to understand the natural world. If one anecdote is biased, a collection of 12 or 1,000 anecdotes multiplies the bias and will likely reinforce an erroneous conclusion. When it comes to bias, you can’t make it up in volume. Volume makes it worse!

Unfortunately, human beings are intrinsically vulnerable to bias. In most day-to-day decisions, like choosing which brand of toothpaste to buy or which route to drive to work, these biases are of little importance. In making critical decisions, like assessing the effectiveness of a new treatment for cancer, these biases may make the difference between life and death. The scientific method is defined by a system of practices that aim to minimize bias in the assessment of a problem.

Bias, in general, is a tendency that prevents unprejudiced consideration of a question (paraphrased from dictionary.com). Researchers describe sources of bias as systematic errors. A few words about random and systematic errors will make this description clearer.
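To make the distinction concrete, here is a minimal simulation; it is a sketch with made-up numbers (the true value, the bias, and the noise level are all invented parameters, not data from any study). Random error shrinks as observations pile up, but a systematic error of fixed size survives any sample size:

```python
import random
import statistics

random.seed(0)

TRUE_VALUE = 10.0  # the quantity we are trying to estimate
BIAS = 1.5         # systematic error: every observation skews the same way
NOISE = 3.0        # random error: scatter around the (biased) expectation

def observe():
    """One biased, noisy observation (an 'anecdote')."""
    return TRUE_VALUE + BIAS + random.gauss(0, NOISE)

for n in (12, 1_000, 100_000):
    estimate = statistics.mean(observe() for _ in range(n))
    print(f"n = {n:>6}: estimate = {estimate:.2f} (truth = {TRUE_VALUE})")

# Random error averages away; the systematic 1.5 never does. More
# anecdotes just buy more confidence in the wrong answer.
```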
(more…)

Posted in: Clinical Trials, Science and Medicine


Getting NCCAM’s money’s worth: Some results of NCCAM-funded studies of homeopathy

As hard as it is to believe, the Science-Based Medicine blog that you’re so eagerly reading is fast approaching its fifth anniversary of existence. The very first post here was a statement of purpose by Steve Novella on January 1, 2008, and my very first post was a somewhat rambling introduction that in retrospect is mildly embarrassing to me. It is what it is, however. The reason I mention this is because I want to take a trip down memory lane in order to follow up on one of my earliest posts for SBM, which was entitled The National Center for Complementary and Alternative Medicine (NCCAM): Your tax dollars hard at work. Specifically, I want to follow up on one specific study I mentioned that was funded by NCCAM.

Even though I not-so-humbly think that, even nearly five years later, my original post is worth reading in its entirety (weighing in at only 3,394 words, it’s even rather short—for me, at least), I’ll spare you that and cut straight to the chase, the better to discuss the study. It is a study of homeopathy. Yes, in contrast to the protestations of Dr. Josephine Briggs, the current director of NCCAM, that NCCAM doesn’t fund studies of such pure pseudoscience as homeopathy anymore (although she does apparently meet with homeopaths for “balance”), prior to Dr. Briggs’ tenure NCCAM actually did fund studies of the magic water with mystical memory known as homeopathy. I singled out two grants in particular for scorn. The principal investigator for both grants was Iris Bell, who is on the faculty at Andrew Weil’s center of woo at the University of Arizona. The first was an R21 grant for a project entitled Polysomnography in homeopathic remedy effects (NIH grant 1 R21 AT000388).
(more…)

Posted in: Basic Science, Clinical Trials, Homeopathy


“Moneyball,” the 2012 election, and science- and evidence-based medicine

Regular readers of my other blog probably know that I’m into more than just science, skepticism, and promoting science-based medicine (SBM). I’m also into science fiction, computers, and baseball, not to mention politics (at least more than average). That’s why our recent election, coming as it did hot on the heels of the World Series in which my beloved Detroit Tigers utterly choked, got me to thinking. Actually, it was more than just that. It was also an article that appeared a couple of weeks before the election in the New England Journal of Medicine entitled Moneyball and Medicine, by Christopher J. Phillips, PhD, Jeremy A. Greene, MD, PhD, and Scott H. Podolsky, MD. In it, they compare what they call “evidence-based” baseball to “evidence-based medicine,” something that is not as far-fetched as one might think.

“Moneyball,” as baseball fans know, refers to a book by Michael Lewis entitled Moneyball: The Art of Winning an Unfair Game. Published in 2003, Moneyball is the story of the Oakland Athletics and their general manager Billy Beane, and how the A’s managed to field a competitive team even though the organization was—shall we say?—“revenue challenged” compared to big-market teams like the New York Yankees. The central premise of the book was that the collective wisdom of baseball leaders, such as managers, coaches, scouts, owners, and general managers, was flawed and too subjective. Using rigorous statistical analysis, the A’s front office determined various metrics that were better predictors of offensive success than previously used indicators. For example, conventional wisdom at the time valued stolen bases, runs batted in, and batting average, but the A’s determined that on-base percentage and slugging percentage were better predictors, and cheaper to obtain on the free market, to boot. As a result, the 2002 Athletics, with a payroll of $41 million (the third lowest in baseball), were able to compete in the market against teams like the Yankees, which had a payroll of $125 million. The book also discussed the A’s farm system and how it determined which players were more likely to develop into solid major league players, as well as the history of sabermetric analysis, a term coined by one of its pioneers, Bill James, after SABR, the Society for American Baseball Research. Sabermetrics is basically concerned with determining the value of a player or team in current or past seasons and with predicting the value of a player or team in the future.
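For readers who don’t live and breathe box scores, the metrics in question are simple ratios. Here is a quick sketch using an invented stat line (the numbers are hypothetical; the formulas are the standard definitions of these statistics):

```python
# Hypothetical season line, not any real player's stats.
stats = {"AB": 500, "H": 140, "2B": 30, "3B": 3, "HR": 25,
         "BB": 70, "HBP": 5, "SF": 4}

def batting_average(s):
    return s["H"] / s["AB"]

def on_base_percentage(s):
    # Counts walks and hit-by-pitch as successes, unlike batting average.
    times_on = s["H"] + s["BB"] + s["HBP"]
    chances = s["AB"] + s["BB"] + s["HBP"] + s["SF"]
    return times_on / chances

def slugging_percentage(s):
    # Weights hits by the number of bases they produce.
    singles = s["H"] - s["2B"] - s["3B"] - s["HR"]
    total_bases = singles + 2 * s["2B"] + 3 * s["3B"] + 4 * s["HR"]
    return total_bases / s["AB"]

print(f"AVG {batting_average(stats):.3f}  "
      f"OBP {on_base_percentage(stats):.3f}  "
      f"SLG {slugging_percentage(stats):.3f}")
```

Two hitters with identical batting averages can have very different OBP and SLG, which is why the latter turned out to be the better predictors of run production.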
(more…)

Posted in: Clinical Trials, Politics and Regulation, Science and Medicine, Science and the Media


It’s time for true transparency of clinical trials data

What makes a health professional science-based? We advocate for evaluations of treatments, and treatment decisions, based on the best research methods. We compile evidence based on fair trials that minimize the risks of bias. And, importantly, we consider this evidence in the context of the plausibility of the treatment. The fact is, it’s actually not that hard to get a positive result in a trial, especially when it’s sloppily done or biased. And there are many ways to design a trial to demonstrate positive results in some subgroup, as Kimball Atwood pointed out earlier this week. And even when a trial is well done, there remains the risk of error simply due to chance alone. So to sort out true treatment effects from fake effects, two key steps are helpful in reviewing the evidence.

1. Take prior probability into account when assessing data. While a detailed explanation of Bayes’ Theorem could take several posts, consider prior probability this way: any test has flaws and limitations. Tests give probabilities based on the test method itself, not on what is being tested. Consequently, in order to evaluate the probability of “x” given a test result, we must incorporate the pre-test probability of “x”. Bayesian analysis uses any existing data, plus the data collected in the test, to give a prediction that factors in prior probabilities (a worked example follows these two steps). It’s part of the reason why most published research findings are false.

2. Use systematic reviews to evaluate all the evidence. The best way to answer a specific clinical question is to collect all the potentially relevant information in a structured way, consider its quality, analyze it according to predetermined criteria, and then draw conclusions. A systematic review reduces the risk of cherry picking and author bias, compared to non-systematic data-collection or general literature reviews of evidence. A well-conducted systematic review will give us an answer based on the totality of evidence available, and is the best possible answer for a given question.
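Here is the worked example promised above, a minimal sketch rather than a formal treatment. Assume a trial with 80% power and the usual 5% false-positive threshold, and ask: given a “statistically significant” result, what is the probability the effect is real? The priors below are hypothetical, chosen only to illustrate the range:

```python
def ppv(prior, power=0.8, alpha=0.05):
    """Probability that a statistically significant result reflects a
    real effect, given the pre-test probability the hypothesis is true.
    Ignores bias and multiple comparisons, which only make things worse."""
    true_pos = power * prior          # real effects correctly detected
    false_pos = alpha * (1 - prior)   # null effects crossing p < alpha
    return true_pos / (true_pos + false_pos)

# Hypothetical priors, for illustration only:
for label, prior in [("plausible drug", 0.50),
                     ("long-shot drug", 0.10),
                     ("homeopathy-like claim", 0.001)]:
    print(f"{label:21s} prior = {prior:.3f} -> "
          f"P(real | p < 0.05) = {ppv(prior):.2f}")
```

With a prior of 0.5 a significant result is about 94% likely to be real; with a prior of 0.001 it is under 2%, even though the trial itself looks identical on paper.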

These two steps are critically important, and so have been discussed repeatedly by the contributors to this blog. What is obvious, but perhaps not as well understood, is how our reviews can still be significantly flawed, despite best efforts. In order for our evaluation to accurately consider prior probability, and to be systematic, we need all the evidence. Unfortunately, that’s not always possible if clinical trials remain unpublished or are otherwise inaccessible. There is good evidence to show that negative studies are less likely to be published than positive studies. Sometimes called the “file drawer” effect, it’s not solely the fault of investigators, as journals seeking positive results may decline to publish negative studies. But unless these studies are found, systematic reviews are more likely to miss negative data, which means there’s a risk of bias in favor of an intervention. How bad is the problem? We really have no complete way to know, for any particular clinical question, just how much is missing or buried. This is a problem that has confounded researchers and authors of systematic reviews for decades. (more…)
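A toy simulation shows why buried trials matter. Everything here is invented (an inert treatment, made-up trial sizes, and a crude publication filter), but the mechanism is the one described above: the full literature averages to roughly zero, while the “published” subset does not.

```python
import random
import statistics

random.seed(1)

TRUE_EFFECT = 0.0        # the treatment actually does nothing
N_TRIALS, N_PER_ARM = 200, 50

def run_trial():
    """One small two-arm trial of an inert treatment; returns the
    observed difference in means (the 'effect size')."""
    treated = [random.gauss(TRUE_EFFECT, 1) for _ in range(N_PER_ARM)]
    control = [random.gauss(0, 1) for _ in range(N_PER_ARM)]
    return statistics.mean(treated) - statistics.mean(control)

effects = [run_trial() for _ in range(N_TRIALS)]

# A review that finds *every* trial gets it about right...
print(f"all {N_TRIALS} trials:      mean effect = "
      f"{statistics.mean(effects):+.3f}")

# ...but if unimpressive results stay in the file drawer, the published
# record overstates the effect.
published = [e for e in effects if e > 0.1]  # crude publication filter
print(f"{len(published)} 'published' trials: mean effect = "
      f"{statistics.mean(published):+.3f}")
```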

Posted in: Clinical Trials, Politics and Regulation


The result of the Trial to Assess Chelation Therapy (TACT): As underwhelming as expected

Chelation therapy.

It’s one of the most common quackeries out there, used by a wide variety of practitioners for a wide variety of ailments blamed on “heavy metal toxicity.” Chelation therapy, which involves using chemicals that can bind to metal ions and allow them to be excreted by the kidneys, is actually standard therapy for certain conditions, such as iron overload due to transfusion, aluminum overload due to hemodialysis, copper toxicity due to Wilson’s disease, acute heavy metal poisoning, and a handful of other indications.

My personal interest in chelation therapy developed out of its use by unscrupulous practitioners who blamed autism on the mercury-containing thimerosal preservative that was used in many childhood vaccines until 2001. Thimerosal has since all but disappeared from such vaccines, except for the flu vaccine (for which a thimerosal-free alternative is available) and trace amounts in some other vaccines. Mercury became a convenient bogeyman to add to the list of “toxins” antivaccinationists hype in vaccines. In fact, my very first post after I introduced myself on this very blog discussed the idea that mercury in vaccines was a significant cause of autism and autism spectrum disorders, and I’ve periodically written about such things ever since, in particular the bad science of Mark and David Geier, whose claim that chemical castration of children with Lupron “works” against “mercury-induced” autism is based on the chemically ridiculous idea that somehow testosterone binds mercury and makes it harder to chelate. Unfortunately, this particular autism quackery has real consequences and has been responsible for the death of a child.

Chelation isn’t just for autism, however. Although many practitioners advertise it for autism, cancer (often with dubious studies that I might have to take a look at), Alzheimer’s disease (which Hugh Fudenberg has blamed on the flu vaccine, a claim parroted by Bill Maher, of course!), and just about every ailment under the sun, it’s easy to forget that the original use for chelation therapy promoted by “alternative medicine” practitioners was for cardiovascular disease. On a strictly stoichiometric and pharmacological basis, its use for coronary artery disease or autism is extremely implausible. Moreover, it is not without potential complications, including renal damage and cardiac arrhythmias due to sudden drops in calcium levels. Such arrhythmias can kill and have killed children; in adults, complications have included renal failure and death.
(more…)

Posted in: Clinical Trials


The Trial to Assess Chelation Therapy: Equivocal as Predicted

The ill-advised, NIH-sponsored Trial to Assess Chelation Therapy (TACT) is finally over. 839 human subjects were randomized to receive Na2EDTA infusions; 869 were randomized to receive placebo infusions. The results were announced at this weekend’s American Heart Association meeting in Los Angeles. In summary, the TACT authors report a slight advantage for chelation over placebo in the “primary composite endpoint,” a combination of five separate outcomes: death, myocardial infarction, stroke, coronary revascularization, and hospitalization for angina.

Although that result may seem intriguing, it becomes less so when the data are examined more carefully. First, it barely achieved the pre-ordained level of statistical significance, which was P=.036. Second, none of the individual components of the composite endpoint achieved statistical significance, and most of the absolute difference was in coronary revascularization, which is puzzling.
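To see how a composite can reach significance when none of its parts do, consider this sketch. Only the arm sizes come from the trial; the event counts are invented for illustration (they are NOT the actual TACT results), and for simplicity each patient is assumed to have at most one type of event:

```python
import math

def two_sided_p(e1, n1, e2, n2):
    """Two-proportion z-test p-value (pooled standard error)."""
    p1, p2 = e1 / n1, e2 / n2
    pooled = (e1 + e2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return math.erfc(abs(p1 - p2) / se / math.sqrt(2))  # two-sided tail

n_chel, n_plac = 839, 869  # arm sizes from the post

# Hypothetical event counts (chelation, placebo): each component shows a
# small, non-significant deficit in the chelation arm.
components = {
    "death":                      (70, 80),
    "myocardial infarction":      (50, 60),
    "stroke":                     (10, 14),
    "revascularization":          (120, 150),
    "hospitalization for angina": (12, 16),
}
for name, (e_c, e_p) in components.items():
    print(f"{name:27s} p = {two_sided_p(e_c, n_chel, e_p, n_plac):.3f}")

# Pooled into one composite endpoint, the small differences add up.
total_c = sum(c for c, _ in components.values())
total_p = sum(p for _, p in components.values())
print(f"{'composite':27s} p = "
      f"{two_sided_p(total_c, n_chel, total_p, n_plac):.3f}")
```

With these made-up counts every component has p > 0.09, yet the composite comes in around p = 0.015, comfortably under a .036 threshold; many small, individually unconvincing differences can pool into one “significant” headline number.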

(more…)

Posted in: Clinical Trials, Health Fraud, Medical Ethics, Politics and Regulation, Science and Medicine


NIH funds training in behavioral intervention to slow progression of cancer by improving the immune system

Editor’s note: Because of Dr. Gorski’s appearance at CSICon over the weekend, he will be taking this Monday off. Fortunately, Dr. Coyne will more than ably substitute. Enjoy!

[Image: the Texas sharpshooter fallacy]

NIH is funding free training in the delivery of the Cancer to Health (C2H) intervention package, billed as “the first evidence-based behavioral intervention designed to [sic] patients newly diagnosed with cancer that is available for specialty training.” The announcement for the training claims that C2H “yielded robust and enduring gains, including reductions in patients’ emotional distress, improvements in social support, treatment adherence (chemotherapy), health behaviors (diet, smoking), and symptoms and functional status, and reduced risk for cancer recurrence.” Is this really an “empirically supported treatment” and does it reduce risk of cancer recurrence?

Apparently the NIH peer review committee thought there was sufficient evidence to fund this R25 training grant. Let’s look at the level of evidence for this intervention, an exercise that will highlight some of the pseudoscience and heavy-handed professional politics in promoting psychoneuroimmunological (PNI) interventions.

(more…)

Posted in: Clinical Trials, Science and Medicine


The Placebo Gene?

A study recently published in PLOS ONE (Catechol-O-Methyltransferase val158met Polymorphism Predicts Placebo Effect in Irritable Bowel Syndrome) purports to have found a gene variant that correlates strongly with a placebo response in irritable bowel syndrome (IBS). The study is small and preliminary, but the results are interesting and do raise important questions about placebo responses.

Researchers are increasingly trying to tease apart the various components of “the placebo effect.” In reality, we should use the term “placebo effects,” as the phenomenon is demonstrably multifactorial. “The placebo effect” really refers to whatever is measured in the placebo arm of a clinical trial – everything other than a physiological response to an active intervention. Within that measured response there are many potential factors that could cause the outcome of a fake treatment to differ from no treatment at all. These include statistical effects like regression to the mean and the natural course of symptoms and illness, reporting bias on the part of the subject, and a non-specific response to the therapeutic interaction with the practitioner.

It is also critical to realize that placebo responses vary greatly depending on the disease or symptom that is being treated and the outcome that is being measured. Placebo response is greatest for subjective symptoms of conditions that are known to be modified by things like mood and attention, while it is virtually non-existent for objective outcomes in pathological conditions. So there is a substantial placebo response for pain and nausea, but nothing significant for cancer survival.
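Regression to the mean alone can manufacture an impressive-looking “placebo response.” Here is a minimal simulation with an invented 0–10 symptom scale and made-up noise levels: enroll only patients who score badly at screening, give them nothing at all, and re-measure.

```python
import random
import statistics

random.seed(42)

N = 100_000
CUTOFF = 7.0  # enroll only patients scoring at least this badly (0-10 scale)

screens, followups = [], []
for _ in range(N):
    true_level = random.gauss(5.0, 1.5)         # stable underlying symptom level
    screen = true_level + random.gauss(0, 1.5)  # noisy measurement at screening
    if screen >= CUTOFF:                        # trials enroll "bad day" patients
        screens.append(screen)
        followups.append(true_level + random.gauss(0, 1.5))  # untreated re-test

print(f"enrolled {len(screens)} of {N}")
print(f"mean score at screening: {statistics.mean(screens):.2f}")
print(f"mean score at follow-up: {statistics.mean(followups):.2f} (no treatment)")
# The follow-up mean falls back toward the population mean: an apparent
# 'improvement' with no intervention at all.
```

Because enrollment selects for patients caught on a bad day, their next measurement tends to be better regardless of what, if anything, is done to them.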

(more…)

Posted in: Clinical Trials


Chinese Systematic Reviews of Acupuncture

I’ll begin with the possibly shocking admission that I’m a strong supporter of the collection of ideas and techniques known as evidence-based medicine (EBM). I’m even the current President of the Evidence-Based Veterinary Medicine Association (EBVMA). This may seem a bit heretical in this context, since EBM takes a lot of heat in this blog. But as Dr. Atwood has said, “we at SBM are in total agreement…that EBM ‘should not be without consideration of prior probability, laws of physics, or plain common sense,’ and that SBM and EBM should not only be mutually inclusive, they should be synonymous.” So I have hope that by emphasizing the distinction between SBM and EBM and the limitations of EBM, we can engender the kind of changes in approach needed to address those limitations and eliminate the need for the distinction. One way of doing this is to critically evaluate the misuses of EBM in support of alternative therapies.

One of the highest levels of evidence in the hierarchy of evidence-based medicine is the systematic review. Unlike narrative reviews, in which an author selects the studies they consider relevant and then summarizes what they think those studies mean, a process subject to a high risk of bias, a systematic review identifies randomized controlled clinical trials according to an explicit and objective set of criteria established ahead of time. Predetermined criteria are also used to grade the included studies by quality, so that any relationship between how well studies are conducted and their results can be identified. Done well, a systematic review gives a good sense of the balance of the evidence for a specific medical question.
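As a sketch of the mechanics, here is why grading matters. The effect estimates, standard errors, and quality grades below are all invented, and fixed-effect inverse-variance pooling is just one standard way to combine trials; the point is that stratifying by quality can reveal when the weaker trials are driving a large apparent effect:

```python
import math

# Hypothetical log odds ratios (negative = favors treatment) with their
# standard errors and a predetermined quality grade; not real trial data.
trials = [
    (-0.10, 0.08, "high"),
    (-0.05, 0.12, "high"),
    (-0.40, 0.15, "low"),
    (-0.55, 0.20, "low"),
]

def pooled(subset):
    """Fixed-effect inverse-variance pooled estimate and standard error."""
    weights = [1 / se ** 2 for _, se, _ in subset]
    estimate = sum(w * eff for w, (eff, _, _) in zip(weights, subset))
    estimate /= sum(weights)
    return estimate, math.sqrt(1 / sum(weights))

for grade in ("high", "low"):
    subset = [t for t in trials if t[2] == grade]
    est, se = pooled(subset)
    print(f"{grade:4s}-quality trials: pooled effect {est:+.2f} "
          f"(95% CI {est - 1.96 * se:+.2f} to {est + 1.96 * se:+.2f})")
```

In this made-up dataset the high-quality trials pool to a near-null effect while the low-quality trials pool to a large one, exactly the pattern a good systematic review is designed to expose.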

Unfortunately, poorly done systematic reviews can create a strong but inaccurate impression that there is high-level, high-quality evidence in favor of a hypothesis when there really isn’t. Reviews of acupuncture research illustrate this quite well.

(more…)

Posted in: Acupuncture, Clinical Trials


Mouse “avatars”: New predictors of response to chemotherapy?

Over the years, I’ve written a lot about “personalized medicine,” mainly in the context of how the breakthroughs in genomic medicine and the data pouring in from the Cancer Genome Atlas are providing the raw information necessary for developing truly personalized cancer therapy. The problem, of course, is analyzing it all and figuring out how to apply it. Another problem is developing the necessary targeted drugs to attack the pathways that are identified as being dysregulated in cancer cells. Oh, and there’s that pesky evolution of resistance to antitumor therapies. Indeed, most recently, the Cancer Genome Atlas is bearing fruit in breast cancer (a study that I’ve been meaning to blog about).

One problem with modeling the pathways based on next-generation sequencing data and expression profiling is testing whether therapies predicted to work by these analyses actually do work, without actually testing potentially toxic drugs on patients. Cell culture is notoriously unreliable as a predictor. However, there is another way that’s intriguing. Unfortunately, as intriguing as it is, it has numerous problems, and it’s being prematurely marketed to patients. Although I had heard of this technique as a research tool before, I learned about its marketing to patients when I came across an article by Andrew Pollack in the New York Times entitled Seeking Cures, Patients Enlist Mice Stand-Ins. Basically, it’s about a trend in science and among patients to use custom, “personalized” mouse xenograft models in order to do “personalized” therapy:
(more…)

Posted in: Basic Science, Cancer, Clinical Trials
