Posts Tagged Clinical Trials

Stanislaw Burzynski: A deceptive propaganda movie versus an upcoming news report

Well, I’ve finally seen it, and it was even worse than I had feared.

After having heard of Eric Merola’s plan to make a sequel to his 2010 propaganda “documentary” about Stanislaw Burzynski, Burzynski The Movie: Cancer Is Serious Business, which I labeled a bad movie, bad medicine, and bad PR, I’ve finally actually seen the finished product, such as it is. Of course, during the months between when Eric Merola first offered me an “opportunity” to appear in the sequel based on my intense criticism of Burzynski’s science, abuse of the clinical trials process, and human subjects research ethics during the last 18 months or so, there has been intense speculation about what this movie would contain, particularly given how Merola’s publicity campaign involved demonizing skeptics, now rechristened by Merola as “The Skeptics,” a shadowy cabal of people apparently dedicated (according to Merola) to protecting big pharma and making sure that patients with deadly cancers don’t have access to Burzynski’s magic peptides, presumably cackling all the way to the bank to cash those big pharma checks.
(more…)

Posted in: Cancer, Clinical Trials, Science and the Media

Leave a Comment (71) →

Dr. Stanislaw Burzynski’s antineoplastons versus patients

Prelude: Doin’ the Antineoplaston Boogaloo with Eric Merola and Stanislaw Burzynski

In December I noted that Eric Merola, the “film maker” (and, given the quality of his work, I do use that term loosely) who was responsible for a movie that was such blatant propaganda that it would make Leni Riefenstahl blush were she still alive (Burzynski The Movie: Cancer Is Serious Business, in case anyone’s interested), was planning on releasing another propaganda “documentary” about Stanislaw Burzynski later this year. Merola decided to call it Burzynski: Cancer Is Serious Business, Chapter 2 | A Modern Story. Wondering what it is with Merola and the multiple subtitles, I had been hoping he would call the Burzynski sequel something like Burzynski The Movie II: This Time It’s Peer-Reviewed (except that it’s still not, not really, and I can’t take credit for that joke, as much as I wish I could) or Burzynski The Movie II: Even Burzynskier Than The First, or Burzynski The Movie II: Burzynski Harder. Mercifully, I doubt even Merola would call the film Burzynski II: Antineoplaston Boogaloo. (If you don’t get this last joke because you are either not from the US or are too young to remember, check out the Urban Dictionary.)

In any case, Merola named the sequel what he named it, and we can all look forward to yet another propaganda film chock full of conspiracy theories in which the FDA, Texas Medical Board, National Cancer Institute, and, for all I know, the CIA, FBI, and NSA are all out to get Merola’s heroic “brave maverick doctor,” along with a website full of a “sourced transcript” to be used by Burzynski minions and shills everywhere to attack any skeptic who dares to speak out. The only good thing about it, if you can call it that, is that I’m guaranteed material for at least one juicy blog post, at least as long as I can find a copy of Burzynski II online, as I was able to do with Burzynski I, thanks to Mike Adams at NaturalNews.com and other “alternative sites” that were allowed to show the whole movie for a week or so before folks like Joe Mercola were allowed to feature the complete film on their websites indefinitely.

Maybe Eric Merola will send me a DVD review copy when the movie is released. Or maybe not.
(more…)

Posted in: Cancer, Clinical Trials

Leave a Comment (30) →

It’s time for true transparency of clinical trials data

What makes a health professional science-based? We advocate for evaluations of treatments, and treatment decisions, based on the best research methods. We compile evidence based on fair trials that minimize the risks of bias. And, importantly, we consider this evidence in the context of the plausibility of the treatment. The fact is, it’s actually not that hard to get a positive result in a trial, especially when it’s sloppily done or biased. And there are many ways to design a trial to demonstrate positive results in some subgroup, as Kimball Atwood pointed out earlier this week. And even when a trial is well done, there remains the risk of error simply due to chance alone. So to sort out true treatment effects from fake effects, two key steps are helpful in reviewing the evidence.

1. Take prior probability into account when assessing data. While a detailed explanation of Bayes’ theorem could take several posts, consider prior probability this way: Any test has flaws and limitations. Tests give probabilities based on the test method itself, not on what is being tested. Consequently, in order to evaluate the probability of “x” given a test result, we must incorporate the pre-test probability of “x”. Bayesian analysis uses any existing data, plus the data collected in the test, to give a prediction that factors in prior probabilities. It’s part of the reason why most published research findings are false.
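The arithmetic here is worth making concrete. Treating a clinical trial as a diagnostic test for “this treatment works,” a few lines of Python show how the same positive result means very different things depending on the prior. The 80% power and 5% false-positive rate below are illustrative assumptions, not figures from any particular trial:

```python
def posterior_prob_true(prior, power=0.80, alpha=0.05):
    """P(effect is real | trial is 'positive'), via Bayes' theorem.

    power: P(positive trial | effect is real)
    alpha: P(positive trial | no effect), i.e. the false-positive rate
    """
    return (power * prior) / (power * prior + alpha * (1 - prior))

# A positive trial of a highly implausible treatment (1% prior probability):
print(f"{posterior_prob_true(0.01):.3f}")  # ~0.139 -- still probably a false positive

# The same positive trial of a plausible treatment (50% prior probability):
print(f"{posterior_prob_true(0.50):.3f}")  # ~0.941
```

In other words, with a vanishingly small prior (homeopathy, say), even a nominally “positive” trial leaves the treatment far more likely to be a false positive than a real effect.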

2. Use systematic reviews to evaluate all the evidence. The best way to answer a specific clinical question is to collect all the potentially relevant information in a structured way, consider its quality, analyze it according to predetermined criteria, and then draw conclusions. A systematic review reduces the risk of cherry picking and author bias, compared to non-systematic data-collection or general literature reviews of evidence. A well-conducted systematic review will give us an answer based on the totality of evidence available, and is the best possible answer for a given question.

These two steps are critically important, and so have been discussed repeatedly by the contributors to this blog. What is obvious, but perhaps not as well understood, is how our reviews can still be significantly flawed, despite best efforts. In order for our evaluation to accurately consider prior probability, and to be systematic, we need all the evidence. Unfortunately, that’s not always possible if clinical trials remain unpublished or are otherwise inaccessible. There is good evidence to show that negative studies are less likely to be published than positive studies. Sometimes called the “file drawer” effect, it’s not solely the fault of investigators, as journals seeking positive results may decline to publish negative studies. But unless these studies are found, systematic reviews are more likely to miss negative data, which means there’s the risk of bias in favor of an intervention. How bad is the problem? We really have no complete way to know, for any particular clinical question, just how much is missing or buried. This is a problem that has confounded researchers and authors of systematic reviews for decades. (more…)
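The size of the distortion the file drawer can cause is easy to demonstrate with a toy simulation (the numbers are illustrative, not drawn from any real literature): if only the trials that happen to cross the significance threshold get published, a treatment with zero true effect looks impressively effective in the published record.

```python
import random
import statistics

random.seed(42)

TRUE_EFFECT = 0.0   # the treatment actually does nothing
SE = 1.0            # standard error of each trial's effect estimate
N_TRIALS = 10_000

# Each simulated trial's estimate is the true effect plus sampling noise.
estimates = [random.gauss(TRUE_EFFECT, SE) for _ in range(N_TRIALS)]

# Suppose a trial is "published" only if it looks significantly
# positive (z-score above 1.96).
published = [e for e in estimates if e / SE > 1.96]

print(f"All {N_TRIALS} trials, mean effect: {statistics.mean(estimates):+.3f}")
print(f"{len(published)} published trials, mean effect: {statistics.mean(published):+.3f}")
```

The full set of trials averages out to roughly zero, as it should, while the “published” subset averages an effect of more than two standard errors. A systematic review that can only see the published subset would confidently conclude the worthless treatment works.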

Posted in: Clinical Trials, Politics and Regulation

Leave a Comment (18) →

News flash! Doctors aren’t all compliant pharma drones!

There’s an oft-quoted saying that’s become a bit of a cliché among skeptics that goes something like this: There are two kinds of medicine: medicine that’s been proven scientifically to work, and medicine that hasn’t. This is then often followed up with a rhetorical question and its answer: What do you call “alternative medicine” that’s been proven to work? Medicine. Of course, being the kind of guy that I am, I have to make it a bit more complicated than that while driving home in essence the same message. In my hands, the way this argument goes is that the whole concept of “alternative” medicine is a false dichotomy. There is no such thing. In reality, there are three kinds of medicine: Medicine that has been shown to be efficacious and safe (i.e., shown to work); medicine that has not yet been shown to work (i.e., that is unproven); and medicine that has been shown not to work (i.e., that is disproven). So-called “complementary and alternative medicine” (CAM or, its newer, shinier name, “integrative medicine”) consists almost completely of the latter two categories.

Part of the reason why this saying and its variants have become so commonplace among those of us who support science-based medicine is that they strike at a common truth about medicine, both science-based and “alternative.” That common truth is what we here at SBM have been arguing since the very inception of this blog, namely that there must be one science-based standard of evidence for all treatments, be they “alternative” or the latest creation of big pharma. That point informs everything I write here and everything my blogging partners in crime write about too. What that means is a single, clear set of standards for evaluating medical evidence, in which clinical evidence is coupled to basic science and scientific plausibility. Indeed, one of our main complaints against CAM and its supporters has been how they invoke a double standard, in which they expect their therapies to be accepted as “working” on the basis of a much lower standard of evidence. Indeed, when they see high quality clinical trials demonstrating that, for example, acupuncture doesn’t work, they will frequently advocate the use of “pragmatic” trials, lower quality trials of “real world effectiveness” that do not adequately control for placebo effects. It’s putting the cart before the horse.
(more…)

Posted in: Clinical Trials, Pharmaceuticals, Politics and Regulation

Leave a Comment (205) →

Related by coincidence only? University and medical journal press releases versus journal articles

There are certain topics in Science-Based Medicine (or, in this case, considering the difference between SBM and quackery) that keep recurring over and over. One of these, which is of particular interest to me because I am a cancer surgeon specializing in breast cancer, is the issue of alternative medicine use for cancer therapy. Yesterday, I posted a link to an interview that I did for Uprising Radio that aired on KPFK 90.7 Los Angeles. My original intent was to do a followup post about how that interview came about and to discuss the Gerson therapy, a particularly pernicious and persistent form of quackery. However, it occurred to me as I began to write the article that it would be better to wait a week. The reason is that part of how this interview came about involved three movies, one of which I’ve seen and reviewed before, two of which I have not. In other words, there appears to be a concerted effort to promote the Gerson therapy more than ever before, and it seems to be bearing fruit. In order to give you, our readers, the best discussion possible, I felt it was essential to watch the other two movies. So discussion of the Gerson protocol will have to wait a week or two.

In the meantime, there’s something else that’s been eating me. Whether it’s confirmation bias or something else, whenever something’s been bugging me it’s usually not long before I find a paper or online source to discuss it. In this case, it’s the issue of why scientific studies are reported so badly in the press. It’s a common theme, one that’s popped up on SBM time and time again. Why are medical and scientific studies reported so badly in the lay press? Some would argue that it has something to do with the decline of old-fashioned dead tree media. With content moving online, newspapers, magazines, and other media are struggling to find a way to provide content (which Internet users have come to expect to be free online) and still make a profit. The result has been the decline of specialized journalists, such as science and medical writers. That’s too easy of an answer, though. As is usually the case, things are a bit more complicated. More importantly, we in academia need to take our share of the blame. A few months ago, Lisa Schwartz and colleagues (the same Lisa Schwartz who with Steven Woloshin at Dartmouth University co-authored an editorial criticizing the Susan G. Komen Foundation for having used an inappropriate measure in one of its ads) actually attempted to look at how much we as an academic community might be responsible for bad reporting of new scientific findings by examining the relationship between the quality of press releases issued by medical journals to describe research findings by their physicians and scientists and the subsequent media reports of those very same findings. The CliffsNotes version of their findings is that we have a problem in academia, and our hands are not entirely clean of the taint of misleading and exaggerated reporting.
The study was reported by Schwartz et al. in their BMJ article entitled Influence of medical journal press releases on the quality of associated newspaper coverage: retrospective cohort study. It’s an article I can’t believe I missed when it came out earlier this year.
(more…)

Posted in: Clinical Trials, Medical Academia, Science and the Media

Leave a Comment (11) →

Meet the new drugs, same as the old drugs?

“Targeted therapy.” It’s the holy grail of cancer research these days. If you listen to its most vocal proponents, it’s the path towards “personalized medicine” that improves survival with much lower toxicity. With the advent of the revolution in genomics that has transformed cancer research over the last decade, including the petabytes of sequence and gene expression data that pour out of universities and research institutes, the promise of one day being able to sequence a patient’s tumor, determine the specific derangements in genome and gene expression that drive its uncontrolled proliferation, and find drugs to target these abnormalities seems more tantalizingly close than ever. Indeed, it seems so close that even dubious practitioners, such as Stanislaw Burzynski, have jumped on the bandwagon, co-opting the terms used by real oncologists and real cancer researchers to sell “personalized gene-targeted cancer therapy,” which in their hands is really no more than a parody of efforts to synthesize the enormous quantity of genomic data each patient’s tumor possesses and figure out how best to take advantage of it, a “personalized genomic therapy for dummies,” if you will.

That’s not to say that there aren’t roadblocks to realizing this vision. The problems to be overcome are substantial, and I’ve discussed them multiple times before. For example, just a couple of weeks ago I discussed an example of just what it takes to apply these new genomic techniques to an individual patient. The resources required are staggering, and, more problematic, there often aren’t any single “magic bullet” molecular pathways identified that can be targeted with existing drugs. The case I discussed was a fortunate man indeed in that such a pathway was identified, but most tumors are driven by many derangements in growth control, metabolism, migration, and the other hallmarks of malignancy described by Robert Weinberg. Worse, in many cases we don’t even have drugs that can attack many of the abnormalities that drive cancer progression. Then there’s the issue of tumor heterogeneity, which comes about because cancer is as good an example of a disease as I can think of in which evolution due to natural selection results in incredible differences in the cancer cells in one part of the tumor compared to other parts of the tumor or in the tumor metastases. A “targeted” therapy that targets the genetic abnormalities in one part of the cancer might well fail to target the genetic abnormalities driving another part of the tumor.

These, and many other reasons, are why we haven’t “cured cancer” yet.
(more…)

Posted in: Cancer, Clinical Trials

Leave a Comment (5) →

The problem with preclinical research? Or: A former pharma exec discovers the nature of science

If there’s one thing about quacks, it’s that they are profoundly hostile to science. Actually, they have a seriously mixed up view of science in that they hate it because it doesn’t support what they believe. Yet at the same time they very much crave the imprimatur that science provides. When science tells them they are wrong, they therefore often try to attack the scientific method itself or claim that they are the true scientists. We see this behavior not just in quackery but any time scientific findings collide with entrenched belief systems, for example, medicine, evolution, anthropogenic global warming, and many others. So it was not surprising that a rant I saw a few weeks ago by a well-known supporter of pseudoscience who blogs under the pseudonym of Vox Day caught my interest. Basically, he saw a news report about an article in Nature condemning the quality of current preclinical research. From it, he draws exactly the wrong conclusions about what this article means for medical science:

Fascinating. That’s an 88.6 percent unreliability rate for landmark, gold-standard science. Imagine how bad it is in the stuff that is only peer-reviewed and isn’t even theoretically replicable, like evolutionary biology. Keep that figure in mind the next time some secularist is claiming that we should structure society around scientific technocracy; they are arguing for the foundation of society upon something that has a reliability rate of 11 percent.

Now, I’ve noted previously that atheists often attempt to compare ideal science with real theology and noted that in a fair comparison, ideal theology trumps ideal science. But as we gather more evidence about the true reliability of science, it is becoming increasingly obvious that real theology also trumps real science. The selling point of science is supposed to be its replicability… so what is the value of science that cannot be repeated?

No, a problem with science as it is carried out by scientists in the real world doesn’t mean that religion is true or that a crank like Vox is somehow the “real” intellectual defender of science. Later, Vox doubles down on his misunderstanding by trying to argue that the problem in this article means that science is not, in fact, “self-correcting.” This is, of course, nonsense in that the very article Vox is touting is an example of science trying to correct itself. Be that as it may, none of this is surprising, given that Vox has demonstrated considerable crank magnetism, being antivaccine, anti-evolution, an anthropogenic global warming denialist, and just in general anti-science, but he’s not alone. Quackery supporters of all stripes are jumping on the bandwagon to imply that this study somehow “proves” that the scientific basis of medicine is invalid. A writer at Mike Adams’ wretched hive of scum and quackery, NaturalNews.com, crows:

Begley says he cannot publish the names of the studies whose findings are false. But since it is now apparent that the vast majority of them are invalid, it only follows that the vast majority of modern approaches to cancer treatment are also invalid.

But does this study show this? I must admit that it was a topic of conversation at the recent AACR meeting, given that the article was published shortly before the meeting. It’s also been a topic of e-mail conversations and debates at my very own institution. But do the findings reported in this article mean that the scientific basis of cancer treatment is so off-base that quackery of the sort championed by Mike Adams is a viable alternative or that science-based medicine is irrevocably broken?

Not so fast there, pardner…
(more…)

Posted in: Basic Science, Cancer, Clinical Trials

Leave a Comment (16) →

Revisiting Daniel Moerman and “placebo effects”

About three weeks ago, ironically enough, right around the time of TAM 9, the New England Journal of Medicine (NEJM) inadvertently provided us in the form of a new study on asthma and placebo effects not only material for our discussion panel on placebo effects but material for multiple posts, including one by me, one by Kimball Atwood, and one by Peter Lipson, the latter two of whom tried to point out that certain uses of these results could result in patients dying. Meanwhile, Mark Crislip, in his ever-inimitable fashion, discussed the study as well, using it to liken complementary and alternative medicine (CAM) to the “beer goggles of medicine,” a line I totally plan on stealing. The study itself, we all agreed, was actually pretty well done. What it showed is that in asthma a patient’s subjective assessment of how well he’s doing is a poor guide to how well his lungs are actually doing from an objective, functional standpoint. For the most part, the authors came to this conclusion as well, although their hemming and hawing over their results made almost palpable their disappointment that their chosen placebos utterly failed to produce anything resembling an objective response improving lung function as measured by changes (or lack thereof) in FEV1.

In actuality, where most of our criticism landed, and landed hard—deservedly, in my opinion—was on the accompanying editorial, written by Dr. Daniel Moerman, an emeritus professor of anthropology at the University of Michigan-Dearborn. There was a time when I thought that anthropologists might have a lot to tell us about how we practice medicine, and maybe they actually do. Unfortunately, my opinion in this matter has been considerably soured by much of what I’ve read when anthropologists try to dabble in medicine. Recently, I became aware that Moerman appeared on the Clinical Conversations podcast around the time his editorial was published, and, even though the podcast is less than 18 minutes long, Moerman’s appearance in the podcast provides a rich vein of material to mine regarding what, exactly, placebo effects are or are not, not to mention evidence that Dr. Moerman appears to like to make like Humpty-Dumpty in this passage:
(more…)

Posted in: Acupuncture, Basic Science, Clinical Trials, Neuroscience/Mental Health, Science and the Media

Leave a Comment (35) →

Lies, damned lies, and…science-based medicine?

I realize that in the question-and-answer session after my talk at the Lorne Trottier Public Science Symposium a week ago I suggested in response to a man named Leon Maliniak, who monopolized the first part of what was already a too-brief Q&A session by expounding on the supposed genius of Royal Rife, that I would be doing a post about the Rife Machine soon. And so I probably will; such a post is long overdue at this blog, and I’m surprised that no one’s done one after nearly three years. However, as I arrived back home in the Detroit area Tuesday evening, I was greeted by an article that, I believe, requires a timely response. (No, it wasn’t this article, although responding to it might be amusing even though it’s a rant against me based on a post that is two and a half years old.) Rather, this time around, the article is in the most recent issue of The Atlantic and on the surface appears to be yet another indictment of science-based medicine, this time in the form of a hagiography of Greek researcher John Ioannidis. The article, trumpeted by Tara Parker-Pope, comes under the heading of “Brave Thinkers” and is entitled Lies, Damned Lies, and Medical Science. It is being promoted in news stories where the story is spun as indicating that medical science is so flawed that even the cell-phone cancer data can’t be trusted.


Let me mention two things before I delve into the meat of the article. First, these days I’m not nearly as enamored of The Atlantic as I used to be. I was a long-time subscriber (at least 20 years) until last fall, when The Atlantic published an article so egregiously bad on the H1N1 vaccine that our very own Mark Crislip decided to annotate it in his own inimitable fashion. That article was so awful that I decided not to renew my subscription; it is to my shame that I didn’t find the time to write a letter to The Atlantic explaining why. Fortunately, this article isn’t as bad (it’s a mixed bag, actually, making some good points and then undermining some of them by overreaching), although it does lay on the praise for Ioannidis and the attacks on SBM a bit thick. Be that as it may, clearly The Atlantic has developed a penchant for “brave maverick doctors” and using them to cast doubt on science-based medicine. Second, I actually happen to love John Ioannidis’ work, so much so that I’ve written about it at least twice over the last three years, including The life cycle of translational research and Does popularity lead to unreliability in scientific research?, where I introduced the topic using Ioannidis’ work. Indeed, I find nothing at all threatening to me as an advocate of science-based medicine in Ioannidis’ two most famous papers, Contradicted and Initially Stronger Effects in Highly Cited Clinical Research and Why Most Published Research Findings Are False. The conclusions of these papers to me are akin to concluding that water is wet and everybody dies. It is, however, quite good that Ioannidis is there to spell out these difficulties with SBM, because he tries to keep us honest.
(more…)

Posted in: Clinical Trials, Science and Medicine

Leave a Comment (114) →

The continuum of surgical research in science-based medicine

Editor’s note: Three members of the SBM blogging crew had a…very interesting meeting on Friday, one none of us expected, the details of which will be reported later this week–meaning you’d better keep reading this week if you want to find out. (Hint, hint.) However, what that means is that I was away Thursday and Friday; between the trip and the various family gatherings I didn’t have time for one of my usual 4,000 word screeds of fresh material. However, there is something I’ve been meaning to discuss on SBM, and it’s perfect for SBM. Fortunately, I did write something about it elsewhere three years ago. This seems like the perfect time to spiff it up, update it, and republish it. In doing so, I found myself writing far more than I had expected, making it a lot more different from the old post than I had expected, but I guess that’s just me.

In the meantime, the hunt for new bloggers goes on, with some promising results. If we haven’t gotten back to you yet (namely most of you), please be patient. This meeting and the holiday–not to mention my real life job–have interfered with that, too.

The continuum of surgical research in science-based medicine

One of the things about science-based medicine that makes it so fascinating is that it encompasses such a wide variety of modalities that it takes a similarly wide variety of science and scientific techniques to investigate various diseases. Some medical disciplines consist mainly of problems that are relatively straightforward to study. Don’t get me wrong, though. By “straightforward,” I don’t mean that they’re easy, simply that the experimental design of a clinical trial to test a treatment is fairly easily encompassed by the paradigm of randomized clinical trials. Medical oncology is just one example, where new drugs can be tested in randomized, double-blinded trials against or in addition to the standard of care without having to account for the many difficulties that arise from blinding. We’ve discussed such difficulties before, for instance, in the context of constructing adequate placebos for acupuncture trials. Indeed, this topic is critical to the application of science-based medicine to various “complementary and alternative medicine” modalities, which do not as easily lend themselves to randomized double-blind placebo-controlled trials, although I would hasten to point out that, just because it can be very difficult to do such trials is not an excuse for not doing them. The development of various “sham acupuncture” controls, one of which consisted even of just twirling a toothpick gently poked onto the skin, shows that.

One area of medicine where it is difficult to construct randomized controlled trials is surgery. The reasons are multiple. For one thing, it’s virtually impossible to blind the person doing the surgery to what he or she is doing. One way around that would be to have the surgeons who do the operations not be involved with the postoperative care of the patients at all, while the postoperative team doesn’t know which operation the patient actually got. However, most surgeons would consider this not only undesirable, but downright unethical. At least, I would. Another problem comes when the surgeries are sufficiently different that it is impossible to hide from the patient which operation he got. Moreover, surgery itself has a powerful placebo effect, as has been shown time and time again. Even so, surgical trials are very important and produce important results. For instance, I wrote about two trials for vertebral kyphoplasty for osteoporotic fractures, both of which produced negative results showing kyphoplasty to be no better than placebo. Some surgical trials have been critical to defining a science-based approach to how we treat patients, such as trials showing that survival rates are the same in breast cancer treated with lumpectomy and radiation therapy as they are when the treatment is mastectomy. Still, surgery is a set of disciplines where applying science-based medicine is arguably not as straightforward as it is in many specialties. At times, applying science-based medicine to it can be nearly as difficult as it is to do for various CAM modalities, mainly because of the difficulties in blinding. That’s why I’m always fascinated by strategies by which we as surgeons try to make our discipline more science-based.
(more…)

Posted in: Clinical Trials, Science and Medicine, Surgical Procedures

Leave a Comment (15) →