Archive for Clinical Trials

Ethics in human experimentation in science-based medicine

Science-based medicine depends upon human experimentation. Scientists can do the most fantastic translational research in the world, starting with elegant hypotheses tested through in vitro and biochemical experiments and then in animals. They can understand disease mechanisms down to the level of an individual amino acid in a protein or a nucleotide in a DNA molecule. However, without human testing, they will never know whether the end results of all that elegant science actually do what they are intended to do and make real human patients better. They will never know if the fruits of all that labor will actually cure disease. Yet it is in human experimentation that the ethics of medicine most tend to clash with the methods of science. We refer to “science-based medicine” (SBM) as “based” in science, rather than simply being science, largely because medicine can never be pure science. Science has produced amazing medical advances over the last century, but if there is one thing we have learned, it is that, because clinical trials involve living, breathing, fellow human beings, the most scientifically rigorous trial design might not be the most ethical one.

About a week ago, the AP reported that experiments and clinical trials resembling the infamous Tuskegee syphilis study and the less well known, but recently revealed, Guatemala syphilis experiment were far more common than we might like to admit. As I sat through talks about clinical trial results at the Society of Surgical Oncology meeting in San Antonio over the weekend, the revelations of the past week reminded me that the intersection of science and ethics in medicine can raise very tough questions indeed. In fact, in many of the discussions, questions of what could or could not be done ethically came up frequently, such as whether it is ethically acceptable, or even possible, to do certain follow-up trials to famous breast cancer clinical trials. Unfortunately, it was not so long ago that such questions were answered in ways that bring shame on the medical profession.

Posted in: Clinical Trials, Medical Ethics, Pharmaceuticals, Science and the Media


Of SBM and EBM Redux. Part IV, Continued: More Cochrane and a little Bayes

OK, I admit that I pulled a fast one. I never finished the last post as promised, so here it is.

Cochrane Continued

In the last post I alluded to the 2006 Cochrane Laetrile review, the conclusion of which was:

This systematic review has clearly identified the need for randomised or controlled clinical trials assessing the effectiveness of Laetrile or amygdalin for cancer treatment.

I’d previously asserted that this conclusion “stand[s] the rationale for RCTs on its head,” because a rigorous, disconfirming case series had long ago put the matter to rest. Later I reported that Edzard Ernst, one of the Cochrane authors, had changed his mind, writing, “Would I argue for more Laetrile studies? NO.” That in itself is a reason for optimism, but Dr. Ernst is such an exception among “CAM” researchers that it almost seemed not to count.

Until recently, however, I’d only seen the abstract of the Cochrane Laetrile review. Now I’ve read the entire review, and there’s a very pleasant surprise in it (Professor Simon, take notice). In a section labeled “Feedback” is this letter from another Cochrane reviewer, which was apparently added in August of 2006, well before I voiced my own objections:


Posted in: Clinical Trials, Homeopathy, Medical Academia, Science and Medicine


Critique of “Risk of Brain Tumors from Wireless Phone Use”

Following my recent critique here of the book Disconnect by Devra Davis, about the purported dangers of cell phones to health, David Gorski asked me to comment on a recently published “review article” on the same subject. The article, entitled “Risk of Brain Tumors from Wireless Phone Use” by Dubey et al. [1], was published in the Journal of Computer Assisted Tomography. At the outset, the same question occurred to both of us: what is a “review article” about cell phones and brain tumors doing in a highly technical journal dedicated to CT scanning and CT imaging? While we are both still guessing at the answer to this question, we agreed that the article itself is a hodge-podge of irrational analysis.

As you might surmise, Dubey and his Indian co-authors come to the conclusion “that the current standard of exposure to microwave during mobile phone use is not safe for long-term exposure and needs to be revised.” But within the conclusion there is also the following: “There is no credible evidence from the Environmental Health and Safety Office (I presume in India) about the cause of cancer or brain tumors with the use of cell phones. It is illogical to believe that evidence of unusual brain tumors is only because of hundred’s of millions of people using cell phones worldwide.” What?! These are opposite and contradictory statements, and the main body of the article includes many more instances of such inconsistency.


Posted in: Clinical Trials, Epidemiology, Science and Medicine


Ear Infections: To Treat or Not to Treat

Ear infections used to be a devastating problem. In 1932, acute otitis media (AOM) and its suppurative complications accounted for 27% of all pediatric admissions to Bellevue Hospital. Since the introduction of antibiotics, it has become a much less serious problem. For decades it was taken for granted that all children with AOM should be given antibiotics, not only to treat the disease itself but to prevent complications like mastoiditis and meningitis.

In the 1980s, that consensus began to change. We realized that as many as 80% of uncomplicated ear infections resolve without treatment in 3 days. Many infections are caused by viruses that don’t respond to antibiotics. Overuse of antibiotics leads to the emergence of resistant strains of bacteria. Antibiotics cause side effects. A new strategy of watchful waiting was developed.


Posted in: Clinical Trials, Pharmaceuticals


The NCCAM Strategic Plan 2011-2015: The Good, The Bad, and The Ugly

As hard as it is to believe, it’s been nearly a year since Steve Novella, Kimball Atwood, and I were invited to meet with the director of the National Center for Complementary and Alternative Medicine (NCCAM), Dr. Josephine Briggs. Depending upon the day, sometimes it seems like just yesterday; sometimes it seems like ancient history. For more details, read Steve’s account of our visit, but the CliffsNotes version is that we had a pleasant conversation in which we discussed our objections to how NCCAM funds dubious science and advocacy of complementary and alternative medicine (CAM). When we left the NIH campus, our impression was that Dr. Briggs is well-meaning and dedicated to increasing the scientific rigor of NCCAM studies but doesn’t understand the depths of pseudoscience that constitute much of what passes for CAM. We were also somewhat optimistic that we had at least managed to communicate some of our most pressing practical concerns, chief among them the anti-vaccine bent of so much of CAM, along with our hope that NCCAM would at least combat some of that misinformation on its website.

Looking at the NCCAM website, I see no evidence of any move to combat the anti-vaccine tendencies of CAM by posting pro-vaccination pieces or articles refuting common anti-vaccine misinformation. Of all the topics we discussed, the clearest point of agreement, Dr. Briggs included, was that NCCAM cannot be perceived as supporting anti-vaccine viewpoints; yet although it does not explicitly support them, neither does it do much to combat the anti-vaccine views so ingrained in CAM. As far as I’m concerned, I’m with Kimball in asserting that NCCAM’s silence on the matter is in effect tacit approval of anti-vaccine viewpoints. Be that as it may, not long afterward, Dr. Briggs revealed that she had met with homeopaths around the same time she had met with us, suggesting that we were simply brought in so that she could say she had met with “both sides.” Later, she gave a talk to the 25th Anniversary Convention of the American Association of Naturopathic Physicians (AANP), truly a bastion of pseudoscience.

In other words, I couldn’t help but get the sinking feeling that we had been played. Not that we weren’t mildly suspicious when we traveled to Bethesda, but from our perspective we really didn’t have a choice: if we were serious about our mission to promote science-based medicine, Dr. Briggs’ invitation was truly an offer we could not refuse. We had to go. Period. I can’t speak for Steve or Kimball, but I was also excited to go. Never in my wildest dreams had it occurred to me that the director of NCCAM would even notice what we were writing, much less take it seriously enough to invite us out for a visit. I bring all this up because last week NCCAM did something that might provide an indication of whether it has changed, of whether Dr. Briggs has truly embraced the idea that rigorous science should infuse NCCAM and all that it does, let the chips fall where they may. Last week, NCCAM released its five-year strategic plan for 2011 to 2015.

Truly, it’s a case of The Good, The Bad, and The Ugly.

Posted in: Basic Science, Clinical Trials, Politics and Regulation


Of SBM and EBM Redux. Part IV: More Cochrane and a little Bayes

NB: This is a partial posting; I was up all night ‘on-call’ and too tired to continue. I’ll post the rest of the essay later…


This is the fourth and final part of a series-within-a-series* inspired by statistician Steve Simon. Professor Simon had challenged the view, held by several bloggers here at SBM, that Evidence-Based Medicine (EBM) has been mostly inadequate to the task of reaching definitive conclusions about highly implausible medical claims. In Part I, I reiterated a fundamental problem with EBM, reflected in its Levels of Evidence scheme, that although it correctly recognizes basic science and other pre-clinical evidence as insufficient bases for introducing novel treatments into practice, it fails to acknowledge that they are necessary bases. I explained the difference between “plausibility” and “knowing the mechanism.”

I showed, with several examples, that in the EBM lexicon the word “evidence” refers almost exclusively to the results of clinical trials: thus, when faced with equivocal or no clinical trials of some highly implausible claim, EBM practitioners typically declare that there is “not enough evidence” to either accept or reject the claim, and call for more trials—although in many cases there is abundant evidence, other than clinical trials, that conclusively refutes the claim. I rejected Prof. Simon’s assertion that we at SBM want to “give (EBM) a new label,” making the point that we only want it to live up to its current label by considering all the evidence. I doubted Prof. Simon’s contention that “people within EBM (are) working both formally and informally to replace the rigid hierarchy with something that places each research study in context.”
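
The underlying point about prior plausibility is, at bottom, Bayesian: a single nominally “positive” trial should barely move our estimate that a wildly implausible treatment works. As a rough numerical sketch of that reasoning (the prior probabilities, statistical power, and false-positive rate below are illustrative assumptions, not figures from any particular trial):

```python
# Illustrative Bayesian calculation: how much should a single "positive"
# clinical trial (p < 0.05) shift belief that a treatment works, given its
# prior plausibility? All numbers are assumptions chosen for illustration.

def posterior_probability(prior, power=0.8, alpha=0.05):
    """P(treatment works | positive trial), via Bayes' theorem.

    power : P(positive trial | treatment works)
    alpha : P(positive trial | treatment does not work), the false-positive rate
    """
    true_positive = power * prior
    false_positive = alpha * (1 - prior)
    return true_positive / (true_positive + false_positive)

# A plausible new drug vs. a moderately implausible claim vs. a claim that
# contradicts well-established physics and chemistry.
for prior in (0.5, 0.1, 0.001):
    post = posterior_probability(prior)
    print(f"prior = {prior:<6}  posterior after one positive trial = {post:.3f}")
```

With a prior of 0.5 the posterior climbs to about 0.94, but with a prior of 0.001 it rises only to about 0.02, which is the quantitative sense in which extraordinary claims require extraordinary evidence.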

In Part II I responded to the widely held assertion, also held by Prof. Simon, that there is “societal value in testing (highly implausible) therapies that are in wide use.” I made it clear that I don’t oppose simple tests of basic claims, such as the Emily Rosa experiment, but I noted that EBM reviewers, including those employed by the Cochrane Collaboration, typically ignore such tests. I wrote that I oppose large efficacy trials and public funding of such trials. I argued that the popularity gambit has resulted in human subjects being exposed to dangerous and unethical trials, and I quoted language from ethics treatises specifically contradicting the assertion that popularity justifies such trials. Finally, I showed that the alleged popularity of most “CAM” methods—as irrelevant as it may be to the question of human studies ethics—has been greatly exaggerated.


Posted in: Clinical Trials, Energy Medicine, Faith Healing & Spirituality, Medical Academia, Medical Ethics, Science and Medicine


Rambling Musings on Using the Medical Literature

For those who are new to the blog, I am nobody from nowhere. I am a clinician, taking care of patients with infectious diseases at several hospitals in the Portland area. I am not part of an academic center (although we are affiliated with OHSU and have a medicine residency program). I have not done any research since I was a fellow, 20 years ago. I was an excellent example of the Peter Principle; there was no bench experiment that I could not screw up.

My principal weapon in patient care is the medical literature, accessed throughout the day thanks to Google and PubMed. The medical literature is enormous. There are more than 21,000,000 articles referenced in PubMed, over a million of them returned by the search term ‘infection’, with roughly 45,000 of those published last year alone.
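
For anyone who wants to poke at these numbers themselves, NCBI’s Entrez E-utilities expose the same search counts programmatically. Here is a minimal sketch using the esearch endpoint (the query terms are merely examples, and the counts change daily):

```python
# Minimal sketch: counting PubMed records with NCBI's E-utilities (esearch).
# Query terms below are only examples; counts change as PubMed grows.
import json
import urllib.parse
import urllib.request

def pubmed_count(term: str) -> int:
    """Return the number of PubMed records matching a search term."""
    params = urllib.parse.urlencode({
        "db": "pubmed",
        "term": term,
        "retmode": "json",
        "retmax": 0,  # we only need the count, not the list of record IDs
    })
    url = f"https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?{params}"
    with urllib.request.urlopen(url) as response:
        data = json.load(response)
    return int(data["esearchresult"]["count"])

print("all of PubMed:      ", pubmed_count("all[sb]"))
print("'infection':        ", pubmed_count("infection"))
print("'infection' in 2010:", pubmed_count("infection AND 2010[pdat]"))
```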

I probably read as much of the ID literature as any specialist. Preparing for my Puscast podcast, I skim several hundred titles every two weeks, usually select around 80 references of interest, and read most of them with varying degrees of depth. Yet I am still sipping at a fire hose of information.

The old definition of a specialist is someone who knows more and more about less and less until they know everything about nothing. I often feel I know less and less about more and more until someday I will know nothing about everything. Yet I am considered knowledgeable by the American Board of Internal Medicine (ABIM), which wasted huge amounts of my time and a serious chunk of my cash before declaring, after years of testing, that I am recertified in my specialty. I am still Board Certified, but the nearly pointless exercise has left me certified bored. But I can rant for hours on Bored Certification and how out of touch with the practice of medicine the ABIM is.


Posted in: Clinical Trials, Science and Medicine


Molecular breast imaging (MBI): A promising technology oversold in a TED Talk?

Occasionally, there are topics that our readers want — nay, demand — that I cover. This next topic, it turns out, is one of them. It’s a link to a TED Talk. I’m guessing that most of our readers have viewed (or at least heard of) TED Talks. Typically, they are 20-minute talks, with few or no slides, by various experts and thought leaders. Many of them are quite good, although as the TED phenomenon has grown I’ve noticed that, not unexpectedly, the quality of TED Talks has become much more uneven than it once was. Be that as it may, beginning shortly after it was posted, readers of both this blog and my not-so-super-secret other blog started peppering me with links to a recent TED Talk by Dr. Deborah Rhodes at the Mayo Clinic entitled A tool that finds 3x more breast tumors, and why it’s not available to you.

At first, I resisted.

After all, I’ve written about the issues of screening mammography, the USPSTF guideline changes (here, too), the early detection of cancer (including lead time and length time bias, as well as the Will Rogers effect), and a variety of other topics related to the early detection of breast cancer, such as overdiagnosis and overtreatment. Moreover, to put it bluntly, there really isn’t anything radically new in Dr. Rhodes’ talk, at least not to anyone who’s been in the field of breast cancer for a while. Certainly, there’s no new conceptual breakthrough in breast imaging and screening described. As I will discuss in more depth later in this post, there’s an interesting application of newer, smaller, and more sensitive detectors with a much better spatial resolution. It’s cool technology applied to an old problem in breast cancer, but something radical, new, or ground-breaking? Not so much. What Dr. Rhodes describes in her talk is the sort of device that, when I read about it in a medical journal, produces a reaction along the lines of, “Nice technology. Not ready for prime time. I hope it works out for them, though. Could be good.” So it was with molecular breast imaging (MBI), which is the topic of Dr. Rhodes’ talk. So I continued to resist for about two or three weeks.

Then our very own Harriet Hall sent me the link. I cannot resist Harriet. When she suggests that perhaps I should blog about a topic, it’s rare that my response is anything other than, “Yes, ma’am. How soon would you like that post and how many words?” I keed, of course, but only just. The best I could come up with was a wishy-washy “But this isn’t really anything all that new,” which is true enough, but the way Dr. Rhodes tried to sell the audience on her technology brings up a lot of issues important to our audience. I also thought it was important to put this technology in perspective. So here I go. First, I’ll start by describing what really set my teeth on edge about Dr. Rhodes’ talk. Then I’ll go to the primary literature (namely, her brand spankin’ new article in Radiology describing the technology) and discuss the technique itself.

Posted in: Cancer, Clinical Trials, Diagnostic tests & procedures, Medical devices, Science and the Media


Of SBM and EBM Redux. Part III: Parapsychology is the Role Model for “CAM” Research

This is the third post in this series*; please see Part II for a review. Part II offered several arguments against the assertion that it is a good idea to perform efficacy trials of medical claims that have been refuted by basic science or by other, pre-trial evidence. This post will add to those arguments, continuing to identify the inadequacies of the tools of Evidence-Based Medicine (EBM) as applied to such claims.

Prof. Simon Replies

Prior to the posting of Part II, statistician Steve Simon, whose views had been the impetus for this series, posted another article on his blog responding to Part I. He agreed with some of what both Dr. Gorski and I had written:

The blog post by Dr. Atwood points out a critical distinction between “biologically implausible” and “no known mechanism of action” and I must concede this point. There are certain therapies in CAM that take the claim of biological plausibility to an extreme. It’s not as if those therapies are just implausible. It is that those therapies must posit a mechanism that “would necessarily violate scientific principles that rest on far more solid ground than any number of equivocal, bias-and-error-prone clinical trials could hope to overturn.” Examples of such therapies are homeopathy, energy medicine, chiropractic subluxations, craniosacral rhythms, and coffee enemas.

The Science Based Medicine site would argue that randomized trials for these therapies are never justified. And it bothers Dr. Atwood when a systematic review from the Cochrane Collaboration states that no conclusions can be drawn about homeopathy as a treatment for asthma because of a lack of evidence from well conducted clinical trials. There’s plenty of evidence from basic physics and chemistry that can allow you to draw strong conclusions about whether homeopathy is an effective treatment for asthma. So the Cochrane Collaboration is ignoring this evidence, and worse still, is implicitly (and sometimes explicitly) calling for more research in this area.

On the other hand:

There are a host of issues worth discussing here, but let me limit myself for now to one very basic issue. Is any research justified for a therapy like homeopathy when basic physics and chemistry will provide more than enough evidence by itself to suggest that such research is futile(?) Worse still, the randomized trial is subject to numerous biases that can lead to erroneous conclusions.

I disagree for a variety of reasons.


Posted in: Acupuncture, Clinical Trials, Energy Medicine, Faith Healing & Spirituality, Herbs & Supplements, Homeopathy, Medical Academia, Medical Ethics, Science and Medicine


Placebo effects without deception? Well, not exactly…

In discussing “alternative” medicine it’s impossible not to discuss, at least briefly, placebo effects. Indeed, one of the most common complaints we at SBM voice about clinical trials of alternative medicine is the lack of adequate controls — meaning adequate controls for placebo and nonspecific effects. Just type “acupuncture” in the search box in the upper left hand corner of the blog masthead, and you’ll pull up a number of discussions of acupuncture clinical trials that SBM bloggers have written over the last three years. If you check some of these posts, you’ll find that in nearly every case we spend considerable time and effort discussing whether the placebo or sham control used was adequate, noting that, the better the sham controls, the less likely acupuncture studies are to have a positive result.

Some of the less clueless advocates of “complementary and alternative medicine” (CAM) seem to realize that much of what they do relies on placebo effects. As a result, they tend to argue that what they do is useful and good because it is “harnessing the placebo effect” for therapeutic purposes. One problem that advocates of SBM (like those of us here who have taken an interest in this topic) have with this argument is that it has always been assumed that a good placebo requires, on some level, at least some deception of the patient: saying or implying that he is receiving an active treatment or medicine of some kind. This, we have argued, is a major ethical problem with using placebos in patients, and advocates of placebo medicine appear to agree, because they frequently argue that placebo effects can be harnessed without deception. Indeed, just last week there was an example of this argument plastered all over multiple news outlets and blogs in the form of stories and posts with headlines and titles like:

All of these articles and blog posts discuss a new study in PLoS ONE that purports to have found that placebo effects can be elicited in irritable bowel syndrome (IBS) without deception, and, except for one, every one of them buys completely into that very thesis. Here is an example, taken from the Reuters story about the study:

Placebos can help patients feel better, even if they are fully aware they are taking a sugar pill, researchers reported on Wednesday on an unusual experiment aimed to better understand the “placebo effect.”

Nearly 60 percent of patients with irritable bowel syndrome reported they felt better after knowingly taking placebos twice a day, compared to 35 percent of patients who did not get any new treatment, they report in the Public Library of Science journal PLoS ONE.

“Not only did we make it absolutely clear that these pills had no active ingredient and were made from inert substances, but we actually had ‘placebo’ printed on the bottle,” Ted Kaptchuk of Harvard Medical School and Beth Israel Deaconess Medical Center in Boston, who led the study, said in a statement.
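
For readers curious about how such a difference in response rates (nearly 60% vs. 35%) is typically assessed, here is a minimal sketch of a two-by-two comparison. The group sizes are hypothetical, chosen only to illustrate the calculation; they are not the study’s actual enrollment figures.

```python
# Minimal sketch: comparing responder proportions in a 2x2 table with a
# chi-square test. Group sizes are hypothetical, chosen only to illustrate
# the arithmetic; the ~59% and 35% response rates echo the Reuters story.
from scipy.stats import chi2_contingency

n_placebo, n_control = 40, 40                  # hypothetical group sizes
resp_placebo = round(0.59 * n_placebo)         # 24 "felt better" on open-label placebo
resp_control = round(0.35 * n_control)         # 14 "felt better" with no new treatment

table = [
    [resp_placebo, n_placebo - resp_placebo],  # placebo group: responders, non-responders
    [resp_control, n_control - resp_control],  # no-treatment group
]
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p_value:.3f}")

# Note: a statistically significant difference in self-reported symptom relief
# is not the same thing as an objective improvement in the underlying disease.
```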


Posted in: Clinical Trials, Neuroscience/Mental Health, Pharmaceuticals, Science and the Media
