Archive for Clinical Trials

Vertebroplasty for compression fractures due to osteoporosis: Placebo medicine

If there’s one thing we emphasize here on the Science-Based Medicine blog, it’s that the best medical care is based on science. In other words, we are far more for science-based medicine than we are against so-called “complementary and alternative medicine” (CAM). My perspective on the issue is that treatments not based on science should either be subjected to scientific scrutiny, if they have sufficient prior plausibility or strong clinical data suggesting efficacy, or abandoned, if they do not.

Unfortunately, even though the proportion of medical therapies not based on science is far lower than CAM advocates would like you to believe, there are still more treatments in “conventional” medicine that are insufficiently based on science, or that have never been validated by proper randomized clinical trials, than we as practitioners of science-based medicine would like. For some treatments this is true because there are simply too few patients with a given disease; i.e., the disease is rare. Indeed, for some diseases there will never be a definitive trial because they are just too uncommon. For others, it’s because of what I like to call medical fads, whereby a treatment appears effective anecdotally or in small uncontrolled trials and, due to the bandwagon effect, becomes widely adopted. Sometimes there is a financial incentive for such treatments to persist; sometimes it’s habit. Indeed, there’s an old saying that, for a treatment truly to disappear, the older generation of physicians has to retire or die off.

That is why I consider it worthwhile to write about a treatment that appears to be on the way to disappearing. At least, I hope that’s what’s going on. It’s also a cautionary tale about how the very same sorts of factors, such as placebo effects, reliance on anecdotal evidence, and regression to the mean, can bedevil those of us dedicated to SBM just as much as they do the investigation of CAM. It should serve as a warning to those of us who might feel a bit too smug about just how dedicated to SBM modern medicine is. Given that the technique in question is invasive (although not surgical), I also feel that it is my duty as the resident surgeon on SBM to tackle this topic. On the other hand, this case also demonstrates how SBM is, like the science upon which it is based, self-correcting. The question is: What will physicians do with the most recent information from very recently reported clinical trials that clearly show a very favored and lucrative treatment does not work better than a placebo?

Here’s the story that illustrates these issues, fresh from the New York Times this week:

Posted in: Clinical Trials, Science and Medicine, Surgical Procedures


Are one in three breast cancers really overdiagnosed and overtreated?

Screening for disease is a real pain. I was reminded of this by the publication of a study in BMJ the very day of the Science-Based Medicine Conference a week and a half ago. Unfortunately, between The Amaz!ng Meeting and other activities, I was too busy to give this study the attention it deserved last Monday. Given the media coverage of the study, which in essence tried to paint mammography screening for breast cancer as either useless or doing more harm than good, I thought it imperative still to write about it. Better late than never, and I was further prodded by an article published late last week in the New York Times about screening for cancer.

If there’s one aspect of medicine that causes confusion among the public and even among physicians, I’d be hard-pressed to come up with one more contentious than screening for disease, be it cancer, heart disease, or whatever. The reason is that any screening test is by definition looking for disease in an asymptomatic population, which is very different from looking for the cause of a patient’s symptoms. In the latter case, the patient is already troubled by something. There may or may not be a disease or syndrome responsible for the symptoms, but the very existence of the symptoms clues the physician in that there may be something going on that requires treatment. The doctor can then narrow down the range of possibilities for what may be causing the patient’s symptoms by taking a careful history and performing a physical examination (which will by themselves most often lead to the diagnosis). Diagnostic tests, be they blood tests, X-rays, or other tests, then tend to be confirmatory of the suspected diagnosis rather than the main evidence supporting it.

Posted in: Cancer, Clinical Trials, Diagnostic tests & procedures, Public Health, Science and Medicine, Science and the Media


The clinician-scientist: Wearing two hats

About a week ago, Tim Kreider wrote an excellent post about the differences between medical school training and scientific training. As the only other denizen of Science-Based Medicine who has experienced both worlds, that of a PhD and that of an MD, and as one who is two decades further along the path than Tim (give or take a couple of years), I found that his musings reminded me of similar musings I’ve had over the years, as well as emphasizing yet again something I’ve said time and time again: Most physicians are not scientists. They are not trained like scientists; they are trained to apply scientific knowledge to the care of their patients. That’s what science-based medicine is, after all: applying science to the care of patients. Not dogma. Not tradition. Not knowledge of antiquity. Science.

Leave dogma, tradition, and “ancient knowledge” to practitioners of “alternative medicine.” That’s where they all belong. Whether you want to call it “alternative medicine,” “complementary and alternative medicine” (CAM), or “integrative medicine” (IM), it rarely changes and almost never abandons therapies that science finds to be no better than placebo, whereas scientific medicine is, as it should be, ever changing, ever improving. I’ll grant you that the process is often messy. There are often false starts and blind alleys, and physicians are all too often reluctant to change their practices in response to the latest scientific findings. We sometimes even joke that it takes the supplanting of one generation of physicians by the next to get rid of some practices. But change does come when the science and evidence are there. For example, in response to evidence that a bacterium, H. pylori, causes duodenal ulcers, medical practice changed in a mere decade, which is about as fast as anyone could do the science and clinical trials to show the validity of the new concept. CAM practitioners like to hold up Barry Marshall and Robin Warren, the researchers who discovered that H. pylori causes most duodenal ulcers, as an example of how researchers with radical ideas are ostracized, but that story is largely a myth, as our very own Kim Atwood showed.

The application of science to medicine is a difficult thing. It takes basic scientists and clinicians, but the two of them exist in different worlds. Or so it often seems. That’s why some individuals seek to straddle both worlds. Tim is one such person. So am I. Unfortunately, most people don’t understand what we do very well. We wear two hats. In my case, I’m a surgeon, and I’m a scientist. In Tim’s case, he’s a scientist and a physician, but he doesn’t yet know what kind of physician he will end up being. At the risk of sounding somewhat arrogant, I believe that we, and others like us, represent an important element in bridging the gap between basic science and clinical science, in, essentially, building a more science-based medicine.

Posted in: Basic Science, Clinical Trials, Medical Academia


NIH Awards $30 Million Research Dollars To Convicted Felons: Cliff’s Notes Version

In case you’re coming late to this discussion (or have ADD), I’ve summarized Dr. Kimball Atwood’s terrific analysis of the ongoing clinical trial (TACT trial) in which convicted felons were awarded $30 million by the NIH.


In one of the most unethical clinical trial debacles of our time, the NIH approved a research study (called the TACT Trial – Trial to Assess Chelation Therapy – a supposed treatment for arteriosclerosis) in which the treatment had no evidence for potential benefit, and clear evidence of potential harm – and even the risk of death. Amazingly, the researchers neglected to mention this risk in their informed consent document. The NIH awarded $30 million of our tax dollars to ~100 researchers to enroll 2000 patients in this risky study (ongoing since 2003). Even more astounding is the fact that several of the researchers have been disciplined for substandard practices by state medical boards; several have been involved in insurance fraud; at least 3 are convicted felons.

But wait, there’s more.

The treatment under investigation, IV injection of Na2EDTA, is specifically contraindicated for “generalized arteriosclerosis” by the FDA. There have been over 30 reported cases of accidental death caused by the administration of this drug – and prior to the TACT, 4 RCTs and several substudies of chelation for either CAD or PVD, involving 285 subjects, had been reported. None found chelation superior to placebo.

So, Why Was This Study Approved?

The NIH and the TACT principal investigator (PI) argued that there was a substantial demand for chelation, creating a “public health imperative” to perform a large trial as soon as possible. In reality, the number of people using the therapy was only a small fraction of what the PI reported.

It’s hard to know exactly what happened “behind the scenes” to pressure the NIH to go forward with the study; however, a few things are clear: 1) the National Heart, Lung, and Blood Institute (NHLBI) initially declined to approve the study based on lack of scientific merit; 2) Congressman Dan Burton, at least one of his staffers (Beth Clay), and a lobbyist (Bill Chatfield) worked tirelessly to get the study approved through a different institute – NCCAM; 3) some of the evidence used to support the trial was falsified (the RFA cited several articles by Edward McDonagh, the chelationist who had previously admitted in a court of law to having falsified his data); 4) the NIH Special Emphasis Panel that approved the TACT protocol included L. Terry Chappell, whom the protocol had named as a participant in the TACT.

All evidence seems to suggest that political meddling managed to trump science in this case – putting the lives of 2000 study subjects at risk, without any likely benefit to them or medicine.

A formal analysis of the sordid history and ethical violations of the TACT trial was published by the Medscape Journal of Medicine on May 13, 2008. Atwood et al. provide a rigorous, 9-part commentary with 326 references in review of the case. Congressman Burton’s staffer, Beth Clay, published what is essentially a character assassination of Dr. Atwood in response.

The NIH Writes TACT Investigators a Strongly Worded Letter

On May 27, 2009 the Office for Human Research Protections Committee sent a letter to the investigators of TACT, stating that they found, “multiple instances of substandard practices, insurance fraud, and felony activity on the part of the investigators.” The letter describes a list of irregularities and recommends various changes to the research protocol.

It is almost unheard of for a letter from the NIH to state that research study investigators are guilty of fraud and felony activity – but what I don’t understand is why they haven’t shut down the study. Perhaps this is their first step towards that goal? Let’s hope so.


The TACT trial has subjected 2000 unwary subjects, and $30 million of public money, to an unethical trial of a dubious treatment that, had it been accurately represented and judged by the usual criteria, would certainly have been disqualified. Political meddling in health and medical affairs is dangerous business and must be opposed as strongly as possible. Legislators like Senator Tom Harkin and Congressman Dan Burton should not be allowed to push their political agendas and requests for publicly funded pseudoscience on the NIH. I can only hope that the new NIH director will have the courage to fend off demands for unethical trials from political appointees.

Posted in: Clinical Trials, Health Fraud, Medical Ethics, Politics and Regulation, Science and Medicine


Healing Touch and Coronary Bypass

A study published in Alternative Therapies in Health and Medicine is being cited as evidence for the efficacy of healing touch (HT). It enrolled 237 subjects who were scheduled for coronary bypass, randomized them to receive HT, a visitor, or no treatment, and found that HT was associated with a greater decrease in anxiety and shorter hospital stays.

This study is a good example of what I have called “Tooth Fairy Science.” You can study how much money the Tooth Fairy leaves in different situations (first vs. last tooth, age of child, tooth in baggie vs. tooth wrapped in Kleenex, etc.), and your results can be replicable and statistically significant, and you can think you have learned something about the Tooth Fairy; but your results don’t mean what you think they do because you didn’t stop to find out whether the Tooth Fairy was real or whether some more mundane explanation (parents) might account for the phenomenon.

Posted in: Clinical Trials, Energy Medicine


Does popularity lead to unreliability in scientific research?

One of the major themes here on the Science-Based Medicine (SBM) blog has been one major shortcoming of the more commonly used evidence-based medicine paradigm (EBM) that has been in ascendance as the preferred method of evaluating clinical evidence. Specifically, as Kim Atwood (1, 2, 3, 4, 5, 6, 7, 8) has pointed out before, EBM values clinical studies above all else and devalues plausibility based on well-established basic science as one of the “lower” forms of evidence. While this sounds quite reasonable on the surface (after all, what we as physicians really want to know is whether a treatment works better than a placebo or not), it ignores one very important problem with clinical trials, namely that prior scientific probability matters. Indeed, four years ago, John Ioannidis made a bit of a splash with a paper published in JAMA entitled Contradicted and Initially Stronger Effects in Highly Cited Clinical Research and, more provocatively, one in PLoS Medicine entitled Why Most Published Research Findings Are False. In his study, he examined a panel of highly cited clinical trials and determined that the results of many of them were not replicated and validated in subsequent studies. His conclusion was that a significant proportion, perhaps most, of the results of clinical trials turn out not to be true after further replication and that the likelihood of such incorrect results increases with increasing improbability of the hypothesis being tested.

Not surprisingly, CAM advocates piled onto these studies as “evidence” that clinical research is hopelessly flawed and biased, but that is not the correct interpretation. Basically, as Steve Novella and, especially, Alex Tabarrok pointed out, prior probability is critical. What Ioannidis’ research shows is that clinical trials examining highly improbable hypotheses are far more likely to produce false positive results than clinical trials examining hypotheses with a stronger basis in science. Of course, estimating prior probability from basic science can be tricky. After all, if we could tell beforehand which modalities would work and which didn’t, we wouldn’t need to do clinical trials, but there are modalities for which we can estimate the prior probability as being very close to zero. Not surprisingly (at least to readers of this blog), these modalities tend to be “alternative medicine” modalities. Indeed, the purest test of this phenomenon is homeopathy, which is nothing more than pure placebo, mainly because it is water. Of course, another principle that applies to clinical trials is that smaller, more preliminary studies often yield seemingly positive results that fail to hold up with repetition in larger, more rigorously designed randomized, double-blind clinical trials.
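The arithmetic behind that point is worth making concrete. In the sketch below, the 5% significance threshold and 80% power are conventional round numbers chosen for illustration, not figures taken from Ioannidis’ papers:

```python
def prob_true_given_positive(prior, alpha=0.05, power=0.80):
    """Probability that a statistically significant trial result reflects
    a real effect, given the prior probability the hypothesis is correct."""
    true_pos = prior * power           # real effects correctly detected
    false_pos = (1 - prior) * alpha    # null effects crossing p < alpha
    return true_pos / (true_pos + false_pos)

# A plausible hypothesis (prior ~50%) vs. a homeopathy-like one (prior ~1%)
print(round(prob_true_given_positive(0.50), 2))  # -> 0.94
print(round(prob_true_given_positive(0.01), 2))  # -> 0.14
```

With a 1% prior, roughly six out of seven “positive” trials are false positives, which is exactly why positive trials of highly implausible modalities deserve skepticism.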

Last week, a paper was published in PLoS ONE by Thomas Pfeiffer at Harvard University and Robert Hoffmann at MIT that brings up another factor that may affect the reliability of research. Oddly enough, it is somewhat counterintuitive. Specifically, Pfeiffer and Hoffmann’s study was entitled Large-Scale Assessment of the Effect of Popularity on the Reliability of Research. In other words, the hypothesis being tested is whether the reliability of findings published in the scientific literature decreases with the popularity of a research field. Although this phenomenon has been hypothesized based on theoretical reasoning, Pfeiffer and Hoffmann claim to present the first empirical evidence to support it.

Posted in: Basic Science, Clinical Trials, Science and Medicine


Screening Tests – Cumulative Incidence of False Positives

It’s easy to think of medical tests as black and white. If the test is positive, you have the disease; if it’s negative, you don’t. Even good clinicians sometimes fall into that trap. In reality, a test result only shifts the probability of disease: depending on the pre-test probability, a positive result increases that probability by a variable amount. An example: if the probability that a patient has a pulmonary embolus (based on symptoms and physical findings) is 10% and you do a D-dimer test, a positive result raises the probability of PE to 17% and a negative result lowers it to 0.2%.
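In Bayesian terms, a test result updates the pre-test odds by the test’s likelihood ratio. A minimal sketch of that arithmetic (the likelihood ratios below are back-calculated from the 10% → 17% and 10% → 0.2% figures above, not published D-dimer test characteristics):

```python
def post_test_prob(pre_test_prob, likelihood_ratio):
    """Convert pre-test probability to post-test probability via odds."""
    pre_odds = pre_test_prob / (1 - pre_test_prob)
    post_odds = pre_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# D-dimer for pulmonary embolus, starting from a 10% pre-test probability
print(round(post_test_prob(0.10, 1.84), 2))   # positive test -> 0.17
print(round(post_test_prob(0.10, 0.018), 3))  # negative test -> 0.002
```

The same positive test applied to a patient with a higher pre-test probability yields a much higher post-test probability, which is why the clinical context matters as much as the result.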

Even something as simple as a throat culture for strep throat can be misleading. It’s possible to have a positive culture because you happen to be an asymptomatic strep carrier, while your current symptoms of fever and sore throat are actually due to a virus. Not to mention all the things that might have gone wrong in the lab: a mix-up of specimens, contamination, inaccurate recording…

Mammography is widely used to screen for breast cancer. Most patients and even some doctors think that if you have a positive mammogram you almost certainly have breast cancer. Not true. A positive result actually means the patient has about a 10% chance of cancer. 9 out of 10 positives are false positives.
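Working the numbers as natural frequencies shows where that roughly 10% figure comes from. The inputs below (about 1% prevalence of cancer among women screened, 90% sensitivity, 91% specificity) are illustrative round numbers, not figures from any particular study:

```python
def screening_counts(n, prevalence, sensitivity, specificity):
    """Break a screened population into true and false positives."""
    with_disease = n * prevalence
    without_disease = n - with_disease
    true_pos = with_disease * sensitivity           # cancers detected
    false_pos = without_disease * (1 - specificity)  # healthy women recalled
    ppv = true_pos / (true_pos + false_pos)          # positive predictive value
    return true_pos, false_pos, ppv

tp, fp, ppv = screening_counts(10_000, 0.01, 0.90, 0.91)
print(round(tp), round(fp), round(ppv, 2))  # -> 90 891 0.09
```

Out of 10,000 women screened under these assumptions, 981 test positive but only 90 actually have cancer: about 9 out of 10 positives are false.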

But women don’t just get one mammogram. They get them every year or two. After 3 mammograms, 18% of women will have had a false positive. After ten exams, the rate rises to 49.1%. In a study of 2400 women who had an average of 4 mammograms over a 10 year period, the false positive tests led to 870 outpatient appointments, 539 diagnostic mammograms, 186 ultrasound examinations, 188 biopsies, and 1 hospitalization. There are also concerns about changes in behavior and psychological wellbeing following false positives.
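Those cumulative figures follow from simple compounding. A minimal sketch, assuming independent screening rounds with a constant per-exam false positive rate (the 6.5% used here is an illustrative value chosen because it reproduces both quoted numbers, not a rate taken from the study):

```python
def cumulative_false_positive(per_test_rate, n_tests):
    """Chance of at least one false positive over n independent tests."""
    return 1 - (1 - per_test_rate) ** n_tests

print(round(cumulative_false_positive(0.065, 3), 2))   # -> 0.18
print(round(cumulative_false_positive(0.065, 10), 2))  # -> 0.49
```

Even a test that is wrong only a few percent of the time will, over a decade of annual screening, mislabel nearly half of the healthy people taking it.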

Until recently, no one had looked at the cumulative incidence of false positives from other cancer screening tests. A new study in the Annals of Family Medicine has done just that.

Posted in: Clinical Trials, Diagnostic tests & procedures


Tactless About TACT: Critiques Without Substance Should Be Abandoned

In May 2008, the article “Why the NIH Trial to Assess Chelation Therapy (TACT) Should Be Abandoned” was published online in the Medscape Journal of Medicine. The authors included two of our own SBM bloggers, Kimball Atwood and Wallace Sampson, along with Elizabeth Woeckner and Robert Baratz. It showed that the existing evidence on treating heart disease with IV chelation did not justify further study, and that the TACT trial was questionable on several ethical points. Their ethical concerns were taken seriously enough that enrollment in the trial was put on hold pending an investigation. It has now been re-opened after a few band-aids were applied to the ethical concerns. The scientific concerns were never addressed.

I have seen many critiques of the Atwood study, and not a single one has offered any cogent criticism of its factual content or reasoning. Most of them could have been written by someone who had not bothered to read beyond the title. Their arguments can be boiled down to a few puerile points that can be further simplified to:

(1) I believe the testimonial evidence that chelation works.
(2) Atwood and his co-authors are bad guys.

Now Beth Clay has chimed in with an article entitled “Study of Chelation Therapy Should Not Be Abandoned.” I found it truly painful to read, but even the worst has some value as a bad example. Clay’s article could be used for a game of “Count the Errors.” I will point out some of them below.

Posted in: Clinical Trials, Politics and Regulation, Science and Medicine


How do scientists become cranks and doctors quacks?

As a physician and scientist who’s dedicated his life to the application of science to the development of better medical treatments, I’ve often wondered how formerly admired scientists and physicians fall into pseudoscience or even degenerate into out-and-out cranks. Examples are numerous and depressing to contemplate. For example, there’s Linus Pauling, a highly respected chemist and Nobel Laureate, who in his later years became convinced that high dose vitamin C could cure cancer. Indeed, Pauling’s belief that high dose vitamin C could cure the common cold and cancer fueled the development of a whole new form of quackery known as “orthomolecular medicine,” whose entire philosophy seems to be based on the concept that if some vitamins are good, more must be better. In essence, “orthomolecular medicine” is a parody of nutritional science; indeed, its advocates can take credit for the way some strains of “complementary and alternative medicine” (CAM) so frequently advocate the ingestion of huge amounts of dietary “supplements.” I could even go further and say that orthomolecular medicine is clearly a major part of the “intellectual” (and I do use that term loosely) underpinning of the various biomedical treatments for autism that Jenny McCarthy and Generation Rescue advocate.

There are other examples as well, all just as depressing to contemplate. Consider Peter Duesberg, a brilliant virologist who in the 1980s was widely believed to be on track for a Nobel Prize; that is, until he became fixated on the idea that HIV does not cause AIDS. True, lately he’s been trying to resurrect his scientific reputation with his interesting and possibly even promising chromosomal aneuploidy hypothesis of cancer, but, alas, true to form, he’s been doing it by acting like a crank. Specifically, he sees his hypothesis as The One True Cause of Cancer and disparages conventional thinking as having been so very, very wrong all these years (with his being, of course, so very, very brilliant that he saw what no one else could see). Then there are people like Dr. Lorraine Day, who was a respected academic orthopedic surgeon in the 1980s. In the late 1980s, she started to flirt with AIDS pseudoscience through a scare campaign about catching AIDS from aerosolized blood. Of course, given the mystery and fear surrounding HIV in the early years of the epidemic, such a fear, although overblown, was not so far out of the mainstream as to merit the appellation crank. However, after being diagnosed with breast cancer, Dr. Day unfortunately degenerated rapidly into a purveyor of rank pseudoscience, as well as a New World Order conspiracy theorist, religious loon, and Holocaust denier. And let’s not forget Mark Geier who, although not a distinguished scientist, did a real fellowship at the NIH before his conversion to antivaccinationism and appeared to be on track to a respectable, maybe even impressive, career as an academic physician. Now he’s doing “research” in his basement, injecting autistic children with a powerful anti-sex-hormone drug and abusing epidemiology. There are innumerable other examples.

Posted in: Clinical Trials, Health Fraud, Science and Medicine


Homeocracy IV

In the three prior posts of this series I tried to analyze some of the defects in the randomized clinical trials (RCTs) of homeopathic remedies for childhood diarrhea. The first entry showed that the methods of the first two RCTs (done in Nicaragua) could not produce a meaningful result because of the way the trials were set up. The second entry showed that the results obtained in the first two trials were clinically meaningless even if assumed to have resulted from more legitimate methods. The same applied to the third trial, in Nepal, analyzed in the third entry.

This entry will suggest that the authors’ fourth paper (Jacobs J, Jonas WB, Jimenez-Perez M, Crothers D. Homeopathy for childhood diarrhea: combined results and metaanalysis from three randomized, controlled clinical trials. Pediatr Infect Dis J 2005;22:229-234), a meta-analysis (MA) of the data from the three RCTs, reached conclusions just as meaningless as those of the three trials.

The MA authors – several of the same workers from the three RCTs – begin by agreeing that the data from the RCTs, taken individually, were of borderline significance:

In our previous three studies, we evaluated the use of individualized homeopathic treatment of childhood diarrhea … The results of the two larger studies (n = 81, n = 116) were just at or near level of statistical significance. Because all three studies followed the same basic study design, […] we analyzed the combined data from these three studies to obtain greater statistical power. In addition we conducted a meta-analysis of effect-size difference […] to look for consistency of effects.

MAs and systematic reviews (SRs) are the two consensus methods for summarizing data from multiple individual studies. The inclusion and search methods of RCTs for SRs and MAs are similar, but the objectives of the two are a bit different, as are the forms of the reports. In SRs, the results are summarized in narrative form, whereas in MAs the data are treated mathematically and the results are defined in statistical terms. Thus authors of SRs are freer to speculate on the degree of confidence that a method is effective, based on what is shown by the numbers of positive and negative RCTs collected. Authors of MAs usually limit their comments to what the mathematical formulation of the summarized data shows.
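To illustrate what “treated mathematically” typically means, here is a generic fixed-effect (inverse-variance) pooling sketch. The numbers are toy values, not the childhood diarrhea data:

```python
def fixed_effect_pool(effects, variances):
    """Inverse-variance (fixed-effect) pooled estimate and its variance.
    Each study is weighted by 1/variance, so precise studies count more."""
    weights = [1 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_var = 1 / sum(weights)
    return pooled, pooled_var

# Three hypothetical trial effects (e.g., difference in days of diarrhea)
# with their variances
effect, var = fixed_effect_pool([-0.5, -0.3, -0.1], [0.09, 0.04, 0.16])
print(round(effect, 3), round(var, 3))  # -> -0.323 0.024
```

Because the pooled variance shrinks as studies are combined, an MA can manufacture “statistical significance” from individually borderline trials, which is precisely the concern with pooling three weak RCTs.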

Posted in: Clinical Trials, Energy Medicine, Homeopathy, Science and Medicine
