This is an addendum to my previous entry on Bayesian statistics for clinical research.† After that posting, a few comments made it clear that I needed to add some words about estimating prior probabilities of therapeutic hypotheses. This is a huge topic that I will discuss briefly. In that, happily, I am abetted by my own ignorance. Thus I apologize in advance for simplistic or incomplete explanations. Also, when I mention misconceptions about either Bayesian or “frequentist” statistics, I am not doing so with particular readers in mind, even if certain comments may have triggered my thinking. I am quite willing to give readers credit for more insight into these issues than might be apparent from my own comments, which reflect common, initial difficulties in digesting the differences between the two inferential approaches. Those include my own difficulties, after years of assuming that the “frequentist” approach was both comprehensive and rational—while I had only a cursory understanding of it. That, I imagine, placed me well within two standard deviations of the mean level of statistical knowledge held by physicians in general.
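To make the stakes concrete, here is a minimal sketch (my own illustration, not from the original entry) of why the prior probability of a therapeutic hypothesis matters so much when interpreting a "positive" trial. The power and alpha values are conventional assumptions chosen for illustration, and the priors are hypothetical:

```python
# Illustrative sketch: how the prior probability of a therapeutic hypothesis
# changes the meaning of a "positive" trial result, via Bayes' theorem.
# Assumes a trial with 80% power run at a two-sided alpha of 0.05
# (conventional values, chosen here purely for illustration).

def posterior_given_positive(prior, power=0.80, alpha=0.05):
    """P(hypothesis true | positive result), by Bayes' theorem."""
    true_pos = power * prior          # P(+ | H true)  * P(H true)
    false_pos = alpha * (1 - prior)   # P(+ | H false) * P(H false)
    return true_pos / (true_pos + false_pos)

# Compare a plausible pharmacological hypothesis (prior ~ 0.5) with a
# highly implausible one, e.g. a homeopathic remedy (prior set, generously,
# to one in a million):
for prior in (0.5, 0.1, 1e-6):
    posterior = posterior_given_positive(prior)
    print(f"prior = {prior:g}  ->  posterior = {posterior:.6f}")
```

The point of the sketch: with a reasonable prior, a positive trial is strong evidence; with a vanishingly small prior, the very same "statistically significant" result leaves the hypothesis almost certainly false, because nearly all positives are false positives.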
Archive for Medical Academia
Homeopathy and Science: Discussion, Summary and Conclusions
I was not surprised by a couple of the dissenting comments after Part IV of this blog. One writer worried that I had neglected, presumably for nefarious reasons, to cite replications of Benveniste’s results; another cited several examples of “positive” homeopathy studies that I had failed to mention. I answered some of those points here. I am fully aware of such “positive” reports, including those seeming to support Benveniste. I didn’t cite them, but not in some futile hope of concealing their existence from the watchful eyes of the readership. I also didn’t cite several “negative” reports, including an independent, disconfirming report of one of the claims of David Reilly, whose words began this series,* and the most recent of several reviews (referenced here) to conclude that “the clinical effects of homoeopathy are placebo effects.” I didn’t cite those reports for the same reasons that I didn’t cite the “positive” studies: they are mere footnotes to the overwhelming evidence against homeopathy.
To explain why, it will be necessary to discuss some of the strengths and weaknesses of the project known as “Evidence-Based Medicine.”
The National Center for Complementary and Alternative Medicine (NCCAM): Your tax dollars hard at work
What’s an advocate of evidence- and science-based medicine to think about the National Center for Complementary and Alternative Medicine, better known by its abbreviation NCCAM? As I’ve pointed out before, I used to be somewhat of a supporter of NCCAM. I really did, back when I was more naïve and idealistic. Indeed, as I mentioned before, when I first read Wally Sampson’s article Why NCCAM should be defunded, I thought it a bit too strident and even rather close-minded. At the time, I thought that the best way to separate the wheat from the chaff was to apply the scientific method to the various “CAM” modalities and let the chips fall where they may.
Two developments over the last several years have led me to sour on NCCAM and move towards an opinion more like Dr. Sampson’s. First, after its doubling from FY 1998-2003, the NIH budget stopped growing. In fact, adjusting for inflation, the NIH budget is now contracting. NCCAM’s yearly budget remains in the range of $121 million a year, for well over $1 billion spent since its inception as the Office of Alternative Medicine in 1993. Its yearly budget contains enough money to fund around 75 to 100 new five-year R01 grants, give or take. In tight budgetary times my view is that it is a grossly irresponsible use of taxpayer money not to prioritize funding for projects whose underlying hypotheses have a reasonable chance of being true. Scarce NIH funds should not go to projects based on hypotheses that are outlandishly implausible from a scientific standpoint. Second, I’ve seen over the last few years how NCCAM is not only funding research (most of which is of the sort that wouldn’t stand a chance in a study section from other Institutes or Centers) but also funding training programs. Indeed, that was the core complaint against NCCAM: that it facilitates and promotes the infiltration of nonscience- and nonevidence-based treatments falling under the rubric of so-called “complementary and alternative” or “integrative” medicine into academic medicine. However, NCCAM cannot do otherwise, given its mission:
- Explore complementary and alternative healing practices in the context of rigorous science.
- Train complementary and alternative medicine researchers.
- Disseminate authoritative information to the public and professionals.
If, in fact, NCCAM actually did devote itself solely to “rigorous science” with regard to “alternative” healing practices, I would have much less of a problem with it than I do. However, it broadly interprets the second and third parts of its mission. For example, it views part of its mission as promotion, rather than study: “Supporting integration of proven CAM therapies. Our research helps the public and health professionals understand which CAM therapies have been proven to be safe and effective.” This would be all well and good if NCCAM had as yet actually proven any CAM therapies to be at least effective, but it has not. Worse, it has not even managed to demonstrate any of them to be ineffective, either, thus leading to endless studies of modalities that either do not work or at best have marginal efficacy.
Still, I thought: all questions of promotion of CAM modalities aside, at least there’s the science. Surely, under the auspices of the NIH, NCCAM must be funding some high-quality studies into CAM modalities that couldn’t be done any other way. That thought died when NCCAM announced last week the studies that it had funded during FY 2007.
Annals of Questionable Evidence: a new study reveals substantial publication bias in trials of antidepressants
Part IV of the ongoing Homeopathy series will have to wait a day or two, because it is superseded by a recent, comment-worthy publication. Nevertheless, “H series” fans will find here a bit of grist for that mill, too.
An important role for this blog is to discuss problems of interpreting data from clinical studies. Academic medicine has committed itself, on the whole, to scientific rigor—to the extent that this is possible in messy, clinical (especially human) trials. Several tools have been proposed, and to a varying extent used, to enhance the rigor of clinical research and the reporting of clinical research. One of those tools is the registering of clinical trials prior to recruiting subjects. Registration would stipulate a trial’s a priori hypothesis(es), design, planned endpoints, and planned statistical methods, among other things. This would guard against several problems: publication bias—the tendency for some trials, usually “negative” ones, to go unreported; selective reporting of the results of a trial, if some are pleasing but others are not; and post hoc data analysis—finding data after the fact to suggest a novel hypothesis that will falsely be portrayed as an a priori hypothesis. Publication bias is also known as “selective publication” or the “file drawer problem”; post hoc analysis is also known as “data dredging” or “HARKing” (Hypothesizing After the Results are Known).
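The distorting power of publication bias can be sketched with a small simulation (my own illustration; all numbers here are assumptions, not data from any actual trial registry). Simulate many small trials of a treatment with a modest true effect, "publish" only those reaching p < 0.05, and compare the published average effect to the truth:

```python
# Hypothetical sketch of the "file drawer problem": when only statistically
# significant trials are published, the published literature overstates the
# true effect. Effect size, sample size, and trial count are illustrative
# assumptions, not figures from the NEJM antidepressant study.

import random
import statistics

random.seed(1)

TRUE_EFFECT = 0.2   # assumed true standardized effect size
N_PER_ARM = 50      # subjects per arm in each small trial
N_TRIALS = 2000     # number of simulated trials

def run_trial():
    """Return the observed mean difference and a crude z-test significance call."""
    treat = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(N_PER_ARM)]
    control = [random.gauss(0.0, 1.0) for _ in range(N_PER_ARM)]
    diff = statistics.mean(treat) - statistics.mean(control)
    se = (2 / N_PER_ARM) ** 0.5           # SE of the difference, sd known = 1
    return diff, abs(diff / se) > 1.96    # two-sided p < 0.05

all_effects, published = [], []
for _ in range(N_TRIALS):
    diff, significant = run_trial()
    all_effects.append(diff)
    if significant:            # the file drawer: non-significant trials vanish
        published.append(diff)

print(f"true effect:            {TRUE_EFFECT}")
print(f"mean of all trials:     {statistics.mean(all_effects):.3f}")
print(f"mean of published only: {statistics.mean(published):.3f}")
```

Because a small trial only reaches significance when chance happens to exaggerate the effect, the "published" subset systematically overstates efficacy, which is exactly the distortion that prospective trial registration is meant to expose.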
An article in the Jan. 17 issue of the New England Journal of Medicine demonstrates the usefulness of a trial registry:
Selective Publication of Antidepressant Trials and Its Influence on Apparent Efficacy
Erick H. Turner, M.D., Annette M. Matthews, M.D., Eftihia Linardatos, B.S., Robert A. Tell, L.C.S.W., and Robert Rosenthal, Ph.D.
The infiltration of complementary and alternative medicine (CAM) and “integrative medicine” into academia
A few years back, my co-blogger Wally Sampson wrote a now infamous editorial entitled Why the National Center for Complementary and Alternative Medicine (NCCAM) Should Be Defunded. When I first read it, I must admit, I found it to be a bit harsh and–dare I say?–even close-minded. After all, plausibility aside, I believed at the time that the only way to demonstrate once and for all, in a way that everyone would have to accept, that many of these “alternative” therapies were no more effective than a placebo would be to do high-quality randomized clinical trials to test whether they worked, and NCCAM seemed to be the perfect funding agency to see that this occurred. Yes, this attitude in retrospect was quite naïve. I have since learned the hard lesson over several years that no amount of studies will convince advocates of complementary and alternative medicine (CAM) that their favored therapy doesn’t work, be it chelation therapy for autism or cardiovascular disease, homeopathy, reiki, or various other “energy” therapies that invoke manipulation of qi as a means of “healing,” such as acupuncture. But that is what I believed at the time.
Part II of this blog† introduced the homeopathic understanding of “symptoms” as they pertain both to “provings” in healthy subjects (now called “homeopathic pathogenic trials” or “HPTs”) and to histories elicited from patients. Hahnemann conflated “symptoms” with every random itch, ache, pain, sniffle, feeling, thought, dream, pimple or other sign, and anything else that might occur to a subject or a patient. This was amply demonstrated by Oliver Wendell Holmes, Sr., who seemed to doubt that such a morass would yield useful information. As unlikely as it may seem, today’s homeopaths are every bit as whimsical in their elicitation of “symptoms” as was Hahnemann.
Last week’s post was about a recent (October 2007) meeting held at Harvard University on the subject of fascia. My purposes in commenting were several.
First, the organizers were partial believers in some forms of “Complementary and Alternative Medicine” (“CAM”), now being called “Integrative” but more realistically called sectarian or anomalous, aberrant medicine. The meeting is another in a long series of associating sectarian medicines with science – a bad fit.
Second, it illustrated an increasing infiltration of sectarianism, ideological thinking, and pseudoscience into medical schools and academia.
Third, this infiltrating change is no natural evolution, but is a political and economically driven external force, intent on both selfish and ideological interests. The forces are intent on radically changing society with medicine as the point of their phalanx. They chose medicine because of its admitted openness and self-criticism (no trade secrets, no state secrets, no top secret clearances; its self-criticism is open for all to see). A vulnerable and often willing victim.
The name of this blog is Science-Based Medicine. The reason it is so called is because we, the bloggers who will be contributing, believe that “the best method for determining which interventions and health products are safe and effective is, without question, good science.” Sadly, one of the people who best represented this very sort of philosophy, Dr. Judah Folkman (1933-2008), has died. Dr. Folkman was the epitome of everything that a science-based surgeon or physician should be, and he was first among my scientific and surgical heroes.
On October 3–4, 2007, a conference at Harvard University School of Medicine, the first annual “Fascia Research Conference,” was held, sponsored by a notable group of organizations. It was organized by Thomas Findley, MD, PhD, professor of physical medicine and physiatrist at the Veterans Administration Hospital in East Orange, New Jersey. It was notable for several reasons, and is of interest to medical objectivists for several other reasons as well. First, the conference was the first research conference devoted solely to the study of fascia (a type of connective tissue), stated to be a forgotten tissue. Second, it included scientific subjects such as intra-cellular structure and stress changes in fascial cells, but also unscientific ones such as acupuncture and “Rolfing.”