
Fun with homeopaths and meta-analyses of homeopathy trials

Homeopathy amuses me.

Well, actually it both amuses me and appalls me. The amusement comes from just how utterly ridiculous the concepts behind homeopathy are. Think about it. It is nothing but pure magical thinking. Indeed, at the very core of homeopathy is a concept that can only be considered to be magic. In homeopathy, the main principles are that “like heals like” and that dilution increases potency. Thus, in homeopathy, to cure an illness, you pick something that causes symptoms similar to those of that illness and then dilute it serially, typically to 20C or 30C, where each “C” represents a 1:100 dilution. Given that such dilution factors exceed Avogadro’s number by many orders of magnitude, even if any sort of active medicine were used, there is no active ingredient left after a series of homeopathic dilutions. Indeed, this was known as far back as the mid-1800s. Of course, this doesn’t stop homeopaths, who argue that water somehow retains the “essence” of whatever homeopathic remedy it has been in contact with, and that’s how homeopathy “works.” Add to that the mystical need to “succuss” (vigorously shake) the homeopathic remedy at each dilution (I’ve been told by homeopaths, with all seriousness, that if each dilution isn’t properly succussed then the homeopathic remedy will not attain its potency), and it’s magic all the way down, just as creationism has been described as “turtles all the way down.” Even more amusing are the contortions of science and logic that are used by otherwise intelligent people to make arguments for homeopathy. For example, just read some of Lionel Milgrom’s inappropriate invocations of quantum theory at the macroscopic level for some of the most amazing woo you’ve ever seen, or Rustum Roy’s claims for the “memory of water.” Indeed, if you want to find out just how scientifically bankrupt everything about homeopathy is, my co-blogger Dr. Kimball Atwood started his tenure on Science-Based Medicine with a five-part series on homeopathy.
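
To make the dilution arithmetic concrete, here is a back-of-the-envelope sketch (in Python, generously assuming a full mole of starting material, purely for illustration) of how many molecules of the original substance could be expected to survive a 30C dilution:

    # Back-of-the-envelope arithmetic for a 30C homeopathic dilution.
    # Assumption for illustration: we start with one full mole of "active" substance.
    AVOGADRO = 6.022e23          # molecules per mole
    starting_molecules = AVOGADRO

    dilution_factor = 100 ** 30  # 30 serial 1:100 dilutions = a factor of 10^60
    expected_remaining = starting_molecules / dilution_factor

    print(f"Expected molecules of original substance left: {expected_remaining:.1e}")
    # ~6e-37: you would need roughly 10^36 such preparations to have decent odds
    # of encountering even one molecule of the starting material.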

At the same time, homeopathy appalls me. There are many reasons for this, not the least of which is how anyone claiming to have a rational or scientific viewpoint can fall so far as to twist science brutally to justify magic. Worse, homeopaths and physicians sucked into belief in the sorcery that is homeopathy are driven by their belief to carry out unethical clinical trials in Third World countries, even on children. Meanwhile, time, resources, and precious cash are wasted chasing after pixie dust by our own government through the National Center for Complementary and Alternative Medicine (NCCAM). So while I laugh at the antics of homeopaths going on and on about the “memory of water” or “quantum gyroscopic models” in order to justify homeopathy as anything more than an elaborate placebo, I’m crying a little inside as I watch.

The Lancet, meta-analysis, and homeopathy

If there’s one thing that homeopaths hate–I mean really, really, really hate–it’s a meta-analysis of high quality homeopathy trials published by the group of Professor Matthias Egger in the Department of Social and Preventive Medicine at the University of Berne in Switzerland, entitled Are the clinical effects of homoeopathy placebo effects? Comparative study of placebo-controlled trials of homoeopathy and allopathy.

What Shang et al did in this study was very simple and very obvious. They applied the methods of meta-analysis to trials of homeopathy and allopathy. (I really hate that they used the term “allopathy” to distinguish scientific medicine from homeopathy, although I can understand why they might have chosen to do that for simple convenience’s sake. Still, it grates.) In any case, they did a comprehensive literature search for placebo-controlled trials of homeopathy and then randomly selected trials of allopathy matched for disorder and therapeutic outcome. The inclusion criteria for both were controlled trials with a randomized parallel design and a placebo control, with sufficient data presented in the published report to allow the calculation of an odds ratio. These studies were assessed for quality using measures of internal validity, including randomization, blinding or masking, and whether data were analyzed by intention to treat, all standard measures of how good the randomization and blinding techniques used in each study were. To boil down the results, the lower the quality of the trial and the smaller the number of subjects, the more likely a trial of homeopathy was to report an odds ratio less than one (the lower the number, the more “positive”–i.e., therapeutic–the effect). The higher the quality of the study and the greater the number of subjects, the closer to 1.0 its odds ratio tended to be. The same was true for trials of allopathy as well, not surprisingly. However, analysis of the highest quality homeopathy trials showed a pooled odds ratio with a 95% confidence interval that overlapped 1.0, which means that there was no statistically significant difference from 1.0; i.e., there was no detectable effect. For the very highest quality trials of allopathy, however, there was still an odds ratio less than 1.0 whose confidence interval did not include 1.0. The authors concluded:

We acknowledge that to prove a negative is impossible, but we have shown that the effects seen in placebo-controlled trials of homoeopathy are compatible with the placebo hypothesis. By contrast, with identical methods, we found that the benefits of conventional medicine are unlikely to be explained by unspecific effects.
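
For readers unfamiliar with the machinery, here is a minimal sketch (in Python, using invented 2×2 trial counts rather than any of the actual trials; Shang et al’s models are more elaborate, but the principle is the same) of how each trial’s odds ratio is calculated and then combined into a pooled odds ratio by inverse-variance weighting. The key question is simply whether the pooled 95% confidence interval includes 1.0:

    # Minimal sketch of how trial-level odds ratios are pooled in a meta-analysis.
    # The 2x2 counts below are invented for illustration; they are not Shang et al's data.
    import numpy as np

    # (events on remedy, N on remedy, events on placebo, N on placebo)
    trials = [
        (12, 50, 15, 50),
        (30, 120, 33, 118),
        (8, 40, 7, 39),
        (55, 250, 60, 245),
    ]

    log_or, var = [], []
    for a, n1, c, n2 in trials:
        b, d = n1 - a, n2 - c                     # non-events in each arm
        log_or.append(np.log((a * d) / (b * c)))  # log odds ratio for this trial
        var.append(1/a + 1/b + 1/c + 1/d)         # approximate variance of the log OR

    log_or, var = np.array(log_or), np.array(var)

    # Inverse-variance (fixed-effect) pooling: more precise trials get more weight
    w = 1 / var
    pooled = np.sum(w * log_or) / np.sum(w)
    se = np.sqrt(1 / np.sum(w))
    lo, hi = np.exp(pooled - 1.96 * se), np.exp(pooled + 1.96 * se)

    print(f"Pooled OR = {np.exp(pooled):.2f}, 95% CI {lo:.2f}-{hi:.2f}")
    # An OR below 1.0 favors the treatment; if the CI includes 1.0,
    # the pooled result is statistically indistinguishable from placebo.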

The problems with meta-analysis notwithstanding, as an exercise in literature analysis, Shang et al was a beautiful demonstration that whatever effects “detected” in clinical trials of homeopathy are nonspecific and not detectably different from placebo effects, exactly as one would anticipate based on the basic science showing that homeopathy cannot work unless huge swaths of our current understanding of physics and chemistry are seriously in error. After all, homeopathic dilutions greater than 12C or so are indistinguishable from water. It’s thus not surprising that homeopaths have been attacking Shang et al since the moment it was first published. Indeed, they’ve attacked Dr. Egger as biased and even tried to twist the results into a claim that homeopathy research is of higher quality than allopathy research. Shang et al may not be perfect, but it’s pretty compelling evidence strongly suggesting that homeopathy is no better than placebo, and the fact that a larger proportion of the homeopathy trials identified in the study were rated as high quality does not mean that homeopathic research is in general of higher quality than scientific medical research.

Shang et al “blown out of the water”?

Recently, a certain well-known homeopath who has appeared not only on this blog but on numerous others to defend homeopathy has resurfaced. His name is Dana Ullman, and he recently reappeared here to comment on a post that is several months old and happens to be about homeopathy trials in Third World countries. Indeed, I sometimes think that periodically Mr. Ullman gets bored and decides to start doing blog searches on homeopathy, the better to harass bloggers who criticize his favored form of pseudoscience. No doubt he will appear here as a result of this post, mainly because he’s lately been crowing about another study that he believes shows that Shang et al has been “blown out of the water,” as he puts it.

That’s actually a rather funny metaphor coming from a homeopath, given that homeopathy is nothing more than water. Suffice it to say that our poor overwrought Mr. Ullman is becoming a bit overheated, as is his wont. The guy could really use some propranolol to settle his heart rate down a bit. In any case, the study to which he refers, entitled The conclusions on the effectiveness of homeopathy highly depend on the set of analyzed trials, comes in part from a clearly pro-homeopathy source, Dr. Rutten of the Association of Dutch Homeopathic Physicians, and is hot off the presses (the electronic presses, that is, given that this is an E-pub ahead of print) slated for the October issue of the Journal of Clinical Epidemiology. Suffice it to say that, as always, Mr. Ullman is reading far more into the study than it, in fact, actually says.

The first thing that anyone who’s ever read or done a meta-analysis will already know is that the title of this study by Lüdtke and Rutten is about as close to a “Well, duh!” title as there is. Of course the conclusions of a meta-analysis depend on the choice of trials used for the analyzed set. That’s exactly the reason why the criteria for choosing trials to include in a meta-analysis are so important and need to be stringently decided upon prospectively, before the study is done. If they aren’t, then investigators can cherry pick studies as they see fit. That’s also exactly why the criteria need to be designed to include the highest quality studies possible. In fact, I’d be shocked if a reanalysis of a meta-analysis didn’t conclude that the results are influenced by the choice of studies. That being said, the results of Lüdtke and Rutten do not in any way invalidate Shang et al.

One thing that’s very clear from reading Lüdtke and Rutten is that this study was done to try to refute or invalidate Shang et al. It’s so obvious. Indeed, no one reanalyzes the data from a study unless they think the original conclusions were wrong. No one. There’s no motivation otherwise; why else bother to go through all the work necessary? Indeed, Lüdtke and Rutten show this right from the beginning:

Shang’s analysis has been criticized to be prone to selection bias, especially when the set of 21 high quality trials was reduced to those eight trials with large patient numbers. In a letter to the Lancet, Fisher et al. posed the question “to what extent the meta-analysis results depend on how the threshold for ‘large’ studies was defined” [3]. The present article addresses this question. We aim to investigate how Shang’s results would have changed if other thresholds had been applied. Moreover, we extend our analyses to other meaningful subsets of the 21 high quality trials to investigate other sources of heterogeneity, an approach that is generally recommended to be a valuable tool for meta-analyses.

Again, this is a “Well, duh!” observation, but it’s interesting to see what Lüdtke and Rutten did with their analysis, because it more or less reinforces Shang et al‘s conclusions, even though Lüdtke and Rutten try very hard not to admit it. What Lüdtke and Rutten did was to take the 21 high quality homeopathy studies analyzed by Shang et al. First off, they took the odds ratios from the studies and did a funnel plot of odds ratio versus standard error, which is, of course, dependent on trial size. The funnel plot showed an asymmetry, which was mainly due to three trials, two of which showed high treatment effects and one of which was more consistent with a placebo effect. In any case, however, for the eight largest high quality trials, no asymmetry was found.
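
For those who have never seen one, here is a minimal sketch (in Python, with simulated trial results rather than the actual data from Shang et al or Lüdtke and Rutten) of what a funnel plot is: each trial’s odds ratio plotted against its standard error, so that small, imprecise trials scatter widely at the bottom and large, precise trials cluster near the true effect at the top. With an inert treatment and no small-study bias, the points form a symmetric funnel around an odds ratio of 1.0:

    # A minimal sketch of the kind of funnel plot described above.
    # The data are simulated, purely to show the shape of the plot.
    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(0)
    n_trials = 21
    se = rng.uniform(0.1, 0.8, n_trials)      # small SE = large, precise trial
    true_log_or = 0.0                         # a truly inert treatment
    log_or = rng.normal(true_log_or, se)      # observed effects scatter with SE

    plt.scatter(np.exp(log_or), se)
    plt.axvline(1.0, linestyle="--")          # OR = 1.0: no effect
    plt.xscale("log")
    plt.gca().invert_yaxis()                  # most precise trials at the top
    plt.xlabel("Odds ratio (log scale)")
    plt.ylabel("Standard error")
    plt.title("Funnel plot: symmetric if there is no small-study bias")
    plt.show()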

What will likely be harped on by homeopaths is that for all 21 of the “high quality” homeopathy trials, the pooled odds ratio from the random effects meta-analysis was 0.76 (confidence interval 0.59-0.99, p=0.039). This is completely underwhelming, of course. Even if real, it would likely represent a clinically irrelevant result. What makes me think it’s not clinically relevant is what Lüdtke and Rutten do next. Specifically, they start with the two largest high quality studies of homeopathy and then serially add studies, from those with the largest numbers of patients to those with the smallest. At each stage they calculated the pooled odds ratio. With only the two largest studies, the pooled odds ratio was very close to 1.0. After 14 trials had been added, the odds ratio became and remained “significantly” less than 1.0 (except at 17 studies). The graph:

[Figure: pooled odds ratio and 95% confidence interval as trials are cumulatively added, from the largest trial to the smallest (Lüdtke and Rutten).]

However, when the authors used meta-regression, a different form of analysis, it didn’t matter how many studies were included. The confidence interval always spanned 1.0, meaning a result statistically indistinguishable from an odds ratio of 1.0:

[Figure: the same cumulative analysis using meta-regression; the 95% confidence interval includes 1.0 no matter how many trials are added (Lüdtke and Rutten).]

In other words, if a random effects meta-analysis is used, one can torture marginally significant odds ratios out of the data; if a meta-regression is used, one can’t even manage that! Either way, this study actually shows that it doesn’t really matter very much which high quality studies are included, other than that adding lower quality studies to higher quality studies starts to skew the results toward seemingly positive values.

Exactly as one who knows anything about meta-analysis would predict.
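
To make the difference between the two procedures concrete, here is a minimal sketch (in Python, run on simulated trials of a completely inert remedy; these are not Lüdtke and Rutten’s data, and their actual meta-regression model may differ in detail) of cumulative random-effects pooling, adding trials from the largest to the smallest, followed by a simplified meta-regression of the log odds ratio on the standard error:

    # Minimal sketch of the two procedures described above, on simulated trials
    # of an inert remedy (made-up numbers, NOT Ludtke and Rutten's data):
    #   1. cumulative random-effects pooling, adding trials largest to smallest
    #   2. a simplified meta-regression of log odds ratio on standard error
    import numpy as np

    rng = np.random.default_rng(1)
    n = 21
    se = np.sort(rng.uniform(0.1, 0.8, n))  # sorted so the largest trials come first
    log_or = rng.normal(0.0, se)            # the true effect is zero (placebo)

    def pooled_or(log_or, var):
        """DerSimonian-Laird random-effects pooled OR with 95% CI."""
        w = 1 / var
        fixed = np.sum(w * log_or) / np.sum(w)
        Q = np.sum(w * (log_or - fixed) ** 2)
        k = len(log_or)
        tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
        w_re = 1 / (var + tau2)
        est = np.sum(w_re * log_or) / np.sum(w_re)
        se_est = np.sqrt(1 / np.sum(w_re))
        return np.exp([est - 1.96 * se_est, est, est + 1.96 * se_est])

    # 1. Cumulative meta-analysis, largest trials first
    for k in range(2, n + 1):
        lo, est, hi = pooled_or(log_or[:k], se[:k] ** 2)
        print(f"{k:2d} trials: pooled OR {est:.2f} (95% CI {lo:.2f}-{hi:.2f})")

    # 2. Simplified (fixed-effect) meta-regression of log OR on SE; the intercept
    #    estimates the effect of a hypothetical infinitely precise trial.
    X = np.column_stack([np.ones(n), se])
    W = np.diag(1 / se ** 2)
    cov = np.linalg.inv(X.T @ W @ X)
    beta = cov @ X.T @ W @ log_or
    ci = np.exp(beta[0] + np.array([-1.96, 1.96]) * np.sqrt(cov[0, 0]))
    print(f"Meta-regression intercept: OR {np.exp(beta[0]):.2f} "
          f"(95% CI {ci[0]:.2f}-{ci[1]:.2f})")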

The amazing thing about Lüdtke and Rutten’s study, though, is just how much handwaving is involved to try to make this result sound like a near-refutation of Shang et al. For example, they start their discussion out thusly:

In our study, we performed a large number of meta-analyses and meta-regressions in 21 high quality trials comparing homeopathic medicines with placebo. In general, the overall ORs did not vary substantially according to which subset was analyzed, but P-values did.

That is, in essence, what was found, and the entire discussion is nothing more than an attempt to handwave, obfuscate, and try to convince readers that there is some problem with Shang et al that renders its conclusions much less convincing than they in fact are. Indeed, I fear very much for them. They’ll get carpal tunnel syndrome with all that handwaving. We’re talking cherry-picking subset analyses until they can find a subset that shows an “effect.” More amusingly, though, even after doing all of that, this is the best they can come up with:

Our results do neither prove that homeopathic medicines are superior to placebo nor do they prove the opposite. This, of course, was never our intention, this article was only about how the overall results – and the conclusions drawn from them – change depending on which subset of homeopathic trials is analyzed. As heterogeneity between trials makes the results of a meta-analysis less reliable, it occurs that Shang’s conclusions are not so definite as they have been reported and discussed.

I find this particularly amusing, given that Shang et al bent over backwards not to oversell their results or to make more of them than they show. For example, this is what they said about their results:

We emphasise that our study, and the trials we examined, exclusively addressed the narrow question of whether homoeopathic remedies have specific effects. Context effects can influence the effects of interventions, and the relationship between patient and carer might be an important pathway mediating such effects.28,29 Practitioners of homoeopathy can form powerful alliances with their patients, because patients and carers commonly share strong beliefs about the treatment’s effectiveness, and other cultural beliefs, which might be both empowering and restorative.30 For some people, therefore, homoeopathy could be another tool that complements conventional medicine, whereas others might see it as purposeful and antiscientific deception of patients, which has no place in modern health care. Clearly, rather than doing further placebo-controlled trials of homoeopathy, future research efforts should focus on the nature of context effects and on the place of homoeopathy in health-care systems.

This is nothing more than a long way of saying that homeopathy is a placebo. However, all the qualifications, discussions of “alliances with patients,” and references to cultural beliefs represent an excellent way to say that homeopathy is a placebo nicely rather than in the combative way that I (not to mention Dr. Atwood) like to affect. One thing I can say for sure, though, is that whatever it is that Lüdtke and Rutten conclude in their study (and, quite frankly, as I read their paper I couldn’t help but think at many points that it’s not always entirely clear just what the heck they are trying to show), it is not that Shang et al is invalid, nor is it evidence that homeopathy works.

Indeed, the very title is misleading in that what the study really does is nothing more than reinforce the results of Shang et al by looking at them in a different way. Indeed, the whole conclusion of Lüdtke and Rutten seems to be that Shang et al isn’t as hot as everyone thinks, except that they exaggerate how hot everyone thought Shang et al was in order to make that point. That’s about all they could do, after all, as they were about as successful at shooting down Shang et al through reanalysis of the original data as DeSoto and Hitlan were when they “reanalyzed” the dataset used by Ip et al to show no correlation between the presence of autism and elevated hair and blood mercury levels and then got into a bit of a blog fight over it. Again, whenever one investigator “reanalyzes” the dataset of another investigator, they virtually always have an axe to grind. That doesn’t mean it isn’t worthwhile for them to do such reanalyses or that they won’t find serious deficiencies from time to time, but you should always remember that the investigators doing the reanalysis wouldn’t bother to do it if they didn’t disagree with the conclusions and weren’t looking for chinks in the armor to blast open so that they can prove the study’s conclusions wrong. In this, Lüdtke and Rutten failed.

The inadvertent usefulness of homeopathy trials to science-based medicine

Viewing the big picture, I suppose I can say that there is one useful function that trials of homeopathy serve, and that is to illuminate the deficiencies of evidence-based medicine and of how our clinical trial system works. Again, the reason is that homeopathy is nothing more than water and thus an entirely inert placebo treatment. Consequently, any positive effects reported for or any positive correlations attributed to homeopathy must be the result of chance, bias, or fraud. Personally, I’m an optimist and as such tend to believe that fraud is uncommon, which leaves chance or bias. Given the known publication bias in which positive studies are more likely to be published and, if published, more likely to be published in better journals, I feel quite safe in attributing the vast majority of “positive” homeopathy trials either to bias or random chance. After all, under the best of circumstances, at least 5% of even the best designed clinical trials of a placebo like homeopathy will be seemingly “positive” by random chance alone. But it’s worse than that. What Dr. John Ioannidis’ groundbreaking research tells us is that the number of false positive trials is considerably higher than 5%. Indeed, the lower the prior probability that the hypothesis being tested is true, the greater the odds that a “positive” trial is a false positive. That’s the real significance of Ioannidis’ work. Indeed, a commenter on Hawk/Handsaw described very well how homeopathy studies illuminate the weaknesses of clinical trial design, only not in the way that homeopaths tell us:

…I see all the homeopathy trials as making up a kind of “model organism” for studying the way science and scientific publishing works. Given that homeopathic remedies are known to be completely inert, any positive conclusions or even suggestions of positive conclusions that homeopathy researchers come up with must be either chance findings, mistakes, or fraud.

So homeopathy lets us look at how a community of researchers can generate a body of published papers and even meta-analyze and re-meta-analyze them in great detail, in the absence of any actual phenomenon at all. It’s a bit like growing bacteria in a petri dish in which you know there is nothing but agar.

The rather sad conclusion I’ve come to is that it’s very easy for intelligent, thoughtful scientists to see signals in random noise. I fear that an awful lot of published work in sensible fields of medicine and biology is probably just that as well. Homeopathy proves that it can happen. (the problem is that we don’t know what’s nonsense and what’s not within any given field.) It’s a warning to scientists everywhere.

Indeed it is, and it applies to meta-analysis just as much as to any individual study, given that a meta-analysis pools such studies. It’s also one more reason why we here at Science-Based Medicine emphasize science rather than just evidence. Moreover, failure to take into account prior probability based on science is exactly what we find lacking in the current paradigm of evidence-based medicine. We do not just include trials of “complementary and alternative medicine” (CAM) in this critique, either. However, trials of homeopathy are about as perfect an example as we can imagine to drive home just how easy it is to produce false positives in clinical trials when empiric evidence is valued more than the totality of scientific evidence. There may be other examples of CAM modalities that have specific effects above and beyond that of a placebo (herbal remedies, for example, given that they are drugs). There may be. But to an incredibly high degree of certainty, homeopathy is not among them. Homeopathic remedies are, after all, nothing but water, and their efficacy only exists in the minds of homeopaths, who are, whether they realize it or not, masters of magical thinking, or of users of homeopathy, who are experiencing the placebo effect first hand. Studies of homeopathy demonstrate why, in the evidence-based medicine paradigm, there will always be seemingly positive studies to which homeopaths can point, even though homeopathic remedies are water.
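
To put rough numbers on the false-positive point, here is a back-of-the-envelope sketch (in Python; the significance threshold, statistical power, and prior probabilities are assumptions chosen purely for illustration, not figures from Ioannidis’ papers) of how the proportion of “positive” trials that are false positives grows as the prior probability shrinks:

    # Back-of-the-envelope illustration: the share of "positive" trials that are
    # false positives depends heavily on the prior probability that the treatment
    # actually works. All numbers below are assumptions for illustration.
    alpha = 0.05   # conventional significance threshold
    power = 0.80   # probability that a real effect is detected

    for prior in (0.5, 0.1, 0.01, 1e-6):   # 1e-6 stands in for homeopathy-like plausibility
        true_pos = power * prior
        false_pos = alpha * (1 - prior)
        ppv = true_pos / (true_pos + false_pos)   # positive predictive value
        print(f"prior {prior:g}: {100 * (1 - ppv):.1f}% of positive trials are false positives")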

REFERENCES:

1. Shang A, Huwiler-Müntener K, Nartey L, Jüni P, Dörig S, Sterne JA, Pewsner D, Egger M (2005). Are the clinical effects of homoeopathy placebo effects? Comparative study of placebo-controlled trials of homoeopathy and allopathy. The Lancet, 366(9487), 726-732. DOI: 10.1016/S0140-6736(05)67177-2

2. Lüdtke R, Rutten ALB (2008). The conclusions on the effectiveness of homeopathy highly depend on the set of analyzed trials. Journal of Clinical Epidemiology. DOI: 10.1016/j.jclinepi.2008.06.015

Posted in: Clinical Trials, Homeopathy


79 thoughts on “Fun with homeopaths and meta-analyses of homeopathy trials”

  1. shanek says:

    All you need to know is, there are no homeopathic birth control pills. They never make pills for things where it’s obvious if it works or not.

  2. stavros says:

    Ullman is a well known “troll” that does exactly what you say: going around posting the same old “arguments” for homeopathy when all of those have been debunked a million times…

    I have had a number of encounters with him (e.g. see here) where I have noted that the references he cites do NOT in fact support homeopathy! Yet he keeps on mentioning the same papers in subsequent posts (e.g. see here)!

    apgaylard has also noted this pattern of behaviour and I am sure every thinking blogger has seen it too. I am happy that now you mention it people will stop taking him seriously (but who does anymore anyway?)

  3. stavros says:

    Also, to settle your heart rate down you do not need any frikin propranolol!

    You simply need some “Rhus Tox” which is the best for “generalities; pulse; frequent, accelerated, elevated, exalted, fast, innumerable, rapid; faster than the heart-beat;”

  4. The conclusions Lüdtke and Rutten draw from their analysis would have earned an F from my old biostats prof. Their study clearly supports Shang, despite all their hand-waving claiming they’ve refuted him. Confirmation bias is a scary, scary thing– it’s a powerful force for self-deception, which is the enemy of any kind of progress within a discipline.

  5. DevoutCatalyst says:

    From cancure.org:

    “Also, if you have a cancer diagnosis, be sure that you’re working with a practitioner who is using homeopathy aggressively enough for your situation.”

    Google homeopathy and cancer, and you’ll have reason enough to be appalled and not amused. Ok, it’s laughable, some of the things homeopaths say, but in a gallows humor kind of way.

    Homeopaths and their ilk are people who like to play doctor, for people who want to play patient. They also attract people who don’t yet know how to know what they want. This latter group warrants a continued educational outreach.

  6. yeahsurewhatever says:

    Historically, “allopathy” does not refer to science or evidence based medicine. Flexner castigates allopathy in his 1910 report as much as homeopathy or osteopathy, as they are all predicated on “dogma”, and in his view they are superseded by medical practice and education that defers to science.

    When Hahnemann invented the term there was no scientific medicine for the most part. He used the term merely to differentiate homeopathy from mainstream non-homeopathy, from which modern medicine is honestly not a direct descendant. The only thing historical “allopathy” and modern medicine have in common is an unqualified endorsement of the germ theory of disease, unlike homeopathy which endorses a toxin theory of disease, and osteopathy which endorses a shake-and-jiggle theory of disease.

    Next point.

    Meta-analyses are flawed for several reasons. Their usage itself requires ideally that studies committed to the literature are a representative sample of reality. There are well-known biases in what gets published, for example the fact that negative outcomes are not published as often as positive ones, or that an English-language analysis will probably only cover English-language studies.

    A meta-analysis is only as good as the average quality of every study it includes. If you use good statistics on bad data you get bad results. It has no error-correcting mechanism, so each study must individually be without error. There is no objective way to standardize study quality. Qualitative factors are by definition not quantitative. And if you select studies specifically to be without error, attempting to avoid the above problem, you introduce selection bias instead, and risk begging your question.

    Attempting to use abstruse statistical sleight of hand (“meta-regression models” ?) to avoid such problems leads to an analysis that can’t be replicated even by an expert statistician without talking to the original study designer about what exactly they did and why. It is not straightforward, and using such methods at all simply introduces opportunities to be dishonest or disingenuous in subtle ways.

    In short, meta-analyses are not very great evidence of anything most of the time. If it isn’t a Cochrane review, odds are it means nothing, even if half the data was taken from Cochrane, as in this case. But even the best meta-analysis is not great evidence of anything. As a heuristic, the more you abstract data, the less it is worth in the real world. Abstraction is a process which inherently divorces data from reality.

    To say “that’s exactly the reason why the criteria for choosing trials to include in a meta-analysis are so important and need to be stringently decided upon prospectively, before the study is done” is an utter cop-out, since it implies that there’s an objectively correct way to do it, and hey guess what, there’s not.

    This is arguably the worst possible way to use a meta-analysis, since the homeopathy trials either directly show that homeopathy doesn’t work or else contaminate the analysis with bad data. In the first case the meta-analysis is pointless, and in the second it’s worthless. There’s also no way this (or any) analysis could identify or compensate for out-and-out fabrication of results in any study, or a concerted effort to fabricate results in many studies.

    Since it is in fact trivial to demonstrate in a clinical setting that homeopathic remedies are no more useful than any other form of candy, this meta-analysis must be the product of someone with too much time on their hands, well-intentioned but still unhelpful. Fine, homeopathy doesn’t follow the same characteristic bias-response curve compared to real science. No duh. The way this information is presented in the study is not likely to change anybody’s mind. It is weak. “The finding is compatible with the notion that the clinical effects of homeopathy are placebo effects” does not mean the same thing as “the finding is that the clinical effects of homeopathy are placebo effects”. Even referring to “placebo effects” at all is a poor choice of words, since there is in fact no “placebo effect” above and beyond *no* effect. The conclusion is likely to be misinterpreted by the public to give credence to placebo treatment, rather than to take away credence from homeopathic treatment.

    In every field of science there are those people who are too comfortable with mathematical abstraction, and tout it as an answer for everything. But mathematics is not science, and mathematical manipulation is not collecting evidence, and it’s not experimentation. This whole article merely describes two groups who are finding very technical ways to call each other names. It’s not meaningful above and beyond that. This is not a real controversy, and it’s not being addressed in such a way that the general public cares what this study might say. Alexander did not fight the Persians by sending troops to Antarctica. Know where the battle is actually being fought. It’s not in the journals.

    This article has the least merit of any I’ve read on this website so far. I question the motive of writing it. You’re not winning the hearts and minds of the average person this way. The person who doesn’t know what to believe.

  7. Joe says:

    As for fraud in homeopathy, I am told that Arthur Grollman (pharmacology, Stony Brook U., NY) told the BC skeptics society that 1/3 of commercial preps that he surveyed contained real drugs. That was in June 2007; apparently it is not formally published yet.

    According to my contact “More detail: the #1 undeclared active ingredient was caffeine. #2 was synephrine (sometimes misleadingly listed as ‘bitter orange’). After that, it was a smattering of analgesics like aspirin, ground up erectile dysfunction tablets …”

  8. @ yeahsurewhatever:

    I agree with almost everything you’ve written, including that the average person doesn’t know what to believe, and even including your objection to the term “placebo effect”–although that requires more discussion, because it’s not as simple as it may appear (one of these weeks…)

    I completely agree, and have for years,* that individual trials of homeopathy are unjustified (due to the prior probability being zero), and therefore that meta-analyses are equally unjustified–or even more so, for the reasons that you give. I agree with your statements about mathematical abstraction, and observe that they are similar to those made by the Bayesian Steven Goodman ( http://www.sciencebasedmedicine.org/?p=48 and http://www.sciencebasedmedicine.org/?p=55 ).

    I disagree that this post should not have been written, and in particular that it is the equivalent of Alexander sending his troops to Antarctica. One of the reasons that the average person doesn’t know what to believe about homeo is that academic medicine has been giving it all too much credence merely by deeming it worthy of investigation (Cochrane is among the worst: http://www.sciencebasedmedicine.org/?p=42 ). Another is that most mainstream physicians can only speak “EBM” (same reference). Thus showing that homeopathy is placebo even when viewed under the dim, scientifically-challenged light of EBM is important. Not directly important to the average person, maybe, but indirectly through such opinion-drivers as the NIH/medical school/newspaper/magazine/Internet complex or regular old doctors.

    Edzard Ernst was wrong for years to study homeo (and in that he contributed to the ruse), but he was a product of the EBM blind spot that still rules medicine. Now he may be the most effective voice against homeo, and he’s making the same argument that Dave made in this post, minus the prior probability part.

    @ stavros:

    I was partial to “potentized” isoproterenol, but I guess that’d be isopathy, not homeopathy. How about “potentized” siren-of-police-cruiser?

    *Atwood KC. Homeopathy and critical thinking. Sci Rev Alt Med. 2001;5:146–148

  9. Harriet Hall says:

    I have a rule of thumb about meta-analyses. If a meta-analysis shows that something doesn’t work, I believe it. If a meta-analysis says something does work, I reserve judgment.

  10. That’s not a bad rule of thumb. Thanks to publication bias and other biases, it’s easy for a meta-analysis to show a false positive, not so easy to show a false negative.

  11. “The finding is compatible with the notion that the clinical effects of homeopathy are placebo effects” does not mean the same thing as “the finding is that the clinical effects of homeopathy are placebo effects”.

    I’m not sure what you’re getting at here, as that is a distinction without a real, practical difference. In science, all we can ever find in a clinical study (or meta-analysis, for that matter) is results that are compatible with a hypothesis. In other words, it’s semantics and the way that cautious scientists couch their language in order to acknowledge uncertainty and not to be too definite. After all, we can never completely rule out other possibilities, only assign probabilities based on how incompatible the results are with other hypotheses.

    In fact, in another context, I wrote a rather prolonged discussion of why I’m very skeptical of meta-analyses in general. I’ll have to update it and post it here sometime.

  12. pec says:

    “homeopathy cannot work unless huge swaths of our current understanding of physics and chemistry are seriously in error. ”

    That is not true. There is no scientific reason for rejecting the possibility that water can store information.

  13. DanaUllman says:

    Homeopaths and skeptics of homeopathy should agree on one thing: the “science” behind the Shang paper was junk science. It is more than a tad ironic that people who normally consider themselves to be “defenders of conventional science and conventional medicine” are actually defending the questionable data of the Shang paper as well as the questionable ethics of the authors for their biased reporting and questions the integrity of the journal, The Lancet, for publishing this questionable report.

    The Ludtke/Rutten paper in the Journal of Clinical Epidemiology IS important, as is its companion paper in the October 2008 issue of Homeopathy (published by Elsevier) by Rutten/Stopler.

    Both papers show that Shang’s results (that the effects from homeopathic medicines are the same as that of a placebo) are “less definite” as they have been previously presented. The Lancet editors have even asserted this study now closes the case on homeopathy by publishing an editorial called “The End of Homeopathy.” However, these new analyses show how fatally flawed this study was and how embarrassingly biased its editors have been, as the below assertions prove.

    Shang omitted certain high quality studies in homeopathy (was it a coincidence that the vast majority of these omitted studies had a positive result?), how they defined what is “high quality” is open to question (initially, the authors didn’t even report which studies were defined as “high quality,” and today, there is no clarity on the point score for each study), their decision to never evaluate or compare all of the “high quality” studies (the authors assert that high quality randomized, double-blind and placebo controlled studies are actually “biased” unless they are over 98 subjects in homeopathic studies but magically conventional medical trials are only biased if they are under 146 subjects).

    Even the use of different criteria for the two different systems of medicine throws the comparison into question. In fact, Rutten and Stopler assert that choosing these different numbers was a decision made “post-hoc,” which ultimately questions the integrity of the science and the ethics of the authors.

    If Shang evaluated only those clinical trials that his own group defined as “high quality” (the 21 homeopathic trials and the 9 allopathic trials), there is a statistically significant difference between those patients given a homeopathic medicine and those given a placebo.

    The new re-analysis notes that 2 studies by Reilly, one by de Lange-de-Klerk, and one by Hofmeyr were not defined by Shang as “high quality” but they were defined as such in a major meta-analysis of homeopathic clinical trials conducted by Linde, et al which was published in the Lancet (1997) (in fact, the Reilly papers were published in the Lancet and the BMJ, and both of these journals published editorials that acknowledged the high quality nature of these studies). Three of the above mentioned four trials showed a positive effect towards homeopathic treatment.

    It was interesting to note that Shang excluded Wiesenauer’s chronic polyarthritis study (N=176) because no matching trial could be found (Linde, 1997, defined this study as “high quality”). And yet, because none of the trials (!) in the final evaluation matched each other in any way, omitting inclusion of this study was the result of bias from the authors.

    Also, three of the eight large and high quality conventional medical trials tested drugs that were deemed to be “effective” and yet, these medical treatments have been withdrawn from medical use due to the serious side effects that later research confirmed. I was also pleased that the Rutten/Stolper article made note of the fact that Shang acknowledged that their study disregarded adverse effects (how convenient).

    Four (!) of the 21 high quality homeopathic trials sought to evaluate the prevention or treatment of muscle soreness. These three of the four trials had negative results, and if all of these trials were omitted from the analysis, there was a highly significant difference between homeopathic treatment and the placebo (P<0.007). All of the conventional medical studies that evaluated this condition found negative results, though none (!) of these studies were deemed by Shang to be “high quality,” thus further skewing the results.

    The new re-analysis of the Shang review did not question the outcome data that was extracted from the clinical trials. However, the authors did note that Shang reported on a study of traumatic brain injuries by Chapman and reported only one outcome measure as “negative” even though this study reported that 2 of the 3 outcome measures were “positive”. Also, the Shang review surprisingly included a “weight-loss” study, and Shang extracted data from day 1, but day 2 had been defined as the main outcome parameter (whereas the day 1 results were “negative,” the day 2 results were “positive”). In any case, this study should not have been included in the analysis because it had never undergone a previous preliminary trial to deem it worthy of a larger clinical trial.

    In addition to all of the above serious concerns about the data report, let’s assume that the Shang paper was perfect. Although Shang’s paper asserts that the effect from homeopathic treatment is very small, even their skewed data show that the odds ratio (OR) from the 8 large and high quality homeopathic trials found an effect of OR = 0.88, which was the same as a meta-analysis of statin treatment and the occurrence of haemorrhagic stroke.

    The two new reanalyses of the Shang review of homeopathic research provide the old cliché, GIGO. Junk data indeed creates junk science which creates junk and meaningless results.

    Finally, a press release from the Lancet in 2005 when the Shang article was published quoted from one of its senior editors, Zoë Mullan, who acknowledged an inherent conflict on the part of the authors: “Professor Eggers stated at the outset that he expected to find that homeopathy had no effect other than that of placebo. His ‘conflict’ was therefore transparent. We saw this as sufficient.”

    It was ethically sound for Eggers and team to acknowledge their assumptions and prejudices prior to submitting the article, and therefore, it was the duty of the Lancet and its editors to hold the authors to a high standard and to confirm that these biases didn’t creep into their article. Here is where the Lancet failed good science and good journalism.

    The Shang team has a known history of skepticism against homeopathy. They were neither a good or reasonably objective source for this analysis.

    I still stand by my previous statement that the Shang paper has been blown out of the water…and I am not the only one above who now asserts this.

  14. daijiyobu says:

    Dr. G. wrote:

    “homeopathic remedies are [...] nothing but water and their efficacy only exists in the minds of homeopaths who are [...] masters of magical thinking.” Here, here.

    I’m a little confused [wink-wink!!!]:

    in that North American naturopathy — the FNPLA-NABNE-AANP-CAND kind — has mandatory clinical science exams in homeopathy

    (see http://www.bastyr.edu/bookstore/order/books.asp?item={9EBC35E3-2354-467C-9885-12CFC09BFA0D}&catid=10 , http://www.nabne.org/nabne_page_23.php ),

    all taught by schools that are regionally accredited, state- and province- sanctioned, USDE and Canadian such and such federally accredited

    that clearly state that homeopathy is hugely “science based”, as naturopathic education is — supposedly.

    I don’t see how entities labeled “universities” could get away with such deception.

    It doesn’t make sense [aka, they have a lot to answer for].

    -r.c.

  15. Stu says:

    There is no scientific reason for rejecting the possibility that water can store information.

    Of course not. There’s also no reason for rejecting that it contains pixie dust and quantum-sized fairies. Or that the world was created by the Great Green Arkleseizure.

  16. There’s also the issue of whether water can store information in the manner necessary for homeopathy to work, which is a different question than the general question of whether water can store information. Whatever “memory” water might have, it is far too short to be useful to transmit chemical information that can have a therapeutic effect–absent, of course, a whole lot of mumbo-jumbo and magical thinking needed to attribute mystical powers to properly succussed water.

  17. pmoran says:

    Ullman: “Four (!) of the 21 high quality homeopathic trials sought to evaluate the prevention or treatment of muscle soreness. These three of the four trials had negative results, and if all of these trials were omitted from the analysis, there was a highly significant difference between homeopathic treatment and the placebo (P<0.007). ”

    Right on! Have you considered what P values you could get if you omitted ALL inconvenient data? :-)

    What makes this kind of overview of the clinical evidence especially damning is that trials of homeopathy are generally performed in settings where at least some homeopaths think they are getting their best results.

    The resemblance of homeopathy to placebo is heightened by the lack of consistent demonstrable effects of homeopathic remedies on any biochemical or physical disease process, even after two hundred years of trying. You might expect such a supposedly potent and different form of medicine to perform in unusual, or even unique ways, not within the humdrum spectrum of possible placebo influences.

    Mr Ullman, why the difficulty in accepting that homeopathy is based upon placebo and all the other incidental influences of caring medical practice? These have sustained other unfounded medical theories, including many that mainstream medicine has embraced at one time or other. Surely you don’t find the science (?) of homeopathy probable.

  18. Acleron says:

    By Dana Ullman: “The Shang team has a known history of skepticism against homeopathy. They were neither a good or reasonably objective source for this analysis.”

    And why could you draw this conclusion, could it just be that they have analysed the data and it refutes your viewpoint? And why should we listen to someone who complains that properly conducted studies that show no difference between placebo and magic water are wrong because they are not individualised yet touts dubious studies that show an effect, such as the COPD study you run past unsuspecting audiences, as proof although they are similarly not individualised?

    Shang et al have shown that they are sufficiently well versed in biostatistics to have investigated any number of dubious medical claims. They do not rely on the refutation of homeopathy to make money. Can you claim a similar lack of conflicting interest?

  19. daijiyobu says:

    pmoran asked DU:

    “why the difficulty in accepting that homeopathy is based upon placebo and all the other incidental influences of caring medical practice?”

    another question to ask DU:

    “what kind [quality] of evidence would be necessary to dispel your faith in homeopathy?”

    health sectarians blithely ignore what refutes their a priori doctrine[s; moving the goal posts everywhere]

    – in the sense of ‘sectarian’ per Popular Science Monthly 1890,

    “for the sake of keeping within [defending, in the sense of apologetics] the dogmatic lines that fence round some particular creed” —

    and rarely if ever even blink at contrary-to-belief outcomes.

    -r.c.

  20. DanaUllman says:

    No one to date has responded in a substantative manner to the issues that I have raised above.

    It is so easy to “disprove” homeopathy when you purposefully ignore some studies, when you define “high quality” studies but don’t give the details of why certain studies are not defined as such, when you select the primary outcome measure that is different from the primary outcome measures of the study, when one defines a weight-loss study as a “high quality” study just because the researcher used large numbers of subjects (using a homeopathic drug that is rarely, if ever, used by professional homeopaths in the treatment of weight loss!), and when you choose to ignore the evidence that shows that conventional drugs “work” but kill the patient (but heck, the trick here is to only evaluate “results” in a limited time frame and define “results” only by the primary outcome measure and then specifically say that the evaluation of side effects is not the purpose of the study (hmmm, how convenient).

    Garbage in, garbage out…and you’d think that people at THIS site would provide critique of such junk science…as yeahsurewhatever has done (is he/she the honest one here?).

  21. _Arthur says:

    About the persistence of the “memory of water”; I notice that homeopathic remedies are stored for long periods of time, and still expected to work perfectly. Or the magic water is poured into lactose pills, and still expected to have the same magical effect.

  22. Garbage in, garbage out…and you’d think that people at THIS site would provide critique of such junk science

    Mr. Ullman appears to be admitting that studies of homeopathy are garbage. :-)

  23. pmoran says:

    “No one to date has responded in a substantative manner to the issues that I have raised above.”

    Post hoc manipulation of data is a no-no, and you should know that. The results of this study stand as another bit of negative evidence concerning homeopathy, unless you can show that the methods used to select studies were faulty at the planning stage, deliberately designed to introduce bias, or adjusted after the results were known (as you try to do).

    They are also in accord with what one would expect from any eye-balling of the at best very inconsistent record of homeopathy in clinical trials. They are also consistent with the results of other metanalyses, as you must well know.

  24. Harriet Hall says:

    Dr. Edzard Ernst has responded in a substantive manner to all the claims for homeopathy. A professor of complementary medicine who used to practice homeopathy and who has spent the last 15 years evaluating the scientific evidence for alternative treatments, he is familiar with all the published evidence for homeopathy. He says “With respect to homeopathy, the evidence points towards a bogus industry that offers patients nothing more than a fantasy.”

    The experiments that Mr. Ullman is most attached to, the basophil degranulation experiments, do not actually even tend to support homeopathy. They directly falsify two of homeopathy’s principles: that like treats like and that the greater the dilution the greater the effect. The apparent effects varied erratically with successive dilutions, and the dilutions produced the same effect as the original solution instead of the opposite effect.

  25. Mojo says:

    Harriet Hall wrote:

    “…the dilutions produced the same effect as the original solution instead of the opposite effect.”

    Actually, I don’t think that contradicts “like cures like”, at least as many current homoeopaths explain it. They often claim that the remedies stimulate the body’s “vital force” or what have you to fight the disease by inducing a similar response to the symptoms being exhibited. In any case, “provings” are invariably carried out using potentised remedies, generally at 30C as recommended by the prophet Hahnemann.

    It’s all still nonsense, of course, in view of the overwhelming evidence that the remedies don’t actually do anything. See for example: http://www.ncbi.nlm.nih.gov/pubmed/14651731

  26. DanaUllman says:

    Dr. Gorski…one would expect a moderator at THIS site to have a healthy scientific attitude and would try to maintain a healthy and intellectual dialogue. I urge you to try harder to maintain higher standards.

    My concern about GIGO in the Shang analysis was the garbage selection of choosing which homeopathic and which allopathic trials to include (and exclude), their black-box determination of the point score for the “quality” of each trial, and other problems with their “comparison” mentioned above.

    PMORAN mentioned that post hoc analysis is a NO-NO, and I fully agree. The Rutten/Stolper paper shows that the Shang paper seemed to use a post hoc analysis as evidenced by their selective determination of their definition of “large” clinical trials was different between homeopathic and allopathic trials.

    As for Harriett Hall’s statement about previous meta-analyses…below are some previous meta-analyses that have shown a positive result towards homeopathy:

    Vickers AJ, Smith C, Homoeopathic Oscillococcinum for preventing and treating influenza and influenza-like syndromes (Cochrane Review) The Cochrane Library, Issue 4, 2005. This review found “promising” results from four large placebo-controlled clinical trials in the treatment of influenza-like syndrome, though not in the prevention of this disease.

    J. Jacobs, WB Jonas, M Jimenez-Perez, D Crothers, Homeopathy for Childhood Diarrhea: Combined Results and Metaanalysis from Three Randomized, Controlled Clinical Trials, Pediatr Infect Dis J, 2003;22:229-34. This metaanalysis of 242 children showed a highly significant result in the duration of childhood diarrhea (P=0.008).

    WB Jonas, RL Anderson, CC Crawford, et al., “A Systematic Review of the Quality of Homeopathic Clinical Trials, BMC Complementary and Alternative Medicine 2001;1:12.
    59 studies met the authors’ criteria, 79% of which were from peer-review journals. When a homeopathic medicine was compared with a conventional drug, the probability of a positive outcome was significantly higher than when a placebo control was used (p<.0001). These studies were compared with a random sample of articles from JAMA and NEJM.

    K. Linde, N. Clausius, G. Ramirez, et al., “Are the Clinical Effects of Homoeopathy Placebo Effects? A Meta-analysis of Placebo-Controlled Trials,” Lancet, September 20, 1997, 350:834-843. Even critics have called this meta-analysis “completely state of the art.” It reviews 186 studies, 89 of which fit pre-defined criteria for its meta-analysis. Homeopathic medicines had a 2.45 times greater effect than placebo.

    J. Kleijnen, P. Knipschild, G. ter Riet, “Clinical Trials of Homoeopathy,” British Medical Journal, February 9, 1991, 302:316-323. This is the best objective meta-analysis of clinical research prior to 1991. This meta-analysis reviewed 107 studies, 81 of which showed efficacy of homeopathic medicines. Of the best 22 studies, 15 showed efficacy.

    M. Wiesenauer, R. Ludtke, “A Meta-analysis of the Homeopathic Treatment of Pollinosis with Galphimia glauca,” Forsch Komplementarmed., 3(1996):230-234. This is a meta-analysis of seven randomized, double-blind placebo-controlled trials and four non-placebo controlled trials, representing a total of 1,038 patients. These studies found that patients given homeopathic doses of Galphimia glauca for hayfever experienced 1.25 times greater improvement in eye symptoms when compared with those given a placebo. This success rate is comparable with the success rate experienced with antihistamines, but the homeopathic medicine has no known side effects.

    J. Barnes, K.L. Resch, E. Ernst, “Homeopathy for Post-Operative Ileus: A Meta-Analysis,” Journal of Clinical Gastroenterology, 1997, 25: 628-633. (This meta-analysis found statistical significance, p<.05, in favor of homeopathy for the time to first flatus for patients with post-operative ileus.)

    As for the basophil studies, please see:
    Belon P, Cumps J, Ennis M, Mannaioni PF, Roberfroid M, Ste-Laudy J, Wiegant FAC. Histamine dilutions modulate basophil activity. Inflamm Res 2004; 53:181-8. Four independent laboratories, each associated with a university, conducted a series of experiments using dilutions of histamine beyond Avogadro’s number (the 15th through 19th centesimal dilution, that is, 10^-15 to 10^-19). The researchers found inhibitory effects of histamine dilutions on basophil degranulation triggered by anti-IgE. A total of 3,674 data points were collected from the four laboratories. The overall effects were highly significant (p<0.0001). The test solutions were made in independent laboratories, the participants were blinded to the content of the test solutions, and the data analysis was performed by a biostatistician who was not involved in any other part of the trial.

    Once again, I urge people on this list to try to maintain a scientific attitude towards homeopathy. There is a lot more research than people realize.

  27. Harriet Hall says:

    Mr. Ullman has not responded to my points.

    Yes, anyone can find positive meta-analyses for homeopathy. One can find a lot more negative ones. The books Snake Oil Science and Trick or Treatment explain why science doesn’t accept the kind of evidence Ullman and other homeopaths offer. The book Homeopathy: How it Really Works, by Jay Shelton, is equally revealing: it concludes that homeopathy “works” but its effects have nothing to do with the remedies. Placebos “work.” Ex-homeopaths like Ernst and Betz have stopped believing in homeopathy for very compelling reasons.

    The basophil experiments are fatally flawed and could not be reproduced under proper blinding conditions with independent observers. They tend to disprove basic homeopathic tenets; the fact that advocates cite them as proof of homeopathy is evidence of their belief-induced blindness.

  28. DanaUllman says:

    Harriet, In due respect, the Ennis trials were replicated at four university laboratories…with substantially significant results. What is also true is that a 5th lab separately sought to replicate it and was unable to do so. How or why you would say that Ennis’ experiments were “fatally flawed” is simply evidence of your personal disbelief in homeopathy than any rational or scientific evidence.

    I personally believe that skeptics of homeopathy are much more metaphysical than I am. They believe that a placebo can successfully treat cats, dogs, horses, pigs, cows, and the hundreds of other animals who are regularly treated by homeopaths and by veterinarians…

    It is ironic that Ernst disbelieves in homeopathy but his wife doesn’t! You gotta love that one…

    Speaking of Ernst, did you read his one and only randomized double-blind and placebo controlled trial in the treatment of varicose veins? Yeah…it tested a homeopathic medicine and it showed benefit from the homeopathic medicine. But I guess Ernst doesn’t believe his own science.

  29. Harriet Hall says:

    There were many reasons I called the experiments fatally flawed. They are unnecessarily complicated and obscure what is actually being tested. They depend on an observer’s judgment of which basophils look degranulated rather than on a more objective endpoint like histamine concentration. Why didn’t they set up a simpler experiment where a measured quantity of allergen causes a measurable release of histamine, then show that pre-treating the cells with a homeopathically dilute solution of the allergen measurably reduces histamine release? Or do a low-tech human study of patients with hay fever and measure eosinophils in nasal secretions with and without a homeopathic remedy made by diluting the pollen? All that requires is having patients sneeze onto Saran wrap and counting the eosinophils under the microscope. I can think of any number of more straightforward experiments. The basophil/antiIgG/histamine/degranulation/inhibition/feedback setup is almost too complex to keep track of what’s supposed to happen. It still confuses me, but it sounds like a homeopathically dilute solution is having an effect similar to an undiluted preparation. If that’s so, it directly contradicts homeopathic theory.

    And you have not explained how the varying strength of effect with consecutive dilutions fits with homeopathic theory that more dilute solutions are more effective. When the effects rise and fall with subsequent dilutions, how would a homeopath ever be able to choose the correct dilution?

    Yes, the experiment was replicated at different universities by believers, but it lost all credibility when attempts at replication under independent observation failed. Both in Benveniste’s lab and in the Horizon TV trial, when outside observers enforced proper blinding procedures, the replication failed. Even with the stimulus of a million dollar prize, no one has succeeded in replicating this study under careful observation. It seems abundantly obvious that something went wrong in the lab; in fact it is practically certain that in one series of experiments only one technician was able to get positive results and that she was not properly blinded.

    I’m not going to bite on the placebo/animals question. Vets have explained that to the satisfaction of most of us.

    As for Ernst not believing his own science, good scientists don’t believe their own science. They test it incessantly, trying to falsify it and looking for flaws. Ernst thought he saw evidence for homeopathy, but then he saw much more evidence against it and he was able to rise above his own experience and beliefs and look at the totality of evidence objectively. He was able to change his mind based on the evidence. Not everyone can do that.

    Mr. Ullman, we can tell you what it would take to convince science that homeopathy works; what would it take to convince you that it doesn’t? Unless you can answer that question convincingly, we are wasting our time discussing this any further with you.

  30. Mojo says:

    “K. Linde, N. Clausius, G. Ramirez, et al., “Are the Clinical Effects of Homoeopathy Placebo Effects? A Meta-analysis of Placebo-Controlled Trials,” Lancet, September 20, 1997, 350:834-843. Even critics have called this meta-analysis “completely state of the art.””

    Although in the light of their 1999 analysis of the same data it appears that Linde et al don’t entirely share this opinion.

  31. Mojo says:

    “They believe that a placebo can successfully treat cats, dogs, horses, pigs, cows, and the hundreds of other animals…”

    How many of these animals have reported an improvement in their condition?

  32. mckenzievmd says:

    Speaking as a veterinarian, I’ll address the placebo-in-animals question. Most of the evaluation of symptoms for my patients comes from the owner’s subjective assessment or my own. If it is not clearly and objectively measurable as an outcome, then it is susceptible to placebo-by-proxy effects equivalent to the effects seen in humans for subjective outcome measures.

    In addition, classical conditioning effects certainly can influence even objective outcome measures in animals irrespective of the real physiological effects, if any, of the treatment itself. Blood pressure, blood glucose levels, gastric acid secretion, and many other physiological parameters have been shown to respond to classical conditioning to the point where placebo treatment can generate a measurable response.

    The evidence is lower in volume and quality for veterinary use of homeopathy than is the case in human medicine, but as usual the best quality studies show no significant benefit over placebo. Veterinary medicine does NOT provide any sound evidence against the claim that homeopathy in humans is purely a placebo, and Mr. Ullman’s suggestion that it does is a misrepresentation.

    For further reading, you can start with:
    Ramey, D., Rollin, B., Complementary and Alternative Veterinary Medicine Considered, Iowa State Press, 2004

  33. DanaUllman says:

    Harriet, Excuse me! But referring to Ennis’ experiments as studies by “believers” is not accurate. You either chose to state a purposeful lie or you are misinformed (I sense and hope it is the latter…but such is the problem when you only read literature from fundamentalists who have axes to grind).

    Furthermore, the follow-up trial that had a “negative” result was conducted by “believers” (I encourage you all to review the research conducted by the lead researcher of this negative trial, Stephen Baumgartner, who has conducted an interesting body of research on plants). But heck, plants respond to placebo too, don’t they?

    And I cannot help but notice that no one has responded to most of my concerns about the Shang “study”. Waiting for Godot here.

    As for Linde and Jonas, they wrote a stinging critique of the Shang paper, and they stand by their original work that shows that a placebo response is an inadequate explanation for the effects of homeopathic medicines. Their 1999 “letter” simply said that based on newer studies, the significance of their 1997 meta-analysis was reduced but NOT negated. But heck, Mojo, you only believe what you want to believe (the sign of bad scientific thinking).

    As for what it’ll take me to disbelieve in homeopathy…first, good science must have both internal and external validity. Virtually ALL of the LARGE clinical trials that were used by Shang in his final analysis had no external validity (how convenient!). That weight-loss study was a classic.

    Perhaps someone can also explain to me one fact of medical history: The primary reason that homeopathy became popular in the 19th century was the remarkable results that homeopathic physicians experienced in treating people suffering from the infectious disease epidemics that raged at the time. Epidemics of cholera, scarlet fever, typhoid, and yellow fever were rampant and killed large numbers of people who became ill with them. And yet, death rates in homeopathic hospitals were commonly one-half or even one-eighth of the death rates in the conventional medical hospitals.

    Skeptics of homeopathy ARE much more metaphysical than I am and believe that placebo treatment is effective enough to account for impressive statistical differences.

  34. TsuDhoNimh says:

    To answer Dana, I’m just going to blatantly pimp my own article on the topic:
    http://www.associatedcontent.com/article/1096182/beginners_guide_to_homeopathy_and_homeopathic.html?cat=5

    In an age when physicians prescribed near-fatal doses of lead, arsenic and mercury compounds, Hahnemann prescribed what was (and still is) water. Instead of giving his patients strong laxatives to purge them of the supposed imbalances of the humours that were causing their illness, and instead of bleeding off pints of blood, he insisted on bed rest, a light nourishing diet and plenty of liquids. He was providing what is now known as “supportive care” and unlike other physicians of his day, was not killing them with poisons, dehydration, and blood loss. Compared to the standard treatments of his day, homeopathy worked miracles.

    Medicine has changed, Dana, and homeopathy has not changed with it. It’s forced to invent reasons why it works, in the face of evidence that it doesn’t work.

    Would you be willing to trust your life to a homeopathic therapy for yellow fever, cholera or even typhoid? How about rabies? Rattlesnake bites?

  35. David Gorski says:

    Compared to the standard treatments of his day, homeopathy worked miracles.

    Exactly. “Allopathy” in the early 1800s was brutal and non-science-based. Laxatives, purgatives, treatment with toxic heavy metals like cadmium, arsenic, and, yes, mercury, as well as bleeding were the order of the day. By comparison, an inert treatment like water could easily produce better results.

    Things have changed, though, in 200 years. Homeopathy is still water, but we now have effective treatments for many illnesses.

  36. Harriet Hall says:

    Notice how Mr. Ullman ignores everything I said about the basophil experiments except the word “believers.”

    If he believes the experiments were valid, he must accept that they falsify homeopathic principles. If he believes they were not valid, he ought to stop citing them.

    I can’t believe that any intelligent, informed person in 2008 is still using death rates in 19th century hospitals as evidence that homeopathy works. It is disingenuous of Ullman to pretend that someone needs to explain it to him. I KNOW this has been explained to him many, many times.

    I’m willing to engage in a rational scientific discussion of the quality of evidence of homeopathy experiments, but if all Ullman wants to do is use rhetorical tricks to propagandize his belief system I don’t care to humor him.

  37. TsuDhoNimh says:

    If the species of the snake is known the following medicines can be used:
    * Cobra: Ammonia Carb 1M, Acid Hydrocyanic 30C or higher
    * Rattle Snake: Crotalus Hor 30C, 1M, Plantago Q, 1M
    * Viper: Camphor Q

    Hmmm, my neighbor is a herpetologist, if Dana wants to put his butt on the line for his beliefs.

  38. pmoran says:

    Dana, what is the significance of this comment from Linde et al’s original meta-analysis?

    “However, we found insufficient evidence from these studies that homeopathy is clearly efficacious for any single clinical condition. ”

    I think you are so desperate to vindicate homeopathy as to be selective and somewhat deceptive in what you choose to put forward. You also clutch at straws of evidence that leave homeopathic effects beyond placebo well within the range of known experimental error, artefact, and interpretation errors such as those that can apply to meta-analysis, an analytical tool that was designed for totally different purposes.

    As an example of your bias, you refer to some dubious and inconsistent in vitro work without mentioning the fact that Benveniste himself was unable to get consistent results from such studies, despite many desperate years of trying and even designing automated machines to try and get more consistent results. I suspect every surface in these laboratories can become layered with various biologically active chemicals, sufficient to wreak havoc with the sensitive and unstable biological systems used.

    Before he died, Benveniste even cautioned another group dabbling in homeopathy research, in 2003: “This is interesting work, but Rey’s experiments were not blinded and although he says the work is reproducible, he doesn’t say how many experiments he did,” he says. “As I know to my cost, this is such a controversial field, it is mandatory to be as foolproof as possible.” Hardly the words of someone who has confidence that homeopathic effects can be demonstrated in the laboratory.


  39. DanaUllman says:

    Harriet, I cannot answer for researchers on why they did or didn’t do something in their basophil trials, but the point of these trials was not to confirm the homeopathic principle of similars but to simply show that homeopathic doses have a biological effect that IS different than water…and the Ennis trials showed this.

    A group of researchers at the University of Glasgow conducted four randomized double-blind clinical trials on various atopic conditions, two of which were published in the Lancet and one in the BMJ. Editorials expressed in each of these issues acknowledged the high quality nature of these trials, and each trial found substantial significance.

    And yet, conveniently enough, Shang didn’t define their trials as “high quality,” even though several other reviews (and the editors of the Lancet and BMJ) have asserted otherwise.

    I will tell you why these studies were not included…they would have changed the result of the Shang analysis from negative to positive. Yeah…it is that simple.

    A semi-replication of this trial was conducted by Lewith, and although there were significant differences in this trial, and even though the primary outcome was not significant, the authors note that in a review of the symptoms of the control and treatment groups, there WERE differences.

    My question to you is: Why do you ignore the high quality data?

    As for 19th century epidemics…conventional docs have always (!) asserted that their therapies were not scientific in the past but they are scientific now. The problem is that the time frame of “now” has been every day for the past 150 years.

    Do people out there really believe that 40 years from now we will still call today’s medicine “scientific”? If so, what drugs are in regular use today that were in use in 1968?

  40. David Gorski says:

    Do people out there really believe that 40 years from now we will still call today’s medicine “scientific”?

    Yes.

    Do we call the medicine of 1968 unscientific? No. Usually we do not. Its scientific basis may not have been as systematized as the evidence-based medicine movement has become, but drug development and most treatment developments were based on science.

    If so, what drugs are in regular use today that were in use in 1968?

    Man, you’re ignorant about some very basic things. Here are some that I can think of right off the top of my head. There are many more that the other docs here can probably remind me of. A lot of these are antibiotics and cancer chemotherapeutic agents, but that’s just because that’s what I know better:

    5-fluorouracil
    methotrexate
    aspirin
    codeine
    papaverine
    naloxone
    digoxin
    doxorubicin
    vincristine
    penicillin
    ampicillin
    many other penicillin-type antibiotics
    first generation cephalosporin antibiotics
    erythromycin
    tetracycline
    doxycycline
    spironolactone (not 100% sure if this was in use in 1968)
    prednisone
    hydrocortisone
    cisplatinum
    cyclophosphamide
    propranolol
    morphine
    fentanyl
    demerol
    valium
    lidocaine
    bupivacaine
    halothane
    ibuprofen

    Oh, I give up. I’m tired. I could list quite a few more, but you get the idea. There are lots and lots of drugs that were in use in 1968 that are still in use today. Science-based medicine does not abandon a useful drug unless and until a better drug comes along. Heck, it still finds aspirin useful and has found lots of new uses for it other than relieving pain and fever. Moreover, such drugs are off-patent and therefore cheaper than newer drugs.

    As for “unscientific then but not now,” we’re not talking about the 1960s. We’re talking about the 1800s. Science- and evidence-based medicine didn’t really begin to take hold until the 1900s, although its origins go back to James Lind, a ship’s surgeon in the British Royal Navy, who did the original study of how fresh citrus fruits could ward off scurvy, and even earlier. However, it took a long time for medicine to become really scientific because the basis wasn’t there yet. Heck, well into the 1800s doctors were still referring to “imbalances of humors.” It took the scientific breakthroughs of the latter half of the 19th century, such as germ theory, to lay the groundwork of scientific understanding necessary for science-based medicine to really start to develop. Homeopathy, in marked contrast, changed little, if at all, in response to the scientific revolution in medicine of the late 1800s and early 1900s.

    But why am I wasting my time with you? You’ve been told the same things over and over again, and you keep regurgitating the same canards. After you’ve been beat up (metaphorically speaking) on one blog, you go to another with the same bogus arguments. When you get beat up there, you move on. And so on.

  41. Harriet Hall says:

    the point of these trials was not to confirm the homeopathic principle of similars but to simply show that homeopathic doses have a biological effect that IS different than water…and the Ennis trials showed this.

    IF they showed anything, it was that homeopathic doses have a biological effect that is inconsistent with homeopathy. But they did NOT show anything because they could not be replicated with outside observers making sure proper blinding procedures were followed.

    A semi-replication of this trial was conducted by Lewith, and although there were significant differences in this trial, and even though the primary outcome was not significant, the authors note that in a review of the symptoms of the control and treatment groups, there WERE differences.

    You have GOT to be kidding!! This is too ridiculous to even deserve a comment.

    Your comments about scientific medicine and which drugs were used in 1968 are either a deliberate attempt to distract us with rhetoric or they show an appalling misunderstanding of the scientific process. Besides which, you are two-faced in trying to use science to validate homeopathy and simultaneously trying to discredit science when it pertains to conventional medicine. The readers of this blog can see right through you. Again, no comment is needed.

  42. TsuDhoNimh says:

    Dana –
    You didn’t answer my question: If you had typhoid fever, malaria, rabies or cholera, would you use nothing but a homeopathic remedy and the supportive care available in the 1900s? Specifically, no O2, no antibiotics, no IV fluids (you can have clysters). If they were effective then, would you be willing to bet your life on it today?

    Starting in the 1800s, when quinine and morphine were isolated, and when the germ theory of disease was elucidated, medicine shifted from the old-style to the scientific method. It didn’t happen overnight, but the medicine of my dad’s pre-antibiotics era was scientific within the limits of what they knew.

    Here’s what scientific medicine does: it will discard an idea that doesn’t hold up in testing.

  43. overshoot says:

    Pec:

    There is no scientific reason for rejecting the possibility that water can store information.

    Of course not. Whales and other marine life have been taking advantage of water storing information for millions of years. More recently the world’s navies have done likewise.

    That doesn’t even go near the question of ice sculpture, which can store quite a bit of information.

    However, as a fallen physicist (read: engineer) I do have to ask about the usual issues in information storage such as: How much energy does water need to store one bit of information? How many bits does it take to encode the difference between camphor and nat mur? Is there some sort of error-correction function, and how much power does it use? How often does it scrub errors?
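
    For a sense of scale on those questions, here is a rough back-of-the-envelope sketch using the Landauer bound (the textbook minimum energy to erase one bit) and the thermal energy of water at room temperature; the hydrogen-bond figure is an assumed order-of-magnitude value, not a measurement from any of the papers discussed here:

```python
import math

# Rough thermodynamic scales relevant to "storing a bit" in liquid water.
# Textbook constants and the standard Landauer-bound formula, used only
# to put numbers on the questions above.

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 298.0            # room temperature, K

landauer_per_bit = k_B * T * math.log(2)   # minimum energy to erase one bit
thermal_kT = k_B * T                       # typical thermal fluctuation energy

# A hydrogen bond in liquid water is of order 20 kJ/mol (assumed midpoint),
# i.e. only a few kT, and such bonds break and reform on picosecond timescales.
h_bond = 20e3 / 6.022e23                   # joules per bond

print(f"Landauer bound per bit: {landauer_per_bit:.2e} J")
print(f"Thermal energy kT:      {thermal_kT:.2e} J")
print(f"Hydrogen bond energy:   {h_bond:.2e} J  (~{h_bond / thermal_kT:.0f} kT)")
```

    Any putative “bit” written into hydrogen-bond structure therefore sits within a few kT of thermal noise, which reshuffles the network on picosecond timescales, which is why the error-correction question is not a joke.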

  44. HCN says:

    Tsu Dho Nimh said “Dana -
    You didn’t answer my question: If you had typhoid fever, malaria, rabies or cholera, would you use nothing but a homeopathic remedy and the supportive care available in the 1900s?”

    I suspect he would. I tried to ask him for evidence that homeopathy works better for rabies than conventional treatment (because Andre Saine claims it does), and I got this:
    http://www.sciencebasedmedicine.org/?p=35#comment-968 … “Andre Saine’s forthcoming book, The Weight of Evidence, will probably provide us all with this evidence about rabies. ”

    Oh, and continue reading the thread, it is quite amusing.

  45. “A group of researchers at the University of Glasgow conducted four randomized double-blind clinical trials on various atopic conditions, two of which were published in the Lancet and one in the BMJ. Editorials expressed in each of these issues acknowledged the high quality nature of these trials, and each trial found substantial significance.”

    We’ve come full circle here on SBM. Those four trials were done by homeopath David Reilly, whose words introduced my first posting: http://www.sciencebasedmedicine.org/?p=11

    As I wrote then, they were “small studies of homeopathic treatments of hay fever, asthma, and allergic rhinitis, the outcomes of which [were] inconsistent and largely subjective.” A commentary (editorial) by Andrew Vickers, accompanying the final paper, agreed:

    “Are they correct to argue that they have reinforced the evidence that homoeopathy is more than a placebo? The current trial is the fourth in which this group evaluated a similar treatment, comparator, patient group, and outcome measure. As with the previous studies, the primary outcome used to calculate the sample size was a visual analogue score measuring patients’ perceived improvement in symptoms. In contrast to the earlier studies, they detected no effect of homoeopathic treatment on the visual analogue score. These data do not strengthen the conclusion that homoeopathy differs from placebo. In fact, the effect of including the current study in their meta-analysis with data from the three earlier trials is to weaken (though not overturn) this conclusion.” (see: http://www.ncbi.nlm.nih.gov/entrez/utils/fref.fcgi?PrId=3494&itool=AbstractPlus-nondef&uid=10948025&db=pubmed&url=http://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pubmed&pubmedid=10948025 )

    Vickers subsequently asserted that “Clinical trials are particularly important in homoeopathy as they are nearly the only evidence that treatment can have effects different from placebo,” and concluded that “larger trials are needed.” He was exactly wrong about that, of course; he suffers from the EBM “trials trump science” fallacy, as does anyone who is duped by the false dichotomy that Reilly presented when he said “Either homeopathy works or controlled trials don’t!” The infinitesimal prior probability predicts exactly the equivocal results that have been the norm in homeopathy trials for decades. Thus the jury is in, and homeopathic preparations have been found guilty of being exactly what science predicts: nothing.

    No more trials are needed. To borrow a phrase, “it’s time to close the books on homeopathy.”
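
    To put a number on the prior-probability point, here is a toy Bayes calculation; all three inputs are invented for illustration, not values anyone in this thread has proposed:

```python
# Toy illustration of why an "infinitesimal prior probability" makes an
# isolated "statistically significant" trial uninformative. All numbers
# are invented for illustration.

prior = 1e-9    # assumed prior probability that 30C remedies have specific effects
power = 0.8     # assumed P(positive trial | real effect)
alpha = 0.05    # P(positive trial | no effect), the usual false-positive rate

# Bayes' theorem for a single positive trial result:
posterior = (power * prior) / (power * prior + alpha * (1 - prior))
print(f"Posterior probability after one positive trial: {posterior:.1e}")
# ~1.6e-08: still vanishingly small, so equivocal trial results change nothing.
```

    With a prior that low, even a handful of conventionally “significant” results leaves the posterior tiny, which is the sense in which trials cannot trump basic science.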

  46. Mojo says:

    Dana wrote, “As for Linde and Jonas, they wrote a stinging critique of the Shang paper,”

    They said something about their 1997 paper in the same letter, didn’t they. Now what was it? Ah yes, I remember: “Our 1997 meta-analysis has unfortunately been misused by homoeopaths as evidence that their therapy is proven.”

    “…and they stand by their original work that shows that a placebo response is an inadequate explanation for the effects of homeopathic medicines. Their 1999 “letter” simply said that based on newer studies, the significance of their 1997 meta-analysis was reduced but NOT negated.”

    By the 1999 “letter” I assume you mean Linde, Scholz, Ramirez, Clausius, Melchart and Jonas: “Impact of Study Quality on Outcome of Placebo-Controlled Trials of Homeopathy”, J Clin Epidemiol vol 52 No 7, pp 631-636 (that’s certainly the one I was referring to).

    The 1999 paper didn’t actually include any new research in its analysis: it was a reanalysis of the same data as the 1997 paper. What they actually said (p. 635) was that “the evidence of bias” found in their reanalysis of the same studies covered by their 1997 paper “weakens the findings of our original analysis”. They then went on to say in addition that “since we completed our literature search in 1995” new research had been published, and that a number of the “high-quality” trials have negative outcomes. This, together with their 1998 paper on classical homoeopathy, seemed to “confirm the finding that more rigorous trials have less promising results”, and they suggest that their 1997 paper “at least overestimated the effects of homeopathic treatment”.

    Not exactly promising for homoeopathy.

  47. shpalman says:

    Can I just pimp my blog with respect to your mention of Lionel Milgrom? http://shpalman.livejournal.com/tag/lionel+milgrom

    Thanks.

  48. Badly Shaved Monkey says:

    “As for what it’ll take me to disbelieve in homeopathy…first, good science must have both internal and external validity. Virtually ALL of the LARGE clinical trials that were used by Shang in his final analysis had no external validity (how convenient!).”

    Pray tell, Mr Ullman, what was the “external validity” of Ennis & Co’s strangely fragile basophil experiments?

    Do you find that the rapidity with which you shift your goalposts gives you whiplash? I certainly find their rapid and repeated oscillation nauseating.

  49. Badly Shaved Monkey says:

    Mojo,

    “They said something about their 1997 paper in the same letter, didn’t they. Now what was it? Ah yes, I remember: “Our 1997 meta-analysis has unfortunately been misused by homoeopaths as evidence that their therapy is proven.””

    It is typically around about now that DU bails out: once his misrepresentations have been pulled together in one concise post.

    Which is a pity, because I’d like to hear him sing me a song about how in vitro models, which seem to have been carefully selected for their susceptibility to experimenter bias and error, have anything to tell us about the effect of sugar pills in sick patients.

  50. Harriet Hall says:

    If you want to laugh hysterically, read the 50 facts about homeopathy at http://www.naturalnews.com/024512.html

    For instance: “Fact 20 – Homeopaths treat genetic illness, tracing its origins to 6 main genetic causes: Tuberculosis, Syphilis, Gonorrhoea, Psora (scabies), Cancer, Leprosy.”

  51. Wow. As of 15 Oct 2008 at 5:24 pm, Dana Ullman has been pwned. Then pwned moar.

  52. DanaUllman says:

    Linde is correct that his meta-analysis does not “prove” homeopathy, but that is like saying that a single meta-analysis of conventional medicine could not “prove” all of conventional medicine.

    Linde and Jonas were strongly critical of the Shang trials, and to date, no one has responded to the concerns I expressed above about the Shang “review.” Shang conveniently left out some studies, didn’t define the Reilly trials as “high quality” despite editorials in the Lancet and the BMJ asserting otherwise, left out a large polyarthritis study because it could not be “matched” (even though NONE of the final studies were matched in any way), and on and on.

    Some research after Linde’s 1997 paper may have reduced the significance of their conclusions but there is no solid statistical evidence that the significance disappeared.

    As for genetic illness…homeopaths were among the earliest physicians to acknowledge genetic influences, and when one gets an infection, such as TB or syphilis, for instance, the genes that this person passes on ARE influenced by this infection. The concept of “miasms” in homeopathy is complex and worthy of further study.

    Have you seen this new study? Although it is not DBPC, it is interesting to note the difference in patients with chronic disease who were satisfied with the results of homeopathic vs. conventional treatment after 3 months. There are other results from this study that are worthy of notice too.
    http://www.biomedcentral.com/content/pdf/1472-6882-8-52.pdf

  53. pmoran says:

    “Some research after Linde’s 1997 paper may have reduced the significance of their conclusions but there is no solid statistical evidence that the significance disappeared.”

    But what does trivial statistical significance mean in medicine?

    If replicated by other studies it may well be enough to get a pharmaceutical onto the market, but that doesn’t assure an enduring place in medical practice, nor is it sufficient on its own to establish a scientific principle.

    The statistics are no more than a probabilistic rule of thumb enabling doctors to select (probable) best treatment methods. But mistakes occur regularly, through known biases including conscious and unconscious deceptions, publication bias, and even sheer chance.

    The conventional system works, over the long term, but only through an intimate integration of medicine with basic science and technology, and also through the sheer volume of clinical research performed. The former allows imputed clinical effects to be correlated with deeper physiological or biochemical activity, and vice versa on occasions. The latter means that ineffective or less effective treatments are regularly being discarded.

    The Linde results are meaningful to you because you seek vindication for something that you are very heavily invested in. We can look at the very same data and think “this treatment doesn’t seem to work very powerfully or consistently, the positive results are consistent with known biases, and the foundational hypotheses still lack support (to put it ridiculously mildly) — let’s move on to more promising fields.”

    It really is time to move on. However, I personally believe a case can be made for tolerating the regulated use of homeopathy wherever there is a strong tradition of it. As you say, it provides safe treatments for the innumerable minor and self-limiting conditions that the public seeks treatment for. There will be instances where it is safer and more cost-effective for a patient to try a homeopathic remedy rather than see a doctor and run the risks of over-investigation and over-treatment. It will be safer than many other “alternative” methods.

    There is little risk to this. Most of the public already knows that homeopathy is not a trustworthy stand-alone treatment for any serious condition.

  54. Badly Shaved Monkey says:

    I knew I heard a bell ringing somewhere with DU’s attempt inappropriately to stretch a definition of “high quality” from one context to another. Prior to being banned as an editor at Wikipedia he got into an awful tizz over this in relation to some other papers.

    Fortunately, the interweb doesn’t quickly forget these things;

    http://en.wikipedia.org/wiki/Wikipedia:Reliable_sources/Noticeboard/Archive_8

    “It appears, even from Dana’s comments, that “high quality” is not used by Linde in reference to Cazin. The “abstract” defines the quality rating “QE > 50” as being “high quality”, but that characterization doesn’t appear in the paper. Furthermore, although this is a weak use of “WP:SYNTH”, we are using the following facts, all from Linde:
    “QE > 50” is defined as “high quality”.
    Linde states that he uses papers which satify “QE > 50” and have no “serious methodological flaws”.
    Linde uses Cazin.
    to construct the statement that Linde considered Cazin “high quality”, providing evidence that Cazin is an WP:RS. I think the chain of reference is too weak, and, even if Linde considered Cazin “high quality”, it doesn’t necessarily make Cazin a WP:RS. — Arthur Rubin”

    As I recall, it appeared that Dana was stubbornly insisting on the inclusion of studies that he wanted to call “high quality” where they were neither blinded nor randomised. Homeopaths do seem to have a very special definition of quality so that “utterly useless” forms a valid subset of “high quality”.

    I see he now wants us to consider a customer satisfaction survey. For the sake of clarity, Dana, such surveys are “utterly useless” and not “high quality”.

  55. wilsontown says:

    Dana wrote: “PMORAN mentioned that post hoc analysis is a NO-NO, and I fully agree. The Rutten/Stolper paper shows that the Shang paper seemed to use a post hoc analysis as evidenced by their selective determination of their definition of “large” clinical trials was different between homeopathic and allopathic trials.”

    This is amusing. So Rutten and Stolper have managed to get a paper published that clearly demonstrates they didn’t carefully read the paper they are criticising! Well done to everyone involved in that shambles.

    The number of subjects in the trials deemed to be “larger” is different between the conventional medicine and homeopathy groups, because Shang et al. didn’t pick specific numbers. Their criterion for larger trials was, and I quote, “trials with SE in the lowest quartile”. So it is no surprise the numbers are different between the two groups. This is a sensible thing to do when you don’t want to set your criteria post-hoc, because you can’t know what the size of the trials you are studying is until you do the analysis. If you just picked a number, you could find that you don’t have any ‘larger’ trials to analyse.
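
    To see why a relative cutoff necessarily lands on different absolute trial sizes in the two groups, here is a toy sketch with made-up standard errors (not data from Shang et al.):

```python
import statistics

# Illustrative (made-up) standard errors for two groups of trials.
# "Larger" trials are defined relatively, as those with SE in the lowest
# quartile of their own group, so the implied size cutoff differs by group.

homeopathy_se   = [0.45, 0.38, 0.33, 0.29, 0.25, 0.22, 0.18, 0.15]
conventional_se = [0.40, 0.30, 0.24, 0.20, 0.16, 0.12, 0.09, 0.07]

def lowest_quartile(ses):
    """Return the trials whose SE falls in the lowest quartile of this group."""
    cutoff = statistics.quantiles(ses, n=4)[0]   # first-quartile boundary
    return [se for se in ses if se <= cutoff]

print("Homeopathy 'larger' trials (SE):  ", lowest_quartile(homeopathy_se))
print("Conventional 'larger' trials (SE):", lowest_quartile(conventional_se))
```

    Since SE shrinks roughly as 1/√n, the two quartile boundaries correspond to different minimum sample sizes, which is the difference Rutten and Stolper read as a post-hoc choice.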

    Now, I think we’re all aware by now that Dana hasn’t actually read the Shang study in any detail. But for this nonsense to get published in Homeopathy once again shows the standards of review in that “journal” to be very poor. This should be an embarrassment, but you can bet that this same point will be brought up again and again by the Danas of this world.

  56. David Gorski says:

    Now, I think we’re all aware by now that Dana hasn’t actually read the Shang study in any detail.

    Actually, I think he did and just didn’t understand it.

  57. wilsontown says:

    “Actually, I think he did and just didn’t understand it.”

    Fair enough, that could be the case. What he certainly hasn’t done is think critically about the homeopathic talking points that he keeps regurgitating. He hasn’t thought “I wonder if that’s right?” and gone back to the paper to check.

  58. Harriet Hall says:

    when one gets an infection, such as TB or syphilis, for instance, the genes that this person passes on ARE influenced by this infection

    Perhaps Dana is talking about epigenetic effects. Whatever he means, he certainly hasn’t shown us that homeopathy has anything to contribute to genetic or epigenetic influences on disease.

    Sure, homeopathy is popular, and in most cases it does no harm. I just wish we could accomplish two things:

    (1) Keep homeopaths from harming patients by discouraging effective medical treatment or offering useless homeopathic vaccines in lieu of real vaccines.

    (2) Inform patients that homeopathy is a belief system, not a scientific discipline.

    Homeopathy is to scientific medicine as astrology is to astronomy. If Dana said people liked reading their horoscopes and thought it helped them, I wouldn’t argue. If he tried to convince me that heavenly bodies had been scientifically shown to influence human personality and events, that would be another matter.

  59. Dr Aust says:

    Homeopathy amuses me, too, but I’m afraid I just can’t find it in me to argue with waterheads like D.Ullman any more. They are like the Energizer Bunny, or indeed like religious zealots, which is what they really are, when it comes right down to it. Do they refuse to argue on the evidence, continually blustering and shifting the goalposts? Do bears…. you get the idea. What’s that phrase about life being too short?

    Instead, I have decided that the future lies in relentless musical derision. So if you’ll forgive me for pimping my blog too, I give you… to the tune of that famous old sing-along “Supercalifragilisticexpialidocious” from Mary Poppins

    “Super calibrated shaking”.

  60. DanaUllman says:

    My favorite line above is Kimball Atwood’s “trials trump science fallacy.” Kimball seems to be one of the honest ones who acknowledges that no trials will change his views of homeopathy.

    I am still waiting for an adequate explanation for how and why the four trials by Reilly shouldn’t be considered “high quality” and why the polyarthritis study with 176 patients was conveniently ignored. Oh, I know why: it is the “trials trump science fallacy.” Some of you would prefer to define science as your own belief system and ignore anything that might disprove it…and yet, still maintain the chutzpah to assert that others are “unscientific.”

    The new re-analysis of Shang shows that Shang’s conclusions are totally dependent upon HIS definition of high quality (the details of which have never been made public), dependent upon his different definitions of “large” clinical trials for homeopathic and for conventional treatments, dependent upon whether external validity should be considered (Shang specifically has no interest in this important subject), and dependent upon whether one only wants to evaluate short-term results AND ignore any and all side effects (3 of the 8 conventional medical treatments that were deemed to have a “positive” outcome used drugs that have since been taken off the market…whooops!).

    Perhaps you all might consider the words and wisdom of Sir Michael Rawlins. Quoting from the Pharma Times:

    http://pharmatimes.com/Forums/forums/p/2532/2541.aspx#2541

    “The chairman of the UK’s National Institute for Health and Clinical Excellence (NICE) has suggested randomised controlled trials (RCTs) should no longer be seen as the be-all and end-all of clinical research.

    In a speech last night to the Royal College of Physicians, Professor Sir Michael Rawlins said such studies had been placed “on an undeserved pedestal”. He called for other types of research, including observational studies, to be given greater attention.

    Professor Rawlins presides over an organisation that has regularly indicated its discontent with clinical evidence supplied by drug manufacturers. For its part, industry has been vocal in its criticisms of NICE’s cost-effectiveness models. More recently, Professor Rawlins has sharply criticised industry pricing practices for new drugs.

    All the same, some may be surprised at his willingness to question the value of RCTs, generally seen as the most rigorous tests for a new medicine, and talk up the benefits of other types of study.”

    Rawlins’ complete presentation is at:
    http://www.rcplondon.ac.uk/pubs/contents/304df931-2ddc-4a54-894e-e0cdb03e84a5.pdf

    Wow…if scientists had to take observational studies seriously, they might actually have to take homeopathy seriously. Perhaps you will next say that “any evidence trumps science fallacy.” Maybe the real problem is how some people are choosing to define “science” as reductionism. Eeeks.

  61. wilsontown says:

    Dana, this is ridiculous. You keep spouting the same nonsense, even though you must know by now that you’re talking rubbish.

    “The new re-analysis of Shang shows that Shang’s conclusions are totally dependent upon HIS definition of high quality (the details of which have never been made public)”

    For the umpteenth time. The criteria for “higher quality” trials ARE CLEARLY STATED IN THE PAPER ON PAGE 728.

    “I am still waiting for an adequate explanation for how and why the four trials by Reilly shouldn’t be considered “high quality” and why the polyarthritis study with 176 patients was conveniently ignored”

    Having looked quickly at the 2000 Reilly paper in the BMJ, it seems that the methods of concealment of allocation are not clearly stated: this is one of the criteria for “higher quality” trials that were stated, again on page 728. The polyarthritis study was not included, because it could not have been included by the design of the meta-analysis. Of course, if you bothered to read the papers, you could find all these things out for yourself.

  62. DanaUllman says:

    Wilsontown, You (and others) seem to think that just saying something makes it true. Ahhh, if life were only that simple.

    I am not simply concerned about the criteria for high quality trials but what “score” the various trials got. Because you claim that there is transparency, what score did ALL of Reilly’s studies get?

    Further, I’m sure that the BMJ editors and Reilly would question Shang’s mis-assessment of the concealment of allocation (see page 472-473 in the 2000 paper). “The treatments were indistinguishable in packaging, taste, and smell…. The coded drug packages were sent to the pharmacy department of Glasgow Royal Infirmary where, to augment blinding, each one was recoded with a unique number according to the randomisation schedule and then delivered to the pharmacy department.” …and more info is provided.

    And thanx for acknowledging that the polyarthritis study was not included in the design of this study, despite the fact that it fit ALL of the high quality criteria of the study. Because NONE of the final high quality studies “matched” in any way, it is so convenient that it was left out because Shang couldn’t find a study that “matched” it (as though THAT mattered to Shang).

    According to the Lancet’s press release on the Shang study, Shang had previously alerted the Lancet that he was conducting this comparison and that he predicted that his results would show homeopathic treatment to be akin to a placebo. He was not the best or adequately objective individual to do this review, and the Rutten/Stolper article in the Oct, 2008, issue of Homeopathy shows the post-hoc analysis he made to make his data fit his conclusions. How convenient.

    Shang and team remind me of the Diebold electronic voting machines that can and will change people’s votes post-hoc. All in the name of “science” and “democracy” (how conveniently perverted)…

  63. David Gorski says:

    And Dana’s nonsense continues…

  64. wilsontown says:

    In fairness to Dana, I should acknowledge that I missed that part of the BMJ paper during my quick skim-read. It’s possible that the paper was incorrectly categorised as not being of higher quality.

    Dana, I’m not sure what your problem is with a supposed lack of transparency. We know which studies were considered to be higher quality, because that information is available here. We know what the criteria for higher quality trials were, because they were clearly stated in the paper. That’s why we can have this conversation.

    So, let’s suppose, for the sake of argument, that the Reilly BMJ paper should be categorised as higher quality. How important is that to the results? Not at all, because i) the number of subjects is tiny, so the trial was not large enough to give reliable results anyway, and ii) the meta-regression analysis of all 110 homeopathy trials still showed that the trials with lowest SE show no effect, whatever games you play with the subgroups.
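
    For readers unfamiliar with meta-regression, here is a toy version (simulated trials, not the actual 110) of the kind of analysis being described: regress each trial's log odds ratio on its standard error and read off the intercept, the predicted effect of an arbitrarily large trial:

```python
import random
import statistics

# Toy meta-regression: simulate trials whose apparent benefit comes entirely
# from small-study bias, then regress effect size on standard error.
# Purely illustrative; these are not data from Shang et al.

random.seed(1)
trials = []
for _ in range(110):
    se = random.uniform(0.05, 0.6)    # trial precision (smaller SE = larger trial)
    bias = -0.8 * se                  # smaller trials look "better" (lower log OR)
    log_or = random.gauss(bias, se)   # true underlying effect is zero
    trials.append((se, log_or))

xs = [se for se, _ in trials]
ys = [effect for _, effect in trials]

# Ordinary least squares by hand: log(OR) regressed on SE.
mx, my = statistics.fmean(xs), statistics.fmean(ys)
slope = sum((x - mx) * (y - my) for x, y in trials) / sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx

print(f"Slope (small-study asymmetry): {slope:+.2f}")
print(f"Intercept (effect as SE -> 0): {intercept:+.2f}  (log OR near 0 means OR near 1)")
```

    A real meta-regression weights trials by their precision, which this sketch skips, but the logic is the same: if the intercept sits at an odds ratio of about 1, the apparent benefit lives entirely in the small, imprecise trials.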

    What you think the omission of the polyarthritis study shows isn’t clear either. The trial was excluded on the basis of the clearly stated exclusion criteria. Including it would have destroyed the design of the study, so why you think it should have been included I don’t know. I’m not sure how I can put it clearer. How about INCLUDING THE STUDY WOULD HAVE BEEN A GROSS ERROR. Maybe the caps will help?

    You claim that “Shang had previously alerted the Lancet that he was conducting this comparison and that he predicted that his results would show homeopathic treatment to be akin to a placebo”. I don’t see a problem with this. Generally, you do have expectations of what you will find when you conduct research. These can be right or wrong, which is why you conduct research in the first place.

    You end by saying “Shang and team remind me of the Diebold electronic voting machines that can and will change people’s votes post-hoc”. But you have yet to demonstrate that Shang and colleagues did any post-hoc analysis. The Rutten and Stolper paper you have mentioned claims that the subset of “larger” trials was selected post-hoc, but they can only have come to that conclusion by failing to read the paper carefully, as I show here.

  65. Mojo says:

    Actually, isn’t it Rutten and Stolper who have carried out a post-hoc analysis, rather than Shang et al?

  66. Badly Shaved Monkey says:

    I claim my winnings for spotting the potential for Rawlins’ paper to be abused by the woo-ists.

    http://www.badscience.net/forum/viewtopic.php?f=3&t=6453&start=0&st=0&sk=t&sd=a

    I must thank DU for providing a link to the original text and I must say that the press reports were highly misleading, as is DU’s gloss on it.

    Dana, have you actually read Rawlins’ paper? It looks like you’ve only read the press coverage but linked to the original in order to appear credible.

    Rawlins has much to say about observational studies. None of it can rescue homeopathy from the trashcan, e.g.

    “I consider historical controlled trials should be accepted as evidence for effectiveness, provided they meet all of the following conditions:
    1 The treatment should have a biologically plausible basis. This is met by all the treatments shown in Tables 4 and 5.
    2 There should be no appropriate treatment that could be reasonably used as a control. The term ‘appropriate’ would exclude, for example, the use of bone marrow transplantation as an alternative to enzyme replacement therapy in the treatment of Gaucher’s disease.
    3 The condition should have an established and predictable natural history. I prefer this phraseology to ‘poor prognosis’. Conditions such as port wine stains may significantly impair patients’ quality of life without threatening life expectancy.
    4 The treatment should not be expected to have adverse effects that would compromise its potential benefits. This has to be a sine qua non.
    5 There should be a reasonable expectation that the magnitude of the benefits of the treatment will be large enough to make the interpretation of the benefits unambiguous. A ‘signal-to-noise’ ratio of 10 or more appears to be strongly suggestive of a genuine therapeutic effect.23,25 The magnitude of the ‘signal-to-noise’ ratio representing a ‘dramatic’ (ie 10-fold) response, however, is based on impression and is not (at present) supported by any substantive empirical evidence.”

    “Before-and-after designs, in conditions with a fluctuating natural history, are of little value”

    I strongly recommend reading the whole piece (and that includes you, Dana).

  67. Badly Shaved Monkey says:

    Has he gone?

    Well, in case he hasn’t…

    Dana, from your reading and deep understanding of Rawlins’ paper, please give one example of a study of homeopathy that would satisfy his conditions for taking an observational study seriously.

    I don’t know whether you noticed, but his proposals can be neatly encapsulated in a single question, to which I would very much appreciate a clear answer (you may recognise it);

    GIVE ONE INCONTROVERTIBLE EXAMPLE, WITH REFERENCES, OF A NON-SELF-LIMITING CONDITION BEING CURED BY HOMEOPATHIC TREATMENT.

    You have seemed singularly reluctant to try answering that question previously, but since you are so pleased with the opportunity that Rawlins’ proposals seem to offer, now would be an excellent time to remedy that situation (pun intended).

  68. Badly Shaved Monkey says:

    waiting

  69. Badly Shaved Monkey says:

    I really think he has gone.

    Given this Halloween season, I think we have stumbled across a perfect spell for repelling homeopaths. Here is that incantation again;

    GIVE ONE INCONTROVERTIBLE EXAMPLE, WITH REFERENCES, OF A NON-SELF-LIMITING CONDITION BEING CURED BY HOMEOPATHIC TREATMENT.

  70. HCN says:

    Dana is over here pushing studies that he has been told over and over and over again are not good proof for homeopathy:
    http://scienceblogs.com/denialism/2008/10/east_meets_west.php#comment-1182812

  71. HCN says:

    I guess the link to Dana Ullman on Dr. Lipson’s Denialism blog on the East vs. West in a hospital got caught in the spam filter… so I posted it twice. Sorry.

    Ullman is pushing the same studies that have been considered less than adequate proof of the efficacy of homeopathy (including the COPD study with the unbalanced groups, where the non-homeopathic group was sicker than the homeopathic group and both still got standard care). He was there just yesterday.

  72. David Gorski says:

    I suspect Dana will be back after my post scheduled for tomorrow morning. :-)

  73. Skip says:

    Wow, I’m really late to the party in this thread.

    I was just going to add that I used to do basophil histamine release assays for dose responses to allergens with the RBL-2H3 cell line. We quantitated release both with beta-hexosaminidase and by flow cytometry. I doubt my old PI would be up for it, but I think it would be fun to do a dose-response experiment. We take the cells, sensitize them with IgE, and then trigger them with anti-IgE diluted 1C to… whateverC and graph the results.
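
    If anyone actually set that up, the dilution bookkeeping might look like the sketch below; the stock concentration and well volume are invented for illustration and are not from any real protocol:

```python
# Sketch of a hypothetical centesimal (1:100) dilution series for a
# dose-response run, with the expected number of trigger molecules per well.
# Starting values are illustrative assumptions, not a real protocol.

AVOGADRO = 6.022e23
stock_molar = 1e-6       # assume a 1 uM anti-IgE working stock (illustrative)
well_litres = 200e-6     # assume 200 uL per well (illustrative)

print(" level    molarity        expected molecules per well")
for c in range(1, 16):
    molar = stock_molar * 100.0 ** -c
    molecules = molar * well_litres * AVOGADRO
    print(f"  {c:2d}C    {molar:10.2e} M    {molecules:12.3g}")
```

    Beyond about 7C at these assumed concentrations, the expected number of trigger molecules per well drops below one, so the higher “dilutions” on the curve would be chemically identical wells of buffer.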

    I propose the hypothesis that Dana wouldn’t like the results and would say that the experiment was invalid because we didn’t wear our Aluminum foil hats.

Comments are closed.