Articles

Homeopathy and Sepsis

It was once suggested in the comments section of the blog that homeopathy is useful in the treatment of diseases that are not self-limited: that homeopathy is effective therapy for diseases that do not get better on their own, and that it has a real effect on real diseases.

One example given was for the treatment of sepsis.

“Frass M, Linkesch M, Banyai S, et al. Adjunctive homeopathic treatment in patients with severe sepsis: a randomized, double-blind, placebo-controlled trial in an intensive care unit. Homeopathy 2005;94:75–80. At a University of Vienna hospital, 70 patients with severe sepsis were enrolled in a randomized double-blind, placebo-controlled clinical trial, measuring survival rates at 30 days and at 180 days. Those patients given a homeopathic medicine were prescribed it in the 200C potency only (in 12 hour intervals during their hospital stay). The survival rate at day 30 was 81.8% for homeopathic patients and 67.7% for those given a placebo. At day 180, 75.8% of homeopathic patients survived and only 50.0% of the placebo patients survived (p=0.043). One patient was saved for every four who were treated.”

I am, as I have mentioned before, but mention again for those who might be new to the blog, an Infectious Disease physician. My job is to diagnose and treat infectious diseases, and sepsis is at the top of the list of diseases I take care of. Sepsis butters my bread, and I consider myself knowledgeable about it.

And I had never heard of this study. A therapy that saves one in four patients would be an astounding intervention for an often fatal disease, and I have never come across it in all my reading. Maybe it's because there are so many articles and so little time. Maybe it hasn't been publicized. Or maybe it is a lousy article.

Sepsis is not a single disease, but a syndrome. Bacteria invade an organ, like the lung to cause pneumonia or the kidney to cause pyelonephritis. Then the bacteria enter the blood stream and cause a massive inflammatory response that results in most or all of the patient's organs shutting down, and the patient dies. Rapidly. The inflammatory response in sepsis is mind-numbingly complicated. The medical interventions used to treat sepsis are equally complicated and multifaceted.

Sepsis is as non-self-limited a disease as one could want. Untreated, almost everyone dies. Under the best current treatment, a minimum of 30% die, but depending on the underlying medical conditions and the infecting organisms, some causes of sepsis have almost 100% mortality rates. Any therapeutic intervention that decreases the mortality of this syndrome, even a little, would be welcomed.

So why had I never heard of this study, which decreased mortality by a whopping 25%? That is an impressive result, if true. If true.

Now I will admit my bias is that the underlying premise for homeopathy is ludicrous, so I would not expect any homeopathic nostrum to be effective. But who knows, maybe this is the study that will revolutionize all of medicine and prove that homeopathy alters the course of one of the most difficult to treat syndromes in medicine.

So let's go over the article and see if there is a there there.

70 patients admitted to the ICU at the University of Vienna with sepsis were included in the study.

The criteria for sepsis were reasonable:

“Patients with a known or suspected infection on the basis of clinical data at the time of screening and three or more signs of systemic inflammation (temperature ≤36 or ≥38 °C, respiratory rate ≥20/min, heart rate ≥90/min, leukocytes ≥12 G/L) and sepsis-induced dysfunction of at least two organ systems that lasted no longer than 48 h were included.”

The two groups were well matched by clinical criteria. Same number of underlying diseases, same distribution of infections and infecting organisms. So far, so good.

Then, in a double-blind, placebo-controlled trial, patients received, within 48 hours of admission to the ICU, either placebo or an individualized homeopathic nostrum.

As I gather from the study, one homeopathic practitioner, but perhaps more, evaluated the patient and determined which nostrum best suited the patient. It was then dispensed by another who did not know if they were dispensing placebo or homeopathic nostrum. All the nostrums were 200C. That's 1 part in 10^400, a 1 followed by 400 zeros.

The known universe is not large enough to dilute one atom to that degree.
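The arithmetic behind that claim is easy to check. A short sketch (the 10^80 figure for atoms in the observable universe is a standard order-of-magnitude estimate, not something from the paper):

```python
# A "C" potency is a 1:100 dilution; 200C means 200 serial 1:100 dilutions.
dilution_factor = 100 ** 200          # 1 part in 10^400

# Common order-of-magnitude estimates:
atoms_in_universe = 10 ** 80          # atoms in the observable universe
molecules_per_mole = 6.022e23         # Avogadro's number

# How many 1:100 dilution steps until, on average, less than one molecule
# of a starting mole of substance remains?
steps = 0
remaining = molecules_per_mole
while remaining >= 1:
    remaining /= 100
    steps += 1

print(dilution_factor == 10 ** 400)          # True
print(steps)                                 # 12: even a mole is gone by 12C
print(dilution_factor > atoms_in_universe)   # True
```

By roughly the twelfth 1:100 step nothing of the original substance is statistically left, which is why the remaining 188 dilutions of a 200C preparation dilute nothing but water.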

They were then given 5 globules of either placebo or the homeopathic nostrum twice a day until death or resolution of the sepsis.

What were the nostrums given? Here they are, each with the indication for which it was given.

Apis mellifica: Oedema, Extreme dyspnoea.

Arsenicum album: Weakness, exhaustion, Cardiovascular compromise, Anxiety, restlessness, Cachectic appearance.

Baptisia: ARDS, Sepsis, Hot skin.

Belladonna: High temperature with sweat, Red discolouration, face.

Bryonia: Pneumonia, esp. right lung, Stitching pain in chest.

Carbo vegetabilis: Respiratory insufficiency, ARDS.

Crotalus horridus: Purpura haemorrhagica, Haemorrhages.

Lachesis muta: Septic shock, Haemorrhage, High temperature, Embolism, Discolouration blue, purple.

Lycopodium clavatum: Fever, afternoon, Distension, abdominal.

Phosphorus: Pneumonia, esp. right lower lobe, Haemorrhage, Purpura haemorrhagica.

Pyrogenium: Septic fever, Offensive odour.

It is amazing, given the multitudinous signs and symptoms that you have to recognize and treat in sepsis, that the homeopath would choose offensive odor as the basis of therapy. Or hot skin. Or anxiety.

Yes, E. coli is in your blood stream, your lungs and kidneys are failing, your blood pressure cannot be maintained, you are on a respirator and dialysis, and the most important symptom upon which to base the life-saving therapy is offensive odor. It suggests a profound ignorance of what a severe infection really is.

There are several precepts underlying homeopathy. One, of course, is like cures like.

Lachesis muta is snake venom and, as best as I can tell, does not mimic septic shock. Septic shock in no way resembles death from snake venom except at the most superficial level. Carbo vegetabilis is vegetable charcoal. It, as best as I can tell, does not mimic ARDS. Baptisia is derived from the wild indigo plant and, as best I can tell, when ingested does not mimic ARDS.

Prior to this study, I can find no references that ANY of these nostrums were ever used for these symptoms before, probably because without modern medical therapy everyone died before they could be tested.

Also key to homeopathic nostrums is the concept of provings: the nostrums are tested on healthy people to ensure they cause the symptoms they allegedly treat. I cannot find that any of these remedies were tested with provings, and, given the severity of the symptoms of sepsis, a proving would be fatal.

As best I can tell from perusing the homeopathic writings, there is little if any homeopathic justification (within their therapeutic paradigm) for using these remedies under these circumstances. One wonders whether the IRB at the University of Vienna knew not only that it was an experiment to see if homeopathy was effective, but that each remedy was being tried for the first time for these indications.

The one feature common to all these nostrums is how they ignore the underlying problem and treat only the symptoms, evidently with a complete disregard of the underlying pathophysiology of sepsis. Sepsis is fundamentally due to infection in the blood. If you don't treat the infection, you do nothing. The basis of choosing a homeopathic nostrum in sepsis is worse than applying a band-aid to a multiple trauma victim.

All patients were receiving standard care in addition to homeopathic nostrums. But it suggests a certain intellectual bankruptcy that when someone is admitted with bacteria in their blood stream and all the organs shutting down, actively trying to die, that the practitioner focuses on stitching pain in the chest as the diagnostic sign of importance in choosing an intervention.

The study had two end points: 30 day mortality and 180 day mortality.

30 day mortality is a standard endpoint in sepsis studies.

In this study there was not a significant difference in 30-day survival between the placebo and treatment groups (verum 81.8%, placebo 67.7%, P = 0.19), although the percentages look markedly different. Given the small numbers of patients in each arm of the study, I would expect insignificant differences between the two groups to look greater than they are. A difference of one or two deaths would make a huge difference in the percentages. One more death in the placebo group and one less in the homeopathic group and both would be in the 70s. With a mere 32 patients in each group, there is an insufficient number of patients to find any real significant differences. Small numbers of patients are much more likely to show spurious, random results that are clinically insignificant even if statistically significant.
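The fragility of percentages at this sample size is easy to make concrete. A sketch using one hypothetical reconstruction of the counts (the paper reports only percentages; 27/33 and 21/31 reproduce the quoted figures):

```python
# Hypothetical reconstruction of the 30-day counts (the paper reports
# only percentages): 27/33 survivors (verum) and 21/31 (placebo)
# reproduce the quoted 81.8% and 67.7%.
pct = lambda alive, n: round(100 * alive / n, 1)

print(pct(27, 33), pct(21, 31))  # 81.8 67.7  (as reported)

# Now move a single outcome in each group: one fewer verum survivor,
# one more placebo survivor.
print(pct(26, 33), pct(22, 31))  # 78.8 71.0  (both now in the 70s)
```

Two individual outcomes, out of roughly 65 patients, turn an apparent 14-point gap into a 7-point one.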

Then the study becomes risible.

People die from sepsis. They die in the first day or so from the acute infection. They come in too sick to be saved. They die in the first week or so from progressive multi-organ system failure. They die in the first month from complications of medical care or underlying medical diseases. But if they make it to 30 days, they made it. If they die, it is usually not related to the sepsis.

Sepsis is, above all, a transient and completely reversible process. Once the bacteria are killed off and the immune system mops up the endotoxins, there is no disease left. The patient is cured; the disease eradicated. No further disease pathophysiology is present. It is not like other processes, for example coronary artery disease, where the underlying disease persists.

180 days. 6 months. To think that a few days of any therapy for sepsis is going to make a difference in mortality at six months is ludicrous. It would rely upon an understanding of causality and the pathophysiology of sepsis that eludes me. How a treatment of sepsis for a few days leads to a decrease in mortality months later beggars the imagination.

But that is the endpoint and they found a significant difference for mortality at 180 days.

“On day 180, survival was statistically significantly higher with verum homeopathy (75.8% vs 50.0%, P = 0.043). “

Most studies use 0.05 as the threshold for statistical significance. A p = 0.05 means the odds are 1 in 20, or 5%, of seeing a difference this large by chance alone if the treatment had no real effect. Statistically significant does not mean clinically significant, and it does not necessarily mean real. With small numbers in a trial, random variation can look significant. A change of one death in each group would have made the p value no longer significant.
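That one-death fragility can be illustrated with a two-sided Fisher exact test. A sketch: the counts 25/33 and 17/34 are hypothetical reconstructions from the reported 75.8% and 50.0%, and Fisher's test is used here only for illustration; the paper's own statistical method (likely a survival analysis) may give different numbers.

```python
from math import comb

def fisher_two_sided(a, b, c, d):
    """Two-sided Fisher exact test for the 2x2 table [[a, b], [c, d]]:
    sum the probabilities of all tables as or less likely than the one
    observed (the usual 'small p-values' method)."""
    n1, n2, k = a + b, c + d, a + c          # row totals, first-column total
    total = comb(n1 + n2, k)
    p = lambda x: comb(n1, x) * comb(n2, k - x) / total
    p_obs = p(a)
    lo, hi = max(0, k - n2), min(k, n1)
    return sum(p(x) for x in range(lo, hi + 1) if p(x) <= p_obs + 1e-12)

# Hypothetical reconstruction of the 180-day outcomes (from 75.8% and
# 50.0%): verum 25 alive / 8 dead, placebo 17 alive / 17 dead.
p_original = fisher_two_sided(25, 8, 17, 17)

# Shift a single death from the placebo group to the verum group:
p_shifted = fisher_two_sided(24, 9, 18, 16)

print(round(p_original, 3), round(p_shifted, 3))
```

Moving one death across the groups pushes the p value well above the 0.05 line: the "significant" result hangs on a single patient.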

And the p value is not the whole story. Why did they die? Not known. Why six months as an endpoint? Not known. Why did homeopathy lead to increased survival at six months, which, if really true, would be an important advance in sepsis? Not even hinted at in the discussion. Somehow, a few days of a 200C homeopathic nostrum exerted some powerful medical effect on their offensive odour, increasing their chance of survival six months later.

In comparison, the biggest and best study for sepsis was the activated protein C study in the NEJM in 2001.

“A total of 1690 randomized patients were treated (840 in the placebo group and 850 in the drotrecogin alfa activated group). The mortality rate was 30.8 percent in the placebo group and 24.7 percent in the drotrecogin alfa activated group. On the basis of the prospectively defined primary analysis, treatment with drotrecogin alfa activated was associated with a reduction in the relative risk of death of 19.4 percent (95 percent confidence interval, 6.6 to 30.5) and an absolute reduction in the risk of death of 6.1 percent (P=0.005).”
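The quoted figures can be turned into the standard effect-size measures, and the same arithmetic shows where the homeopathy paper's "one patient saved for every four treated" claim comes from. A sketch using only the published percentages (the small discrepancy with the NEJM's 19.4% relative risk reduction reflects their prospectively adjusted analysis):

```python
def effect_sizes(mortality_control, mortality_treated):
    arr = mortality_control - mortality_treated   # absolute risk reduction
    rrr = arr / mortality_control                 # relative risk reduction
    nnt = 1 / arr                                 # number needed to treat
    return arr, rrr, nnt

# Activated protein C (PROWESS, NEJM 2001), 28-day mortality:
arr, rrr, nnt = effect_sizes(0.308, 0.247)
print(round(arr, 3), round(rrr, 3), round(nnt, 1))  # 0.061 0.198 16.4
# (raw RRR ~19.8%; the paper's adjusted figure is 19.4%)

# Homeopathy trial, 180-day mortality (1 minus the survival fractions):
arr, rrr, nnt = effect_sizes(1 - 0.500, 1 - 0.758)
print(round(arr, 3), round(nnt, 1))                 # 0.258 3.9
```

A 1690-patient trial of an expensive biological earned a number needed to treat of about 16; the 70-patient homeopathy trial claims an NNT of about 4, hence "one patient saved for every four treated."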

This huge, multicenter study altered how patients with severe sepsis are treated.

Note the smaller difference in effect and the much smaller (more significant) p value.

And they included the mortality curves over time:

The discussion is of interest in that only two paragraphs are devoted to the study itself, mostly to comment that there are not enough homeopaths to provide the needed care for septic patients.

“Our data suggest that adjunctive homeopathic treatment may be beneficial for the survival of critically ill patients. Short-time survival showed a non-statistically significant trend in favour of homeopathy; however, this may be due to the relatively small sample size. The lack of adverse effects is an important advantage of homeopathic treatment. As a further advantage, there is no interference with traditional treatment. Dosing via the oral route is easy and possible also in intubated patients orally and patients with oral or nasal feeding tubes. Furthermore, homeopathic medicines are low cost. One constraint is the small number of trained homeopathic doctors available in this setting.

Confounding factors include that placebo patients were more seriously affected in terms of heart rate and leukocyte count. However, there was no significant difference in the means of these variables. All patients received antibiotic therapy. “

Whether it was appropriate antibiotic therapy we are not told. But why, why, why did it work? No thoughts from the authors.

The rest of the discussion is a superficial tutorial on the various real ways to treat sepsis. I am guessing the average reader of Homeopathy has no understanding of sepsis, and probably no real experience treating the disease.

It's almost as if they knew that their result was bogus.

Does homeopathy increase survival in sepsis? Only if you think that a sip of water today will decrease your chance of dying in 6 months.

This is not a lousy study; it is a joke.

===

REFERENCES:

1. Frass M, Linkesch M, Banyai S, Resch G, Dielacher C, Lobl T, Endler C, Haidvogl M, Muchitsch I, & Schuster E (2005). Adjunctive homeopathic treatment in patients with severe sepsis: a randomized, double-blind, placebo-controlled trial in an intensive care unit. Homeopathy, 94 (2), 75-80. DOI: 10.1016/j.homp.2005.01.002

2. Gordon R. Bernard, Jean-Louis Vincent, Pierre-Francois Laterre, Steven P. LaRosa, Jean-Francois Dhainaut, Angel Lopez-Rodriguez, Jay S. Steingrub, Gary E. Garber, Jeffrey D. Helterbrand, E. Wesley Ely, & Charles J. Fisher (2001). Efficacy and Safety of Recombinant Human Activated Protein C for Severe Sepsis NEJM, 344 (10), 699-709

Posted in: Clinical Trials, Homeopathy


45 thoughts on “Homeopathy and Sepsis”

  1. BigHeathenMike says:

    Devil, meet details. Play nicely, please.

  2. Citizen Deux says:

    Set phasers to FISK…

  3. relativitydrive says:

    I love the way Homeopaths bend everything in both ways at the same time and then don’t even notice that the metal fatigue it induces has broken all their arguments. Let them treat themselves with this stuff and watch natural selection act…I can only hope!

  4. Scott says:

    I kind of have to wonder whether their methodology was to look at different endpoints until they found one for which they could claim statistical significance. If they just kept going out further until they got on the right side of the 5% chance of spurious significance, then 180 days makes a lot more sense…

  5. Michael Hutzler says:

    Even if they had picked 180 days prior to the study, with two endpoints, the chance that the findings are due to chance is greater than 5%. If there was a 5% chance that either result was due to chance, there is a 9.75% chance that one result or the other would be due to chance. The critical p-value should have been corrected, but it wasn't. This study does not meet statistical significance at p < 0.05, even if one result did.

  6. qetzal says:

    Those numbers don’t quite add up. I can’t access the full text, but the abstract says:

    “Seventy patients with severe sepsis received homeopathic treatment (n = 35) or placebo (n = 35)…Three patients (2 homeopathy, 1 placebo) were excluded from the analyses because of incomplete data…On day 30, there was non-statistically significantly trend of survival in favour of homeopathy (verum 81.8%, placebo 67.7%, P= 0.19). On day 180, survival was statistically significantly higher with verum homeopathy (75.8% vs 50.0%, P = 0.043).”

    So the evaluated numbers should be 33 “active” vs. 34 control. That implies 27/33 in the active group survived to day 30, which is indeed 81.8% (rounded to 3 figures). But you can’t get 67.7% survival with 34 control patients. The closest you can get is 67.6% (23/34). You CAN get 67.7% if survival was 21/31, e.g. by excluding 3 more patients from the control group.

    Was placebo survival 23/34 at 30 days, and the quoted 67.7% simply a minor error? Perhaps, but it’s still a bit odd.

    It’s also odd that from the percentages, it seems only 2 additional active group patients died between days 30 & 180, compared to 6 for the placebo group. Did the authors discuss causes for these additional 2 & 6 deaths, respectively?

  7. Karl Withakay says:

    “The one feature about all these nostrums is how they ignore the underlying problem and only treat the symptoms, evidently with a complete disregard of the underlying pathophysiology of sepsis. ”

    Indeed. For all the (BS) posturing in the CAM world that evil “reductionist allopathic western” medicine treats the symptoms and not the disease or person, it is homeopathy that is exclusively concerned with the symptoms and not the underlying cause.

    In homeopathy, diseases that produce the same or similar symptoms are treated with the same remedies.

    Outside of homeopathy, the idea that diseases with the same or similar symptoms must have similar cures is a bit absurd. You’d never need to do blood work, as you’d base treatment exclusively on the set of symptoms presented. When scientific medicine seeks to cure (sometimes it can only focus on relieving symptoms), it tries to find and treat the underlying cause of the symptoms.

  8. Wholly Father says:

    Shame on you for accusing homeopathy of treating only symptoms. Does “Allopathic Medicine” have a treatment specific for pneumonia of the right lower lobe (see Bryonia and phosphorus 200C)? Now that’s disease-specific therapy.

  9. bcorden says:

    The lead author on this study is M. Frass. This study was published in the same year as another infamous study by the same author(s), except it was in the mainline journal Chest rather than Homeopathy:

    Influence of potassium dichromate on tracheal secretions in critically ill patients.
    Chest. 2005 Mar;127(3):936-41.
    Frass M, Dielacher C, Linkesch M, Endler C, Muchitsch I, Schuster E, Kaye A.

    The results in this trial are equally astounding. Using a homeopathic preparation of potassium dichromate C30 (30 serial dilutions of 1:100, i.e. a dilution of 10^60) to decrease tracheal secretions of intubated ICU patients, they significantly decreased time to extubation in the treated group from 6.12 days to 2.88 days and length of stay (in the ICU?) from 7.68 days to 4.20 days.

    A C30 dilution of potassium dichromate is the equivalent of one molecule in a sphere with a radius close to that of the orbit of Jupiter.

    Papers in peer reviewed journals have consequences. Our P and T committee just approved the use of homeopathic potassium dichromate in intubated patients because of this one study. Neither the sepsis study reviewed here recently by Dr. Crislip nor this study has ever been repeated. Certainly one would have expected hospital administrators to be all over these “treatments” since they have such allegedly profound effects on mortality and, from an administrator’s standpoint, length of stay.

    This is to say nothing of the ethical and legal aspects of treating critically ill patients with woo. Do homeopathic “doctors” carry malpractice insurance?

  10. Mojo says:

    For the Chest paper, see on Orac’s Blog.

  11. Mojo says:

    I’ll try that again without messing up the formatting (I hope):

    For the Chest paper, see Homeopathy in the–cringe–ICU on Orac’s Blog.

  12. Khym Chanur says:

    Wait, wait, wait. They used a placebo-controlled study for a disease that’s fatal if not treated? Am I misunderstanding things here? Maybe the homeopathy-or-placebo was given in addition to the standard treatments?

  13. The Blind Watchmaker says:

    @Khym Chanur

    Yes, standard treatment was given to both groups.

    This study was done several years ago. Its small size suggests that it is a pilot study. Pilot studies are really only good for suggesting an effect so that larger, more definitive studies can be justified.

    If their results were really impressive, where is the larger, multi-center trial? Was there one? Was one done and discarded by the homeopathy crowd due to negative results?

    Even if there were a large study with negative results, it would likely be spun to sound positive (just like the recent acupuncture study in the Archives this month).

    http://www.ncbi.nlm.nih.gov/pubmed/19433697?ordinalpos=1&itool=EntrezSystem2.PEntrez.Pubmed.Pubmed_ResultsPanel.Pubmed_DefaultReportPanel.Pubmed_RVDocSum

  14. grendel says:

    Oh dear gracious flying spaghetti monster, have mercy.

    They were ALL given antibiotics AND either a placebo or homeopathic potion?

    What the hell kind of clinical trial is that?

    Was the antibiotic the same? Was the cause of sepsis the same? Were the batches of antibiotic identical – oh, the list is endless.

  15. Arnold T Pants says:

    Perhaps if water can retain the memory of duck liver, your corporeal humors can also retain the memory of water that retains the memory of duck liver for 180 days.

  16. grendel says:

    Arnold, you might be on to something there, but would you need to ‘succuss’ yourself in all three dimensions for it to work?

  17. Mojo says:

    “This study was done several years ago. It’s small size suggests that it is a pilot study. Pilot studies are really only good for suggesting an effect so that larger, more definitive studies can be justified.

    If their results were really impressive, where is the larger, multi-centered trial. Was there one?”

    In homoeopathy, pilot studies are often announced with press releases. For a recent example see Hill et al., Pilot study of the effect of individualised homeopathy on the pruritus associated with atopic dermatitis in dogs. Vet Rec. 2009. 164(12):364-70.

    You can find a discussion of the paper on Northern Doctor’s blog: Homeopathic clairvoyant dogs. Er, barking.

    Note that this study was thought to merit a press release from the British Homeopathic Association, and also note the quotation from the British Veterinary Voodoo Society. ;)

    If you have access, the paper, and the responses to it at Vet Rec. 2009 164: 634-636 are a hoot.

  18. SD says:

    Oh – my – GOD – are – you – fucking – kidding – me?

    Is this *really* the best you can do? This reads like a pouting cheerleader’s MySpace blog post about how the quarterback is such a jerk for having dumped her, like, the week before prom, pshyeah!

    Good Lord. Here is a synopsis of this post, distilled to its basic essence:

    1. Leadup
    2. Resume (I know infectious disease – whoa)
    3. Description of sepsis (it is teh 3vul)
    4. Admission of bias (no shit, Sherlock)
    5. Description of study and admission that study design appears valid
    6. Flaming of how stupid homeopathy is
    7. List of nuts and twigs
    8. More flaming about how stupid homeopathy is
    9. MORE flaming about how stupid homeopathy is
    10. Petulant observation that standard treatment was being applied TOO
    11. Pointless handwaving about statistics (yes, Virginia, a p of 0.19 *already* means ‘no significant difference’, no need to restate the obvious)
    12. More description of sepsis
    13. Misleading handwaving about statistics (“well, *these* statistics might look good, but they really don’t mean anything”, O RLY? Since when?)
    14. A quick slide around an unexpected conclusion of better survival at 6 months in the treated group (“well, we don’t do that analysis, it can’t be important”, well, why the hell not?)
    15. Flaming about how mechanism is not elucidated in this study (O RLY? how many studies of efficacy *do* elucidate mechanisms?)
    16. More flaming about how stupid homeopathy is
    17. More flaming, this time about how stupid homeopaths are
    18. A firm statement that the study is a joke

    Oooooooookay. In exactly none of the above do I see any indication of why precisely this study is flawed. It *appears* from your description to be soundly designed – you admit that you observed no flaw in its design or execution – and it appears also to have come across an unexpected but pleasantly significant difference between treatment and placebo whose source cannot be identified. Granted, homeopathy is kind of goofy, so there’s either something wrong with the study (not demonstrated) or something right with the treatment (implausible). You don’t want to believe there’s something right with the treatment, well, okay; but then, you have to demonstrate concretely what’s wrong with the study, and I’m not seeing anything close to that here. The bit about how the statistics aren’t important in this case, well, that’s just plain bullshit. Either the stats matter, or they don’t. Either they describe a measurable truth, or they don’t. You don’t get to pick which option is Current Truth based on whether the conclusion suits you or not – it’s all or nothing, no matter *how* goofy the null hypothesis is.

    I mean, dude. DUDE. DU-HU-HU-HU-HU-HUDE. Come. *ON*. I’m *all* about the gratuitous abuse, don’t get me wrong, but there has to be *something* underlying it, some kind of skeletal framework to lovingly frost with toxic opprobrium. Give us *something*, man. You can do better than *this*.

    “tsk tsk”
    -SD

  19. apgaylard says:

    SD.
    You just seem to have ignored the analysis provided in the post that doesn’t seem to sit well with you. Yes, the basic design of the trial may have been OK, but the post clearly showed that it had other problems.

    First, the trial was too small to provide much evidence anyway. As others have pointed out, this was, at best, a preliminary trial. A larger well-designed trial would be needed to start to make any meaningful claims. (in fact given the biases present in such trials, publication bias, and the false positive rate guaranteed by choosing p<0.05 you would really need the distribution of results from many large well-designed trials to be heavily skewed towards positive results to be able to start to make a credible case.)

    Next, they had two end points and didn’t correct for multiple inferences (see comment). Accounting for this renders the result for the second end-point statistically insignificant. So, the trial provides no good evidence against the null hypothesis that the homeopathy treatment gave the same result as the placebo.

    The post explained why the second (‘significant’) end-point was meaningless, based on what is known about the natural history of sepsis. Where is your counter argument?

    Getting upset about the parts of the post that point out the plausibility problem just doesn’t cut it. Why should anyone take a single, small, negative trial with a dodgy end-point seriously?

  20. David Gorski says:

    Kid SD’s just trolling–as usual. Mark’s analysis of this dubious trial was excellent, just the right mix of snark, science, and statistics. As for the clinical trial design, that may have been OK, but it was the analysis and interpretation of the data and results that were messed up, and Mark explained why.

    The bottom line is that this trial, despite what the authors tried to represent it as, was not any evidence in favor of the use of homeopathy. In that it was very much like the acupuncture study that Steve Novella recently wrote about. Its results do not show what the investigators claim that they show.

  21. DevoutCatalyst says:

    Can’t remember. Is it starve a troll, feed a fever?

  22. weing says:

    No, you starve the fever.

  23. SD says:

    Govorit Cde. Gorski:

    “Mark’s analysis of this dubious trial was excellent, just the right mix of snark, science, and statistics.”

    Of course you would think so, since this “mix” consists of pure snark.

    ‘As for the clinical trial design, that may have been OK, but it was the analysis and interpretation of the data and results that were messed up, and Mark explained why.”

    Comrade Gorski, even you cannot claim with a straight face that “interpreting away” a result is an honest practice.

    “sheesh”
    -SD

  24. SD says:

    apgaylard:

    “You just seem to have ignored the analysis provided in the post that doesn’t seem to sit well with you.”

    No, I didn’t “just ignore” it; it wasn’t there. That the author “just can’t believe” something, or cannot find a rational basis for it, means absolutely squat. Either we believe the numbers, or we don’t. If we don’t believe the numbers, then there’d better be something important wrong with them that he can dig out that isn’t based on his personal disbelief. If we *do* believe the numbers, then they’d better have been slipping penicillin into that distilled water, because otherwise an awful lot of chemistry is going to have to be rethought. What this means is that somebody gets to do some work figuring out the source of this unexpected result. That, my good droogies, is the epitome of *science*. If they were all easy, you wouldn’t have a job, and if you did, it wouldn’t be much fun.

    Either way, a personal problem rooted in the observation that there’s “just no possible way” for this study to be correct or relevant is not disproof, and that’s what the post was: one long string of personal problems. Funny, yeah; don’t get me wrong. Valid, no.

    “Yes, the basic design of the trial may have been OK, but the post clearly showed that it had other problems.

    First, the trial was too small to provide much evidence anyway. As others have pointed out, this was, at best, a preliminary trial. A larger well-designed trial would be needed to start to make any meaningful claims. (in fact given the biases present in such trials, publication bias, and the false positive rate guaranteed by choosing p<0.05 you would really need the distribution of results from many large well-designed trials to be heavily skewed towards positive results to be able to start to make a credible case.)”

    You are claiming that the study is insufficiently powered. Well, I hate to break the news to you, but surprising statistical result: under an assumption of homogeneity, n does not have to be very big *at all* for a study to possess sufficient power to draw a conclusion about a population. (I don’t necessarily buy this assumption of homogeneity in clinical studies – there’s too much genetic and metabolic variation in the human population for me to believe that clinical trials are prima facie valid in analyzing the efficacy of medical treatments – but I am in Rome at the moment, and when in Rome, one abides by the customs to a certain extent.) I just got done listening to Comrade Gorski jerk an n=160 trial from between his well-traveled buttcheeks in support of a position, so don’t tell me that an n=60 trial doesn’t convey sufficiently useful information. I recall being mystified by this, too – “how the hell can a sample of eight from a population of 100,000 provide sufficient power for determining distributions?” – but the math works out. Yay, nonparametric statistics. (Those numbers are off-the-cuff, but close; the necessary sample size for sufficient power is *REALLY* counterintuitively small.)

    “Next, they had two end points and didn’t correct for multiple inferences (see comment). Accounting for this renders the result for the second end-point statistically insignificant. So, the trial provides no good evidence against the null hypothesis that the homeopathy treatment gave the same result as the placebo.”

    Declaring the result for the second end-point statistically insignificant doesn’t magically make it so. Multiple inferences are not being made from the same set of data; multiple sets of data have been collected (treated 30-day survival, untreated 30-day survival, treated 180-day survival, untreated 180-day survival) and the relationships between them are being analyzed. This is all well and good. I see no flaws here: evaluating two different survival periods with two sets of data collected from the same study does not reach the level of statistical sin, nor even of statistical naughtiness. If you believe that it does, will you then be waving the black flag the next time a study comes out in JAMA that evaluates two endpoints at the same time?

    Some problems with that comment: that method of dealing with p-values is, um, fractured, and the two endpoints are not independent. You can’t survive to 180 days and *not* survive 30 days. (This assumption is built into the survival function, that being alive at time t is a prerequisite for being alive at time t+1.) You only get to do that “multiply-the-probabilities” trick *iff* the two events are independent.
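    For reference, the adjustment the quoted comment is invoking can be sketched in a few lines. The 180-day p-value (0.043) is from the paper; the 30-day value (~0.29) is approximate. With k endpoints tested at an overall alpha of 0.05, each raw p-value must clear alpha / k:

```python
# Bonferroni adjustment for two endpoints tested at an overall alpha of 0.05.
# The 180-day p-value (0.043) is from the paper; the 30-day value (~0.29)
# is approximate.

alpha = 0.05
p_values = {"30-day survival": 0.29, "180-day survival": 0.043}
k = len(p_values)
threshold = alpha / k  # 0.025 for two endpoints

for endpoint, p in p_values.items():
    verdict = "significant" if p < threshold else "not significant"
    print(f"{endpoint}: p = {p} -> {verdict} at adjusted alpha {threshold}")
```

    Because the two endpoints are nested rather than independent, Bonferroni is conservative here; the point is only that 0.043 does not survive even this standard adjustment.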

    One potential confounding factor that I could see – whether or not they unblinded the trial at 30 days. That would cast those 180-day results in doubt, since the treated and untreated groups might then both partake of “cebo” effects.

    “The post explained why the second (‘significant’) end-point was meaningless, based on what is known about the natural history of sepsis. Where is your counter argument?”

    My counter-argument is that this assertion does nothing to explain the difference, a difference which, apparently, *is* there. Since the study is apparently set up correctly – at least to the extent that there is no visible flaw, other than the flaw that you ‘just can’t believe’ (personal problem!) the conclusion since it ‘has no believable basis in medical fact’ (personal problem!) – then the conclusion it produced is at least worthy of a larger study, assuming that you cannot find the flaw. That you don’t ordinarily evaluate sepsis patients for survival at six months in studies is another one of them “personal problems”; why the hell not? I know that if *I* had sepsis, *I’d* want to know what my odds of snuffing it at six months were, treated or not. Maybe there’s some kind of long-term damage that takes place with sepsis; maybe some type of inflammation. Who knows. Maybe the extra water from the homeopathic treatment made the critical difference in hydration or something. The purpose of statistics is to identify significant deviations from chance or natural process, to identify areas where Truth may be found if one digs hard enough.

    “Getting upset about the parts of the post that point out the plausibility problem just doesn’t cut it. Why should anyone take a single, small, negative trial with a dodgy end-point seriously?”

    Because I suspect that it would be taken seriously if it were either (a) evaluating a science-based treatment or (b) were *disproving* a CAM treatment?

    Somehow I don’t think that a result of “no significant difference” would make it to a blog post, whether or not the study was done correctly, except as a convenient bludgeon to smack a homeo-wacko with.

    Find the *real* flaw. This flaw is not faith-based, i.e. it has nothing to do with whether you can believe that it’s true or not.

    “just say no to faith-based mathematics”
    -SD

  25. criticalist says:

    Actually, SD has a valid point, and whilst I agree with Mark’s assessment of the paper, I think he’s mistaken about the statistical analysis.
    Here’s why. As background, I’m an intensive care physician, and have seen this paper before. When I first saw it, I thought pretty much as Mark did – “non significant at 30 days, but significant at 180 days – no way!”. It did get me wondering a bit though, as to whether that is in fact valid criticism. Then, the following paper was published: Intensive versus Conventional Glucose Control in Critically Ill Patients, New England Journal of Medicine, Mar 26 2009.

    This was a large study of ICU patients comparing tight control of blood sugar levels with loose control. The study was only carried out while the patients were in the ICU. At 28 days there was no difference between the groups, but at 90 days there was a statistically significant increase in mortality in the tight control group. The average length of stay in the ICU was only 6 days.

    So this is analogous to our Homeopathy study. We see no difference in outcome in the short term, but a statistically significant difference does appear “down the line”, well after the patient has left the ICU. This study has prompted much debate in the ICU literature, wondering exactly how this is possible. The consensus seems to be that it is a genuine effect. The idea is that if you imagine the Kaplan-Meier curves are superimposed on each other, and at day 1 there is a tiny improvement, then that patient’s curve is “tilted” just a little upwards. However, the effects of the “tilt” cannot be discerned until later as the curves diverge.
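    The “tilt” can be illustrated with two exponential survival curves whose hazards differ only slightly; the numbers below are illustrative, not fitted to either trial:

```python
import math

# S(t) = exp(-hazard * t): a small constant hazard difference is nearly
# invisible at 28 days but produces a visible gap by 180 days.
hazard_a, hazard_b = 0.0020, 0.0030  # deaths per patient per day (illustrative)

def survival(hazard, t_days):
    return math.exp(-hazard * t_days)

for t in (28, 90, 180):
    s_a, s_b = survival(hazard_a, t), survival(hazard_b, t)
    print(f"day {t:3d}: {s_a:.3f} vs {s_b:.3f}  (gap {s_a - s_b:.3f})")
```

    With these numbers the absolute gap roughly quadruples between day 28 and day 180, which is the sense in which curves that start out superimposed diverge later.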

    So, if the statistical argument is not valid, what of the homeopathy paper? Is it really a new world of sepsis treatment? Well, no. I think the real problems are methodological.

    One of the major determinants of outcome from sepsis is the site of infection. Sepsis from a urinary source, say, has a much better outcome than sepsis from an abdominal source. This heterogeneity of outcome is one of the reasons sepsis trials need to be large. In this paper, when we look at the “Reason for Admission” we find “Respiratory insufficiency, Sepsis or other” as the only criteria. This is completely inadequate. “Respiratory insufficiency” does not even necessarily mean pulmonary sepsis, as most septic patients have respiratory failure. “Sepsis” is meaningless as a reason for admission, as all the patients are septic by definition. So we have no data on the site of sepsis, and thus no way of knowing if the groups are matched at baseline. Without that information, we cannot assess the study.

  26. Prometheus says:

    Statistics Alert!

    The authors of this study failed to correct for continuity (see: Yates correction), which is needed when the numbers are so low. The corrected p-value for the 30-day point is 0.292 (non-significant to a large degree).

    The corrected p-value for the 180-day point is 0.054 (so close, yet so far!).

    Not surprisingly, when there are such small numbers involved, even large percentage differences may not be statistically significant.

    Even without the correction, the Fisher Exact test’s p-value for the 180-day point is only 0.043, which is on the quivering edge of significance. Add to that the implausibility of homeopathy and the small numbers, and it adds up to non-significance.

    Mistakes in statistical analysis are sadly common in medical and scientific articles.

    Prometheus

  27. pmoran says:

    Many of the reasons why clinical studies supply spurious “positive” results (to the rather arbitrary standards commonly accepted within EBM) will not be evident from the published material. Examples are fraud, and chance.

    So I definitely reserve the right not to believe results that go strongly against all other evidence, whether there are obvious flaws in protocol and analysis or not. In my opinion it is a bad mistake to be yielding to the implication that the skeptic must either find error, or accept the findings as valid. This seems to be what SD wants.

    The clinical studies are merely one more bit of evidence to put on one side of the scales. With matters like homeopathy they have trivial weight compared to everything else.

    Far more compelling would be fundamental research validating even one of the many extremely unlikely principles that need to be true before it can work.

  28. pmoran says:

    I mean, “— before it can work other than as placebo”. Forgetting my own precepts.

  29. SD says:

    [Analysis from Prometheus]

    See? That’s what we need here.

    Okay, so for Fisher’s exact test, I get the following:

                 Survived   Died
    Treated         25        8   (33)
    Untreated       17       17   (34)
                   (42)     (25)  (67)

    Confirm p=0.04319 for Fisher’s exact test using R.

    For the chi-squared test, confirm that the p-value is 0.054 (by hand) *for the two-tailed test*. But I don’t *think* that’s the one you want, because that test is to determine whether or not the probability of a randomly-selected member of either group (treated or untreated) lies within the “survivor” group; the upper-tailed test is the case for which the null hypothesis is that the probability that a treated patient is a survivor is less than or equal to the probability that the *untreated* patient is a survivor. For that test, the null hypothesis is rejected at p=0.027 with the corrected test statistic. Here’s where we start getting into the argument about what’s appropriate, and I’ll say right out that man, I don’t know. After a certain amount of wandering in the forest of statistics, any answer sounds plausible. >;->
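    Assuming SciPy’s implementations agree with R and the hand calculations above (a reasonable but unverified assumption), the table can be run through all three tests at once:

```python
from scipy.stats import chi2_contingency, fisher_exact

# 180-day outcome table from the comment above:
# rows = treated/untreated, columns = survived/died.
table = [[25, 8], [17, 17]]

_, p_fisher = fisher_exact(table, alternative="two-sided")
_, p_plain, _, _ = chi2_contingency(table, correction=False)
_, p_yates, _, _ = chi2_contingency(table, correction=True)

print(f"Fisher exact, two-sided:      p = {p_fisher:.4f}")  # ~0.043
print(f"Chi-squared, uncorrected:     p = {p_plain:.4f}")   # ~0.029
print(f"Chi-squared, Yates-corrected: p = {p_yates:.4f}")   # ~0.054
```

    The Yates-corrected value landing just above 0.05 while the uncorrected and Fisher values sit just under it is exactly the “quivering edge” being argued about.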

    “Even without the correction, the Fisher Exact test’s p-value for the 180-day point is only 0.043, which is on the quivering edge of significance.”

    Continuity correction for Fisher’s is a little trickier; it either adds or subtracts one-half depending on the test you’re using (upper/lower-tailed).

    Trouble with the continuity correction: it’s only used with the large sample approximation. The reason you use the large sample approximation: because it prevents you from needing to chew through a whole crapload of factorials if you’re doing it by hand (go ahead, ask me how I know that this sucks). Computer stats packages shouldn’t have this problem.

    “Add to that the implausibility of homeopathy and the small numbers, it adds up to non-significance.”

    Ah-ah-ah. It’s either significant or it isn’t. No cheesedicking and no moving the goalposts. Find the real error instead.

    “Mistakes in statistical analysis are sadly common in medical and scientific articles.”

    Well, you know, there *are* three types of lies…

    “and may God have mercy on your soul”
    -SD

  30. SD says:

    pmoran:

    “Many of the reasons why clinical studies supply spurious “positive” results (to the rather arbitrary standards commonly accepted within EBM) will not be evident from the published material. Examples are fraud, and chance.”

    Okay. Either statistics is a valid method for discovering truth, or it isn’t. If it’s not, then *all* your SBM studies go up in smoke, and the only valid treatments are those based solely on fully-elucidated biochemical mechanisms, because all of *your* studies are subject to fraud and chance too. If it is, then you deal with the occasional study that pops up with something weird the same way you would any *other* valid study, which is to say “Huh! That’s kind of interesting!” and then busy-beaver your way through the science until you figure out what’s really going on. That answer can be “a bogus study”, or it can be “a properly-done study that nonetheless produced spurious information by accident” (hey, it happens), or it can be “an unknown and unexpected new way the world works”. What you *don’t* get to do is just blow it off. When you do that, your opposition *quite rightly* accuses you of moving the goalposts – which you are, in this case, because the study didn’t return the ‘right’ answer – and then succeeds in crucifying you before those of your colleagues who have even one iota’s worth of scientific integrity.

    “So I definitely reserve the right not to believe results that go strongly against all other evidence, whether there are obvious flaws in protocol and analysis or not.”

    Feel free to not believe it – free-ish country, Comrade Gorski’s fevered dreams notwithstanding – but expect that this attitude will be turned back on you in the future.

    “In my opinion it is a bad mistake to be yielding to the implication that the skeptic must either find error, or accept the findings as valid. This seems to be what SD wants.”

    What I want is some goddamned honesty, frankly. No, it is not okay to have one rule for you and another for your opponents. That’s not science. That’s the opposite of science. When you engage in this form of slop-think, expect to be called on it. The laws of statistics do not differ depending on whose ox is going to be gored. Man up and take the punch if it’s a good study; it’s good for you, keeps you humble, and encourages a reputation for integrity.

    “The clinical studies are merely one more bit of evidence to put on one side of the scales. With matters like homeopathy they have trivial weight compared to everything else.”

    No, they have exactly the same weight compared to everything else. What the hell is wrong with you people? If a study demonstrates a significantly better outcome for patients when the doctor speaks like Donald Duck, treats the patient while standing on one foot, and wears a top hat, then don’t medical ethics demand that you start doing these things *to provide the best possible treatment for your patients*? Or is that just a load of crap you shovel out whenever it’s most convenient for your cause?

    “Far more compelling would be fundamental research validating even one of the many extremely unlikely principles that need to be true before it can work.”

    If we knew how it worked, then it wouldn’t be called ‘research’, now would it?

    Again: either statistics is a valid means for finding places to dig, or it isn’t. If the study is flawed, then find out what’s wrong with it. Small N ain’t it, or at least, cannot be used to justify an attack on this study without casting a huge number of your own in doubt. At the very least, if you demand larger N, a second study is justified with these results. Unbelievability of mechanism ain’t it, either. (If the study says it’s true, then I guess you’d better start believing, then, hadn’t you?)

    “don’t stop thinking about tomorrow”
    -SD

  31. David Gorski says:

    “Many of the reasons why clinical studies supply spurious “positive” results (to the rather arbitrary standards commonly accepted within EBM) will not be evident from the published material. Examples are fraud, and chance.

    So I definitely reserve the right not to believe results that go strongly against all other evidence, whether there are obvious flaws in protocol and analysis or not. In my opinion it is a bad mistake to be yielding to the implication that the skeptic must either find error, or accept the findings as valid. This seems to be what SD wants.”

    Indeed. One reason is that, at a very minimum, 5% of the time, even perfectly designed clinical trials will be falsely “positive” by random chance alone, given that the 95% confidence interval is so universally used. There is no such thing as a “perfect” clinical trial; so the number is actually higher.

    But it’s even more than that: Dr. John Ioannidis has shown that the false positive rate is probably considerably higher than 5% for clinical trials that test highly implausible hypotheses, like homeopathy.

    Steve Novella’s written about it before:

    http://www.theness.com/neurologicablog/index.php?p=8

    Alex Tabarrok did perhaps the clearest explanation of the significance of Ioannidis’ findings:

    http://www.marginalrevolution.com/marginalrevolution/2005/09/why_most_publis.html

    He probably overstates how often the null hypothesis is incorrectly rejected, but his basic reasoning is sound.

    And a “friend” of mine wrote extensively about Ioannidis’ findings as well:

    http://scienceblogs.com/insolence/2007/09/the_cranks_pile_on_john_ioannidis_work_o.php

    Personally, I’ve been hoping over the last couple of years that Ioannidis would some day turn his skills to examining CAM research. As one commenter said elsewhere, he could entitle it something like, “If you think science-based medicine clinical trials have problems, wait until you get a load of this.” Unfortunately, thus far I have been disappointed.

    The bottom line is that scientific results–particularly in medicine–span a continuum, and there will always be, thanks to statistical flukes, bad study design, or, rarely, outright fraud (or a combination of some or all), results that are anomalous. Sometimes such anomalies will lead to a rethinking of a principle and new scientific understandings, but more often they are simply anomalies. Cranks will always be able to find one study–or a handful of studies–supporting their pseudoscience. That is why the totality of the literature needs to be evaluated in coming to scientific conclusions about a question in scientific medicine.

    Scientific plausibility also counts. Oh, there may be gray areas, but homeopathy isn’t one of them. It’s a modality that is so implausible that, for it to be true, whole branches of well-established physics, chemistry, and biology would have to be overturned. To do that, evidence for homeopathy at least as compelling as the evidence supporting those fields that say homeopathy can’t work would have to be presented, and that evidence just doesn’t exist. Certainly a study with a barely statistically significant result (if you accept the authors’ statistics, which are questionable) that is modest at best doesn’t count. In cases such as this study, the most parsimonious conclusion is that this is an anomalous study that is spuriously positive through random chance and the vagaries of clinical trial design. It would take a lot stronger of a result plus many additional similar studies to provide evidence that the well established scientific principles that preclude homeopathy from working should be seriously reconsidered.
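    The interaction between prior plausibility and the 5% false-positive floor can be made concrete with the standard positive-predictive-value calculation from Ioannidis’ work; the priors and power below are illustrative numbers, not measured quantities:

```python
# PPV = P(effect is real | trial is "positive"), for a trial with the
# given statistical power run at significance threshold alpha.
alpha, power = 0.05, 0.80  # illustrative: a well-powered trial at p < 0.05

def ppv(prior):
    true_pos = power * prior          # real effects correctly detected
    false_pos = alpha * (1 - prior)   # null effects falsely "detected"
    return true_pos / (true_pos + false_pos)

for prior, label in [(0.50, "coin-flip plausibility"),
                     (0.10, "long-shot drug"),
                     (0.01, "highly implausible claim")]:
    print(f"prior {prior:4.2f} ({label}): PPV = {ppv(prior):.2f}")
```

    At a 1% prior, roughly six of every seven “positive” trials are false positives even before publication bias enters, which is the sense in which plausibility counts.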

  32. daedalus2u says:

    No SD, statistics is only a tool, and like any tool, it is only useful when used in the proper place for the appropriate task by people who know what they are doing.

    Scientific results can’t be looked at in isolation. All scientific results have to “fit” with all other scientific results. If they don’t “fit”, then there is an anomaly which needs to be explained.

    Homeopathy doesn’t “fit” with the rest of science. For homeopathy to be correct, much of what is well known in science would have to be wrong, and wrong in ways that would make homeopathy be correct. That “wrongness” is not consistent and would have to change depending on which aspect of homeopathy is considered to be correct.

    Any theory of reality that includes homeopathy as being correct has to have better predictive value than the current theory of reality that we have (which rejects homeopathy as bogus crap) for us to adopt it. Any such theory has to both explain why homeopathy is correct, and why each and every bit of the collective wisdom of hundreds of thousands of scientists is wrong.

    All the data generated to date by all homeopaths is completely consistent with all therapeutic effects of homeopathy being due to the placebo effect. Why should we reject the science that has produced our modern scientific world in favor of some poorly thought out ideas from 200 years ago?

  33. Mark Crislip says:

    Given my understanding of the pathophysiology of sepsis:
    The symptoms the nostrums were chosen to treat were ridiculous.
    The 6-month endpoint was inane.
    If you apply statistics to nonsense, you get statistically significant nonsense, but nonsense nonetheless.
    The only equally goofy analysis was the Flying Spaghetti Monster’s relationship between global warming and the number of pirates.
    Not the same statistics, but the same validity.

    I was dumped by the quarterback; I ask that you don’t bring up the emotionally traumatic experience again.

  34. criticalist says:

    I would still disagree with saying the 6-month endpoint was “inane”. As a result of the NEJM NICE study there is now a growing realisation that the standard 28-day mortality endpoint is inadequate, and a 3-month (or perhaps even 6-month) endpoint should be looked at instead.

    The problem with the homeopathy study was not the endpoints but that there is not enough information presented to determine if the groups were matched at baseline.

  35. pmoran says:

    We don’t have a double standard. Few drugs get FDA endorsement without at least three studies from independent researchers showing the same statistical outcomes and preferably no negative ones. No homeopathic treatment has come close to this.

    Even this is not enough to sustain a drug or medical procedure within mainstream medical practice. Important treatments are subjected to dozens of additional studies by an ever-expanding panel of researchers. There are many instances where later research has resulted in methods being withdrawn for ineffectiveness.

    We have only recently become aware of all the ways in which drug companies and ambitious researchers have been able to manipulate the research system. Don’t imagine that our wariness regarding isolated and unexpected research findings is confined to those from “alternative” medicine.

  36. wertys says:

    SD, if you want homeopathy to be accepted scientifically then homeopaths have to plausibly show how their contravention of known scientific dogma can be consistent with all other fields of science.

    or they could just get over it and admit that it’s water..

  37. SD says:

    “Given my understanding of the pathophysiology of sepsis”

    Personal problem!

    “The symptoms for which the nostrums were chosen to treat were ridiculous.”

    Personal problem!

    “The 6 month endpoint was inane.”

    Personal problem!

    “If you apply statistics to nonsense, you get statistically significant nonsense, but nonsense not the less.”

    Yes, but you have to find out *why* the nonsense contains significance. You can say ‘happened by chance’ if you do a couple more studies and the effect disappears (that puts it in the realm of parapsychology and other non-reproducible phenomena; interesting, but irrelevant). You can say ‘bogus study’ if you discover that homeless drug addicts were being offered crack to ‘replace’ patients in the study to hide the terrible secret that everybody enrolled in the study died. Otherwise, you can say ‘Huh, that’s interesting’. That you cannot identify a mechanism of operation does not mean anything other than that you cannot identify a mechanism of operation. Absence of evidence is not evidence of absence. In addition, statistical conclusions are independent of the presence of personal problems with the hypothesis. Put on your big-girl panties and deal with the cognitive dissonance.

    “The only equally goofy analysis was the flying spaghetti monsters relationship between global warming and the number of pirates.
    Not the same statistics, but the same validity.”

    Okay, you’re not feeling me here. If a study highlights a statistically significant connection between ‘global warming’ (*spit*) and piracy, then there *is* some type of connection there, assuming that the study is done properly. If we repeat this study until we have accumulated enough data to achieve some arbitrary confidence value for our conclusion – assuming that those studies are properly done in the first place – then the question about whether or not there’s a connection becomes irrelevant. The question becomes “*What’s* the connection?” And that’s where the science happens. (This is kind of a bad example, since it’s creeping into the dark, demon-ridden wasteland between sociology and economics, neither of which is really much of a science.) Hypothesis: ‘global warming’ (*spit*) changes the migration patterns of fish, leading to fishermen becoming pirates. See? This is a hypothesis that can be tested: ask pirates what they used to do for a living; determine whether or not ‘fisherman in a location with a depressed fishing industry’ is a good predictor of ‘becoming Captain Blackbeard’. There are other possible predictors. Crunch data. See if conclusion is supported.

    Oh, wait – wait for it – what stats package do you use to tease out this conclusion…?

    “R!”

    … Please don’t hurt me. >;->

    “I was dumped by the quarterback; I ask that you don’t bring up the emotionally traumatic experience again.”

    I’m sorry, ma’am. Won’t happen again.

    “with that kind of a pitch, you just *have* to take a swing”
    -SD

  38. Mark Crislip says:

    “You may be interested to know that global warming, earthquakes, hurricanes, and other natural disasters are a direct effect of the shrinking numbers of Pirates since the 1800s. For your interest, I have included a graph of the approximate number of pirates versus the average global temperature over the last 200 years. As you can see, there is a statistically significant inverse relationship between pirates and global temperature.”

    http://www.venganza.org/about/open-letter/

    I thought this was well-known information.

  39. David Gorski says:

    SD obviously doesn’t understand the oldest rule in science and epidemiology: Correlation does not necessarily equal causation. Either that, or he’s trolling again. Take your pick.

  40. tmac57 says:

    *Satan’s* *Defender* ?

  41. Mojo says:

    @David Gorski:

    “Indeed. One reason is that, at a very minimum, 5% of the time, even perfectly designed clinical trials will be falsely “positive” by random chance alone, given that the 95% confidence interval is so universally used. There is no such thing as a “perfect” clinical trial; so the number is actually higher.”

    See, for example, the figures produced from time to time by the British Homeopathic Association and the closely allied Faculty of Homeopathy, as to the number of trials of homoeopathy they’ve found:

    Here, for example, is what appears to be a BHA press release stating that up to the end of 2005 they had found 119 randomised peer-reviewed clinical trials (RCTs) of homeopathy, 49% of which were positive, and the rest of which were either “inconclusive” (48%) or “negative” (3%). That works out to 58 positive, 57 inconclusive and 4 negative (“inconclusive” meaning those that didn’t show a statistically significant effect, as far as I can tell). The same figures have also been cited by Peter Fisher.

    Here is a FoH document on the BHA’s website, stating that “up to the end of 2008, 136 RCTs had been published: 59 positive; 9 negative; 68 not statistically conclusive.”

    So a bit of simple arithmetic shows that between the end of 2005 and the end of 2008, there were 17 new trials, one positive, 11 “inconclusive” and five “negative”.

    That’s not far off 5% positive.
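    The arithmetic, spelled out (figures exactly as quoted above):

```python
# BHA/FoH tallies quoted above: cumulative counts at end-2005 and end-2008.
end_2005 = {"positive": 58, "inconclusive": 57, "negative": 4}   # 119 RCTs
end_2008 = {"positive": 59, "inconclusive": 68, "negative": 9}   # 136 RCTs

new_trials = {k: end_2008[k] - end_2005[k] for k in end_2005}
total_new = sum(new_trials.values())
share_positive = new_trials["positive"] / total_new

print(new_trials)
print(f"{total_new} new trials, {share_positive:.1%} positive")
```

    One positive trial out of seventeen works out to about 5.9%, hence “not far off 5%”.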

  42. khan says:

    I do appreciate knowledgeable folks explaining statistics. And showing how the woo folks datamine.

    I have some background in statistics & understand the screaming inanities of the true believers.

  43. Zetetic says:

    I still wonder if SD has a job.

  44. Mojo says:

    See, for example, the figures produced from time to time by the British Homeopathic Association and the closely allied Faculty of Homeopathy, as to the number of trials of homoeopathy they’ve found:

    Here, for example, is what appears to be a BHA press release stating that up to the end of 2005 they had found 119 randomised peer-reviewed clinical trials (RCTs) of homeopathy, 49% of which were positive, and the rest of which were either “inconclusive” (48%) or “negative” (3%). That works out to 58 positive, 57 inconclusive and 4 negative (“inconclusive” meaning those that didn’t show a statistically significant effect, as far as I can tell). The same figures have also been cited by Peter Fisher.

    Here is a FoH document on the BHA’s website, stating that “up to the end of 2008, 136 RCTs had been published: 59 positive; 9 negative; 68 not statistically conclusive.”

    So a bit of simple arithmetic shows that between the end of 2005 and the end of 2008, there were 17 new trials, one positive, 11 “inconclusive” and five “negative”.

    That’s not far off 5% positive.

    It seems I owe the BHA an apology: they’ve updated that second paper (I was looking at an old version I printed out a couple of months ago) to include two more trials, one positive and one negative.

    So they’re now up to just over 10% positive over the last three years.

  45. David Gorski says:

    10% positive is still under the number of false positives Dr. Ioannidis’ work would predict for clinical trials testing an ineffective remedy. Add to that publication bias and the file drawer effect, and it’s actually rather sad that homeopaths can only produce 10%.
