Articles

Homeocracy IV

In the three prior posts of this series I tried to analyze some of the defects in the randomized clinical trials (RCTs) of homeopathic remedies for childhood diarrhea. The first entry showed that the methods of the first two RCTs (done in Nicaragua) could not produce a meaningful result because of the way the trials were set up. The second entry showed that the results obtained in those first two trials were clinically meaningless even if assumed to have come from more legitimate methods. The same applied to the third trial, in Nepal, analyzed in the third entry.

This entry will suggest that the authors’ fourth paper, a meta-analysis (MA) of the data from the three RCTs (Jacobs J, Jonas WB, Jimenez-Perez M, Crothers D. Homeopathy for childhood diarrhea: combined results and metaanalysis from three randomized, controlled clinical trials. Pediatr Infect Dis J 2003;22:229-234), resulted in conclusions just as meaningless as those of the three trials.

The MA authors – several of the same workers from the three RCTs – begin by agreeing that the data from the RCTs, taken individually, were of borderline significance:

In our previous three studies, we evaluated the use of individualized homeopathic treatment of childhood diarrhea … The results of the two larger studies (n = 81, n = 116) were just at or near the level of statistical significance. Because all three studies followed the same basic study design, […] we analyzed the combined data from these three studies to obtain greater statistical power. In addition we conducted a meta-analysis of effect-size difference […] to look for consistency of effects.

MAs and systematic reviews (SRs) are the two consensus methods for summarizing data from multiple individual studies. The search and inclusion methods for the RCTs entered into SRs and MAs are similar, but the objectives of the two differ somewhat, as do the forms of the reports. In SRs the results are summarized in more narrative form, whereas in MAs the data are treated mathematically and the results are expressed in statistical terms. Thus authors of SRs are freer to speculate on the degree of confidence that a method is effective, based on the numbers of positive and negative RCTs collected, while authors of MAs usually limit their comments to what the mathematical formulation of the summarized data shows.

I am not a statistician and will not comment much on the mathematical aspects of the MA in question here, but I will point out that 1) the methods used were standard and reported credibly, and 2) the problems found in the individual RCTs contribute to invalidating the conclusions of the MA. A common warning to authors of MAs is that the outcome of any MA depends directly on the reliability of the individual RCTs.
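To make the mechanics concrete, here is a minimal sketch of the standard inverse-variance (fixed-effect) pooling that MAs of this kind typically use. The per-study effects and standard errors below are placeholders for illustration only, not the data from the Jacobs trials.

```python
import math

# Hypothetical per-study effects (reduction in days of diarrhea, treatment
# minus control) and standard errors -- placeholders, NOT the trial data.
studies = [
    ("trial 1 (Nicaragua)", -0.5, 0.40),
    ("trial 2 (Nicaragua)", -0.8, 0.35),
    ("trial 3 (Nepal)",     -0.6, 0.45),
]

# Fixed-effect (inverse-variance) pooling: weight each study by 1 / SE^2.
weights = [1.0 / se ** 2 for _, _, se in studies]
pooled = sum(w * eff for (_, eff, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))
z = pooled / pooled_se

print(f"pooled effect = {pooled:.2f} days, SE = {pooled_se:.2f}, z = {z:.2f}")
# Pooling shrinks the standard error, so borderline individual results can
# reach nominal significance once combined -- the stated aim of the MA.
```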

But what did this MA show? The primary outcome measure was the number of days from entry until the presence of no more than 2 stools per day for 2 days. 247 subjects entered the study and 230 completed it through to the final end point (nearly equal numbers in each arm dropped out or had incomplete data). There was an overall 18.5 percent difference between the homeopathic treatment and the placebo control groups, the duration of diarrhea falling from 3.8 days in the control group to 3.1 days in the treatment group (P = 0.008). The P value appears to impart high significance to the result. But the same problems that played out in the RCTs played out here in the MA as well.
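As a quick arithmetic check of those figures, using only the numbers quoted above:

```python
control, homeopathy = 3.8, 3.1        # mean days of diarrhea, as reported
absolute = control - homeopathy       # 0.7 days
relative = absolute / control         # fraction of the control duration
print(f"{absolute:.1f} days shorter; {relative:.1%} relative reduction")
# -> 0.7 days shorter; 18.4% relative reduction, close to the reported 18.5%
#    (the exact figure presumably comes from the unrounded means).
```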

Examination of the Kaplan-Meier plot showed that at each day after day 0 the homeopathy group had a lower estimated probability of still having diarrhea than the control group. (Kaplan-Meier curves are constructed to show probabilities, not actual events.) At day 1 (24 h) the difference was about 15 percent, and about the same at day 2, but at days 3 to 4, the period of the primary measure, the difference was greatest, at about 25-30 percent. By the fifth day the difference between the two groups was down to about 10-15 percent. In other words, the largest and most statistically significant difference in the study happened to occur at the time of the selected end point, and was much smaller at the other points. There is no evidence of data manipulation here, just an odd finding. I cannot account for it unless such a difference is a normal characteristic of such studies. Odd.
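For readers unfamiliar with how such curves are built, here is a minimal sketch of the Kaplan-Meier estimator on made-up numbers; it illustrates why the plotted values are estimated probabilities of still having diarrhea rather than counts of actual events.

```python
import numpy as np

# Made-up durations (days until the diarrhea resolved); a 0 in 'resolved'
# marks a child censored (dropped out) before resolution. Not trial data.
days     = np.array([1, 2, 2, 3, 3, 3, 4, 5, 5, 6])
resolved = np.array([1, 1, 1, 1, 1, 0, 1, 1, 1, 1])

# Kaplan-Meier: at each event time t, multiply the running probability by
# (1 - d_t / n_t), where d_t = resolutions at t and n_t = children still
# at risk (unresolved and not yet censored) just before t.
surv = 1.0
for t in np.unique(days[resolved == 1]):
    n_t = np.sum(days >= t)                      # still at risk
    d_t = np.sum((days == t) & (resolved == 1))  # resolved at t
    surv *= 1.0 - d_t / n_t
    print(f"day {t}: estimated P(still has diarrhea) = {surv:.2f}")
```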

But stripping away the arithmetic and statistical wording and describing what actually occurred: even at the point of maximum difference between homeopathy and controls, the gap amounted to the difference between having 3 (or slightly more) stools per day and no more than 2 stools per day, for a day or so, at only one of the four measured periods of the study. The calculated difference in days to the end point was slightly more than half a day. Most patients and most family members would hardly be aware of such a small difference. A sort of homeopathic difference.

In addition, the authors looked for any differences between the groups that could affect outcome, such as age and size, arising during assignment or randomization, and found a difference in assignment that favored the homeopathy group (P = .025). Not a great difference, but an indication of how much chance differences can contribute to study outcomes, and that combining studies and increasing numbers does not always even out random imbalances. The difference also suggests that there can sometimes be hidden biases in apparently well carried-out studies.
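The role of chance here can be illustrated with a small simulation (again made-up data, not the trial’s): compare two arms drawn from the same population on enough baseline variables, and roughly 1 in 20 comparisons will come out “significant” at P < .05 by chance alone.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_per_arm, n_covariates = 115, 20   # ~115 per arm (230 completers); 20 is arbitrary

false_positives = 0
for _ in range(n_covariates):
    # Both arms drawn from the SAME distribution, so any "significant"
    # baseline difference is pure chance.
    a = rng.normal(size=n_per_arm)
    b = rng.normal(size=n_per_arm)
    _, p = stats.ttest_ind(a, b)
    false_positives += int(p < 0.05)

print(f"{false_positives} of {n_covariates} baseline comparisons 'significant' by chance")
# On average about 1 in 20 -- which is why an isolated post-randomization
# imbalance at P = .025 is not surprising, even in a well-run trial.
```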

So the MA showed the same kinds of outcomes as the three individual studies did, but with a smaller P value from the larger number of subjects obtained by combining the data. Pretty much what would have been expected. But the finding concerns a minor symptom, and the results should not be extended to the general population, since the most severely affected children were hospitalized and excluded from the study.

However, in the political system of homeocracy, rules change at the power of thought; the scientific method becomes a “living, breathing document” whose principles are malleable to fit circumstances. Thus, despite the minimal findings from this series of studies and the minimal boost in significance from the MA, the authors conclude again that homeopathy might be added to the therapy of childhood diarrhea because, although the advantage is small, the treatments are harmless, the worldwide clinical problem is great, and so homeopathy would be an advantage to public health.

One can still wonder how papers with such comments pass editorial and peer review – but then, papers with famous names tend to get attention (Wayne Jonas is a former director of the Office of Alternative Medicine and a homeopath).



5 thoughts on “Homeocracy IV”

  1. Wallace Sampson says:

    I should have credited the authors for carrying out an analysis of effect size on the three RCTs, which confirmed the consistency of the RCT results – although that was apparent just from eyeballing them. The result tends to support the credibility of the reports and of the MA. However, it does not negate the RCT and conclusion defects listed (or others that other readers might see).

  2. Karl Withakay says:

    Would it be fair to say that an MA is unlikely to be of better quality than the studies that comprise it? (Garbage in = garbage out)

    I would think that the biggest reasons for MAs are either to provide statistically significant conclusions from studies that were individually too small to do so (or whose results were too small to be significant), or to reconcile a body of (quality) studies that produced differing results.

  3. Wallace Sampson says:

    Karl Withakay:

    Yes to all. I did not want to go too far in judging value with this post, hoping whatever readers were still interested enough would draw their own conclusions.

    The reason given by the MA authors was just that: to lend more credibility to their RCT findings by increasing the N and thus, if possible, reducing the P (the likelihood that the RCT results were due to chance).

    In an arithmetical sense, they succeeded. But in a realistic sense nothing changed. The basic premises of the trials – defined in the methods – were not in accord with the assumptions made in collecting data that are treated by RCT statistics or by MA statistics. Both assume that the data are representative of the population, with characteristics common to all subjects and to all study arms – homogeneous.

    In this series, each patient was treated individually, with only a small number receiving the same homeopathic preparation.

    In addition, only the symptom – diarrhea – was treated, the causes being multiple: several viral, bacterial, and unknown. And even the diarrhea had different characteristics, each one calling forth its own remedy. Besides, those characteristics probably varied from day to day and from hour to hour, yet the treatment was based on a single snapshot in time and continued through the course despite the changes that occurred in stool characteristics.

    So it is invalid to apply standard RCT statistical methods – even non-parametric ones – to such a heterogeneous collection of data bits. Nevertheless, they came up with three studies, all with similar results. How come? Any answers out there?

    Can we start with whether or not the studies were carried out as stated? Was there some systematic error we do not see?

    Perhaps all readers would have to study the reports carefully before answering, like I did. I found no foolproof suspect infobit.

    But when faced with improbable results, what are the possibilities? Here are numerous materials that we agree should be inactive, yet with positive although minor results.

    I recall one editorial in Lancet after one of the Reilly papers 10+ years ago, stating that what the study compared was two placebos, yet with a positive outcome. When that happens, which is more likely: the improbable result, or that an error or manipulation occurred? Several of us know some of what happened in that Lancet series. Most of us accept the Ioannidis proposal that most research findings are false (or at least wrong).

    What else can one say about this set?

    WS

  4. Karl Withakay says:

    Wallace, thanks for the reply and excellent post.

    I find it interesting that while the “CAM” world accuses scientific medicine of treating the symptoms and not the disease, homeopathy, via the law of similars, is exclusively concerned with the symptoms in treating a disease, not the underlying cause. Like supposedly cures like: whatever causes a symptom is supposed to cure anything else that causes the same symptom, and that concept is held to be universal for all diseases.

  5. DanaUllman says:

    I’m glad that Sampson acknowledges that he is not a statistician…and it shows. The bottom line is that with self-limiting conditions, the significance between treatment and control groups predictably disappears over time. However, the fact of the matter is that there was a “significant” difference in improvement sooner in the homeopathic treatment group.

    Since Sampson is an oncologist, maybe he can explain how cancer cells and genes can respond to several homeopathic medicines in high potency, while the controls did not. I bet that Sampson claims that the cancer cells were pro-homeopathy, and they wanted to change.

    A new study was just published in eCAM (published by Oxford University Press), a journal that has become the most respected peer-reviewed publication in the field of alternative and complementary medicine. Of special interest is the fact that this study has repeatedly shown that homeopathically potentized doses have dramatic effects on various kinds of cancer cells, not just in the short term but the long term. This research also shows that various homeopathic medicines have dramatic effects on gene expression (this is the type of evidence that conventional drug companies LOVE to see for their drugs…and there is increasing evidence that homeopathic medicines have this profound effect).

    Sunila ES, Kuttan R, Preethi KC, Kuttan G. Dynamized preparations in cell culture. Evid Based Complement Alternat Med 2009;6(2):257-263. doi:10.1093/ecam/nem082
    http://ecam.oxfordjournals.org/cgi/content/abstract/6/2/257?etoc

    ABSTRACT:
    Although reports on the efficacy of homeopathic medicines in animal models are limited, there are even fewer reports on the in vitro action of these dynamized preparations. We have evaluated the cytotoxic activity of 30C and 200C potencies of ten dynamized medicines against Dalton’s Lymphoma Ascites, Ehrlich’s Ascites Carcinoma, lung fibroblast (L929) and Chinese Hamster Ovary (CHO) cell lines and compared activity with their mother tinctures during short-term and long-term cell culture. The effect of dynamized medicines to induce apoptosis was also evaluated and we studied how dynamized medicines affected genes expressed during apoptosis. Mother tinctures as well as some dynamized medicines showed significant cytotoxicity to cells during short and long-term incubation. Potentiated alcohol control did not produce any cytotoxicity at concentrations studied. The dynamized medicines were found to inhibit CHO cell colony formation and thymidine uptake in L929 cells and those of Thuja, Hydrastis and Carcinosinum were found to induce apoptosis in DLA cells. Moreover, dynamized Carcinosinum was found to induce the expression of p53 while dynamized Thuja produced characteristic laddering pattern in agarose gel electrophoresis of DNA. These results indicate that dynamized medicines possess cytotoxic as well as apoptosis-inducing properties.

    And while I’m here, have you seen this new great article about the dirty tricks and tactics of quackbusters?

    http://www.jpands.org/vol14no2/clay.pdf

    Welcome to the world of the double-standard of quackbusters.
