CAM and Fibromyalgia

One of the common themes regarding alternative medicine is the reversal of normal scientific thinking. In science, we must generally accept that we will fail to validate many of our hypotheses. Each of these failures moves us closer to the truth. In alternative medicine, hypotheses function more as fixed beliefs, and there is no study that can invalidate them. No matter how many times a hypothesis fails, the worst that happens is a call for more research.

Sometimes this is the sinister and cynical intent of an alternative practitioner—refuse to let go of a belief or risk having to learn real medicine. Often, though, there are flaws in our way of thinking about data that interfere with our ability to understand them.

This week, the New York Times had a piece on alternative therapies for fibromyalgia. First, a little background.

Fibromyalgia syndrome is a poorly understood and controversial pain syndrome. In brief, it identifies patients who have significant chronic pain that is not due to any identifiable pathology. It probably encompasses a heterogeneous group of problems, but our understanding is limited. There may be changes in the way the nervous system handles pain signals, but even this is not yet clear. It’s a disorder that can be very frustrating to treat, and even more frustrating to have. It is often co-morbid with depression, and the pain can be quite resistant to treatment.

Some practitioners deal with this by rejecting the diagnosis as being vague and useless. Others use the limited evidence we have to develop a treatment plan. And yet others turn to alternative medicine, and that is the topic of the Times piece. The article is a brief presentation by an expert with Q&A in the comment section.

One exciting area of research in the past decade has been in the realm of complementary and alternative medicine, or CAM, treatments for fibromyalgia. These range from well recognized therapies like acupuncture and massage to more novel treatments like d-ribose and qi-gong.

As this research grows, it is increasingly possible to identify CAM therapies that have some evidence of efficacy and minimal risk that can be incorporated right along with the more conventional treatment recommendations.

This is the typical claim of alternative medicine: it’s relatively harmless, and might even help. But what does the evidence say, and what are we to make of it?

One of our goals here at Science-Based Medicine is to recognize that in traditional evidence-based medicine it is easy to become overly reliant on the results of randomized controlled trials (RCTs). While EBM does take into account the concept of plausibility, this is often lost when the data is “hot”. A paper looking at CAM therapy for fibromyalgia was recently released, and it can serve as an example of how to think about these problems.

By way of introduction, let’s look at the abstract:

Best evidence was found for balneotherapy/hydrotherapy in multiple studies. Positive results were also noted for homeopathy and mild infrared hyperthermia in 1 RCT in each field. Mindfulness meditation showed mostly positive results in two trials and acupuncture mixed results in multiple trials with a tendency toward positive results. Tendencies for improvement were furthermore noted in single trials of the Mesendieck system, connective tissue massage and to some degree for osteopathy and magnet therapy. No positive evidence could be identified for Qi Gong, biofeedback, and body awareness therapy.

This paper reviewed other studies to see which CAM therapies showed promise. The authors’ conclusions were based on the “positivity” of RCTs; that is, a modality was seen as possibly effective if the RCTs supported the hypothesis. This was despite the finding that, overall, the studies were of mediocre quality. There are a number of flaws in this approach.

The authors state explicitly their reliance on the reputation of RCTs:

Most of the represented publications review both randomized controlled trials (RCTs) and non-RCTs. Though RCTs are considered as the strongest research basis for clinical recommendations in evidence-based medicine, RCTs are particularly difficult to perform in many categories of CAM medicine. An individualized approach to the patient in diagnosis and therapy is often already part of the healing process itself. This makes standardization and the creation of control groups in order to rule out so called ‘placebo effects’ often very challenging, and blinding of both patients and medical practitioners sometimes impossible. Thus, it is not surprising that many authors focus on different study designs to fully cover the field. However, as RCTs are considered to be less liable to bias, there is also need for publications that focus on only these kinds of clinical trials. In 2002, a methodologically impressing publication appeared that covered RCT research on non-pharmacological approaches in fibromyalgia [24].

The authors reiterate the special pleading used by CAM advocates to avoid being subject to scientific investigation, but decide to focus on RCTs to sidestep the issue. What they fail to do is state explicitly what RCT results mean, beyond being positive or negative.

Plausibility

Dr. Harriet Hall would remind us of Tooth Fairy Science. We can measure all sorts of data about the tooth fairy, including the average amount left per tooth and the average age of the children visited, but if we forget to question the fairy’s existence, we have failed to ask the most important question. It may be true that an RCT showed improvement in fibromyalgia patients using homeopathy, but since homeopathy is water, there is no reason to expect causality, and the results may be better explained by some other phenomenon.

This is explained mathematically by Bayes’ Theorem: if the prior probability that a positive result reflects a real effect of the intervention is very low (say, because of implausibility), then any positive finding is far more likely to be due to chance than to causality.
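To make this concrete, here is the theorem applied to a trial result, with illustrative numbers: assume the conventional 5% false-positive rate and 80% power, and suppose, purely as an assumption for this example, that only 1 in 1,000 hypotheses of the kind being tested is true.

\[
P(\text{true}\mid\text{pos}) = \frac{P(\text{pos}\mid\text{true})\,P(\text{true})}{P(\text{pos}\mid\text{true})\,P(\text{true}) + P(\text{pos}\mid\text{false})\,P(\text{false})} = \frac{0.80\times 0.001}{0.80\times 0.001 + 0.05\times 0.999} \approx 0.016
\]

Under these assumptions, a “statistically significant” result still leaves a better than 98% chance that the hypothesis is false; the significance threshold alone tells us very little when the prior is small.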

Confounding natural variation with causation

Fibromyalgia is a syndrome whose symptoms naturally wax and wane. It can be very easy to confuse a change in disease state that occurs during a study with an actual treatment effect. Rigorous controls can minimize this but not eliminate it. If, by chance alone, subjects in the treatment group improved because of the natural history of their disease, this will look statistically like a “win”. This makes the study of such disorders difficult, and it opens a big door for CAM, as it is easy to convince others to follow your misattribution of cause. Related concepts include lead-time bias and regression toward the mean; a toy simulation of the latter appears below.
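The following sketch is illustrative Python only; the score scale, the Gaussian noise, and the enrollment cutoff are assumptions invented for the example, not fibromyalgia data. It enrolls simulated patients only when their symptoms happen to be flaring, then re-measures them with no treatment at all, and the group average still “improves”:

import random

random.seed(1)

# Each simulated "patient" has a stable baseline pain score.
N = 10_000
baselines = [random.gauss(5.0, 1.0) for _ in range(N)]

def pain_score(baseline):
    """One day's pain score: the patient's baseline plus random fluctuation."""
    return baseline + random.gauss(0.0, 2.0)

# Enroll only patients who happen to be measured during a flare (score >= 8),
# as trials of waxing-and-waning illnesses tend to do.
enrolled = [b for b in baselines if pain_score(b) >= 8.0]

# Re-measure the same patients later, with NO intervention at all.
follow_up = sum(pain_score(b) for b in enrolled) / len(enrolled)

print(f"enrolled: {len(enrolled)} patients (entry score >= 8)")
print(f"mean follow-up score: {follow_up:.2f}")
# The untreated group's mean falls well below the entry cutoff:
# regression toward the mean masquerading as a treatment effect.

Any therapy given to these simulated patients, active or inert, would appear to “work” by a before-and-after comparison, which is exactly why a concurrent randomized control group is essential.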

Built into this is the common cognitive error of confirmation bias. If you are a believer in the intervention, you may be prone to attribute positive results to the intervention even if there is no causation.

Damned statistics and replicability

The statistical tools we use to interpret RCTs are designed to help us distinguish systematic variation in the data from chance alone. There are a number of arbitrary assumptions built into this system. For example, if results are described by a normal distribution, we may define “abnormal” as the highest and lowest 2.5% of results. If a single RCT shows statistically promising results (say, >2.5 SDs from the mean), then it is “positive”, but this may still be due to chance alone. A well-designed study can reduce the probability of such a spurious result but cannot eliminate it. This is one of the reasons a single positive test of a less plausible hypothesis must be replicated before we get too excited. A quick simulation, below, shows how often chance alone can clear such a bar.
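Here is a minimal sketch (again illustrative Python; the per-arm sample size and the 2.5-SD cutoff are arbitrary choices echoing the example above) that runs many simulated trials in which the “treatment” is inert:

import random
import statistics

random.seed(2)

TRIALS = 10_000   # simulated placebo-vs-placebo RCTs
N = 30            # patients per arm

positives = 0
for _ in range(TRIALS):
    # Both arms draw from the SAME distribution: the "treatment" is inert.
    treated = [random.gauss(0.0, 1.0) for _ in range(N)]
    control = [random.gauss(0.0, 1.0) for _ in range(N)]
    diff = statistics.mean(treated) - statistics.mean(control)
    se = (2.0 / N) ** 0.5  # standard error of a difference of two means
    if abs(diff) / se > 2.5:
        positives += 1

print(f"{positives} of {TRIALS} null trials "
      f"({100 * positives / TRIALS:.1f}%) cleared the 2.5-SD bar")

Roughly 1% of these null trials come out “positive”. Across the many small CAM trials published, a steady trickle of positive results is exactly what chance alone predicts.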

The bottom line

Fibromyalgia is a complicated syndrome whose very nature makes it susceptible to the abuses of CAM practitioners. When evaluating a therapy for a complex disorder whose natural history is variable, we must very carefully separate causation from correlation, recognize our own biases, and remember that a positive result in a randomized controlled trial does not necessarily confirm a hypothesis. If an intervention has no plausible way of working, any positive results are likely a statistical artifact. Science is hard work, but the results are worth it.

___________________________________
References

Baranowsky, J., Klose, P., Musial, F., Haeuser, W., Dobos, G., & Langhorst, J. (2009). Qualitative systemic review of randomized controlled trials on complementary and alternative medicine treatments in fibromyalgia. Rheumatology International. DOI: 10.1007/s00296-009-0977-5


243 thoughts on “CAM and Fibromyalgia”

  1. manixter says:

    Fibromyalgia is a very helpful diagnosis in that it immediately notifies the astute practitioner that the patient is crazy…

  2. @manixter,

    I can’t agree with that statement, however lightly it may have been intended.

    That statement implies that there is nothing wrong with the patient other than a delusion; the patient likely has a legitimate problem that causes them real suffering, and even if that problem is pure delusion, the suffering (even if it is just psychological) is real, and the attitude is not helpful.

    People cling to other similar diagnoses for idiopathic suffering, like Morgellons, Chronic Lyme, and Gulf War Syndrome, partly because they feel having a name for their condition gives them some power over their problem, and they feel that attempts to invalidate the diagnosis are attempts to invalidate or deny their suffering and take that power away.

    One thing that SBM skeptics/critical thinkers need to always keep in mind is not to deny someone’s suffering while questioning the diagnosis/theory of the cause of that suffering, however hard that may be to do.

  3. “An individualized approach to the patient in diagnosis and therapy is often already part of the healing process itself.”

    I don’t think anybody that uses such a statement realizes it can be interpreted as an acknowledgment that nearly all CAM is an elaborate placebo.

    ” so called ‘placebo effects’ ”

    What’s with the so called and single quotes?

    You’ve got to hand it to these advocates of so called ‘alternative medicine’ :)

  4. wondering says:

    “a positive result of a randomized-controlled trial does not necessarily confirm a hypothesis.”

    No, it doesn’t, but repeated positive results might.

    “If an intervention has no plausible way or working, any positive results are likely a statistical artifact.”

    If you don’t understand how or why an intervention works, that doesn’t necessarily mean it doesn’t work. You can’t ignore RCTs and statistics merely because you don’t agree with them.

  5. Peter Lipson says:

    Wondering, you may need to read it more carefully, or have me explain it better, or read the links on prior probability.

    Correlation does not prove causation. While there may be a causal link between two variables, certain things make it less likely, and some RCTs can, in fact, be ignored, or at least properly interpreted.

    If the prior probability of something working (say, fairy dust for melanoma) is very low, but you still get a positive result on an RCT, it is mathematically very likely that you have a false positive. It is also likely that there are other, better explanations for your results than fairy dust helping melanoma.

  6. wondering says:

    “Correlation does not prove causation.”

    The purpose of experimental controls is to demonstrate causation. If the experimental group gets the treatment and the control group gets a placebo, and there is a statistical difference between the groups, then the treatment probably caused the difference. That is the whole point of doing RCTs.

    If you do only one experiment, you cannot draw any definite conclusion based on inferential statistics. Depending on the cutoff, there may be a 1 percent chance the treatment is not effective. That is why RCTs must be replicated by other researchers.

    “If the prior probability of something working … is very low, but you still get a positive result on an RCT, it is mathematically very likely that you have a false positive.”

    Actually no. Let’s say your p value (the probability of the results occurring by chance) is .001 — then no matter what the prior probability might happen to be (and different people will disagree about prior probabilities), a false positive is still unlikely.

    Of course a false positive is always possible, but the p value tells you how likely it is. And the more times your RCT is replicated, the less likely it was a false positive.

    Prior probability can be taken into consideration, and it actually always is — no one devotes time, effort and resources testing a theory they believe is absurdly implausible. And no one bothers testing a theory that is highly plausible and already accepted.

    The purpose of research is to test theories that someone thinks might be true. If you apply for a research grant you would not say “My theory is almost certainly true and I want to spend the next year finding out if it really is.”

    You also would not request funding to test a ridiculous theory that you feel is implausible.

    So prior probability is very often in the mind of the researcher. If hundreds of carefully controlled experiments have already shown your theory to be false, then yes it has a low prior probability.

    But for questions that have not already been answered by experiments, prior probability is largely subjective and not quantifiable.

  7. OZDigger says:

    In regard to the use of placebos, this came from the New York Times.
    “Social support and beliefs affect a patient’s ability to rebound from illness, Dr. McDiarmid added, pointing out that over half of the people who respond to antidepressants do so because of the placebo effect.”

    http://www.nytimes.com/2009/09/20/us/20shaman.html?_r=1&em

    So it would appear that the use of placebos is alive and well in science based medicine these days. Anti-depressants as a placebo are not as safe as homeopathy.

  8. Peter Lipson says:

    Prior probability can be taken into consideration, and it actually always is — no one devotes time, effort and resources testing a theory they believe is absurdly implausible.

    You are far too kind and optimistic. NCCAM is only one example of an agency tasked with testing absurdly implausible claims.

  9. wondering says:

    “NCCAM is only one example of an agency tasked to testing absurdly implausible claims.”

    From your point of view maybe, but obviously not from the point of view of the researchers it funds. Scientists are not always in agreement, and in fact they often disagree. That’s how it should be.

  10. atomato says:

    @Karl Withakay: thanks for your response regarding the subtleties associated with a fibromyalgia diagnosis.

    regarding your comment about the placebo effect, however – one of my neuroscience professors (extremely distinguished in his field) once described the placebo effect as a category that modern medicine lumps phenomena into when it does not yet know how to explain them.

    there are many things that are unknown (probably much more than is known). we are at a continually expanding frontier, and must remain humble.

    i’m not convinced that all phenomena experienced by humans can be measured and described by the tools developed by science, and i see no reason to assume why they should. i have a deep level of respect and awe for that which scientific inquiry has shown us so far. but, with all due respect (and there is much due, of course) – assuming that the scientific method + materialistic worldview can and will provide a complete explanation for everything sounds just as dogmatic to me as pure, blind faith in a religion. I think it is often quite implicit, though – many people do not realize that this is what they are doing. Most of us have been brought up in a highly technophilic society, and it can be difficult to see the subtle memes that shape our perspective.

    The way that people use the concept of ‘prior probability’ to discount some research is sometimes a consequence of this standpoint.

    thank you for a well-written article.

  11. OZDigger says:

    Another example of the Scientific use of placebo, showing that they work, and cause fewer problems than antibiotics.

    Antibiotic treatment of acute otitis media in very young children
    The authors of this paper point out that guidelines recommend prescription of antibiotics in children with severe acute otitis media and in those under 2 years of age with bilateral acute otitis media or acute otorrhoea. For most other children with acute otitis media, initial observation is recommended. Such prescribing may shorten the course of the illness but may tend to over treatment. Their prospective trial involved 168 children aged 6 months to 2 years with acute otitis media in 53 general practices in the Netherlands. Half were treated with amoxicillin 40 mg/kg/day and the other half with placebo.
    After 3.5 years they found that acute otitis recurred in 63% of the amoxicillin-treated group and 43% in the placebo group. Subsequent referral for secondary care was necessary in 30% of both groups. Their conclusion was that antibiotics are overused in such patients and should be used more judiciously.
    BMJ 2009;338:b2525dol.10.1136/bmj.b2525

  12. superdave says:

    A word about P-values,
    Yes it is true that a low people value means the odds are low that your data occurred purely by chance, but the T-test doesn’t know how you took your data. It is possible that there are variables which affected the outcome of the data that are merely artifacts of how the data was taken. A good example of this was a paper which came out a couple of years ago that showed the results of a well known experiment on the concept of cognitive dissonance may well be due solely to an artifact in the analysis of the probabilities involved in the experiment. Steve Novella blogged about this paper if you want to look for it.

  13. superdave says:

    oh man its late, obviously i meant p value and not people value

  14. wondering says:

    superdave,

    A low p value doesn’t tell you that the experiment was well designed. A stupid experiment can have a low p value, but it’s still stupid and does not answer the question it claimed to ask. But if an experiment is sensibly designed and the treatment group differs from the control group, and the difference is probably not because of random variance, then you can’t discard the result simply because you feel it’s implausible.

  15. Scott says:

    “Actually no. Let’s say your p value (the probability of the results occurring by chance) is .001 — then no matter what the prior probability might happen to be (and different people will disagree about prior probabilities), a false positive is still unlikely.”

    This is simply false. Even with a p-value of 0.00001, a false positive can STILL be the most likely explanation if the prior probability is sufficiently low.

    “Prior probability can be taken into consideration, and it actually always is — no one devotes time, effort and resources testing a theory they believe is absurdly implausible. And no one bothers testing a theory that is highly plausible and already accepted.”

    Also false. Most CAM studies simply ignore prior plausibility. Whereas good scientists often test absurdly implausible theories (one of my friends in grad school was looking for Lorentz violation, which is about as absurdly implausible as it gets – and yes, he and our advisor agreed that it was absurdly implausible, but it still needed to be checked), AND test highly plausible and already accepted theories (the majority of the field of high energy physics is devoted to attempting to disprove the Standard Model).

  16. wondering says:

    “Even with a p-value of 0.00001, a false positive can STILL be the most likely explanation if the prior probability is sufficiently low.”

    Only if the prior probability is known to be low because the hypothesis has been sufficiently tested. If 100 experiments say theory A is false, and 1 experiment says it’s true, then a false positive is likely no matter how low the p value. But if the prior probability is assumed to be low because the theory is not generally accepted, but has not been carefully tested, then a false positive is unlikely.

    You can only calculate the prior probability if an adequate amount of high quality research has already been done. Otherwise it’s subjective and not quantifiable.

  17. Scott says:

    The prior probability can also be known to be low because, if it were true, it would invalidate a great deal of what’s known in the field – a situation which results in most experiments in the field in question effectively serving as an indirect test of the hypothesis.

    This is very much the case for most of the typical CAM modalities. Homeopathy and reiki are the most glaring examples (being completely contradictory to the entire body of biology, chemistry, AND physics), but chiropractic and acupuncture also fall into the class of “if they’re correct then most of what we believe we know about anatomy, physiology, and biology is wrong”.

  18. wondering says:

    “The prior probability can also be known to be low because, if it were true, it would invalidate a great deal of what’s known in the field”

    You mean if it contradicts certain assumptions, which is not the same as invalidating known facts. It doesn’t seem at all fair to discount any experiment that seems to verify any CAM treatment, just because the theory violates your sense of how things ought to be. Not that I believe in the CAM theories. But when RCTs seem to verify them, you have to accept it as evidence. Otherwise, there is no point in anyone doing RCTs.

    Will you only accept evidence that confirms your prior assumptions? That would be the opposite of the scientific method.

  19. Scott says:

    I’m referring to things such as the fact that reiki working would require a new type of physical interaction, having a strong effect in macroscopic situations. Such an interaction would be readily observed in other experiments (e.g. those measuring the parameters of the Standard Model); therefore the fact that it is NOT so observed is strong evidence that the prior probability is exceedingly low.

    So no, I most definitely do NOT mean certain “assumptions.” I mean the fact that all sorts of experiments would already have detected qi/Innate/memory of water if in fact they existed and had the claimed effects. This constitutes exceedingly strong evidence that they do not, in fact, exist, and therefore the prior probability must be judged as infinitesimal.

    Now, one might claim that there is some effect of (say) acupuncture that has only to do with poking the skin and not qi; however this is functionally equivalent to acknowledging that acupuncture is complete hokum – because the principles on which it is proposed to work, and the way it is applied, have no validity.

  20. trrll says:

    “So it would appear that the use of placebos is alive and well in science based medicine these days. Anti-depressants as a placebo are not as safe as homeopathy.”

    That might be relevant, if antidepressants were given as placebos. But they are not. A placebo, by definition, is biologically inactive. Antidepressants are given as active drugs because they have been found to work better than placebos.

    To the extent that there is a placebo effect that is not due to statistical artifacts such as regression to the mean, the placebo effect may reinforce the biological effect of the active ingredient in a drug. That would certainly be a good thing, but it is not the reason for giving the drug.

    By the way, what is the evidence for the “safety” of placebos? I find it curious that placebo advocates assign credit to the placebo for beneficial effects, yet hold it blameless for the long list of adverse effects, some of them serious, that routinely crop up in the placebo arm of controlled trials.

  21. trrll says:

    “Let’s say your p value (the probability of the results occurring by chance) is .001 — then no matter what the prior probability might happen to be (and different people will disagree about prior probabilities), a false positive is still unlikely.”

    Assuming, of course, that there are no unrecognized biases or errors in the experiment. After all, a low p value only tells you that two treatment groups are likely different; it doesn’t tell you what that difference is. And assuming also that the statistical assumptions upon which the calculation of the p value is based are correct. For example, it is often implicitly assumed that the distribution of error is Gaussian, even when there is insufficient data to adequately test that assumption.

    I’d speculate that it is a rare experiment when the likelihood of some kind of error is below 0.001. So one has to take very low p values with a grain of salt, particularly when the experiment has not been replicated, preferably by different investigators.

  22. wondering says:

    “one has to take very low p values with a grain of salt, particularly when the experiment has not been replicated, preferably by different investigators.”

    I already said all that. A low p value simply means (if the statistics were done correctly) that the between group difference was probably not the result of random within group variance.

  23. kausikdatta says:

    trrll:

    “For example, it is often implicitly assumed that the distribution of error is Gaussian”

    I have to disagree with that. There are no ‘implicit’ assumptions. You either have a Gaussian distribution or you do not. This needs to be tested every time before you can decide whether you’d use a parametric test or not.

    wondering, what you say is reasonable:

    It doesn’t seem at all fair to discount any experiment that seems to verify any CAM treatment, just because the theory violates your sense of how things ought to be… But when RCTs seem to verify them, you have to accept it as evidence.

    However, as Scott says, plausibility determination prior to the study is very critical in later interpretation of the observations. A statistical significance (which is just a means of analyzing numbers) does not always predicate biological significance (which depends upon the interplay of physiological parameters). CAM theories like Reiki and Water memory are biologically implausible (based on our existing knowledge of anatomy, physiology, biochemistry and physics). Therefore, positive findings coming out of CAM studies must be subjected to close and greater scrutiny.

    This is not unique to CAM; any time a new paradigm or hypothesis – that goes against existing knowledge – is propounded, in science it is always put up for stricter oversight, and parameters, such as repeatability of experiments, specificity of outcomes, confirmation from multiple approaches etc., are rigorously tested. Why should CAM be any different? If CAM therapies indeed work, why would they not be testable, why would they always need some sort of special pleading – mostly to justify their lack of difference in efficacy compared to placebo?

  24. wondering says:

    “Any time a new paradigm or hypothesis – that goes against existing knowledge – is propounded, in science it is always put up for stricter oversight”

    Yes of course. I said that if 100 experiments showed that theory A is probably false, and one experiment showed it’s probably true, then theory A is still very much in doubt. But the original post said if a theory is implausible, then any experiment that seems to verify it is most likely a false positive.

    That statement is much too vague and general. You have to define prior probabilities so they are quantifiable. How many experiments say “yes” vs. how many say “no?” What is the quality of the experiments?

    That’s why meta-analyses are done, to gather the results of experiments done by many different researchers.

    We can’t define prior probabilities as some kind of gut feeling that tells you “fairy dust can’t cure warts.” Science should consider the evidence, and gut feelings are suspect.

  25. pmoran says:

    Wondering –

    “That statement is much too vague and general. You have to define prior probabilities so they are quantifiable. How many experiments say “yes” vs. how many say “no?” What is the quality of the experiments?

    That’s why meta-analyses are done, to gather the results of experiments done by many different researchers.

    We can’t define prior probabilities as some kind of gut feeling that tells you “fairy dust can’t cure warts.” Science should consider the evidence, and gut feelings are suspect.”

    PM> I am not sure what you class as a gut feeling and what you mean by “the evidence”. What about CAM propositions that are so inherently silly that no scientist has ever bothered to test them out?

    In fact, often the failure of proponents to perform simple, obvious, NECESSARY studies before announcing core claims is one of the reasons for not taking claims seriously, especially when the pseudoscientists also cut corners by looking for effects upon subjective and other unstable outcomes in tricky, indirect tests such as clinical studies, knowing that this is an easier way of obtaining a few “positive” results.

    I maintain that there are usually sound reasons for the kind of implausibility we are talking about and that it is certainly extreme enough to cancel out high P levels in certain types of study.

    Here is a “scientific” observation that everyone has made in their kitchens. Dilution and succussion NEVER enhances the biological or physical properties of a solution. Everyone knows this for a fact. No reader will hurry off to do the relevant experiments with their coffee or pharmaceuticals.

    Yet this kind of universal human experience can be overlooked when we permit wrong impressions to prevail as to what “science” is. In truth, we don’t know what it is — we only know that certain modes of thought are helpful in getting closer to truth about the world.

  26. kausikdatta says:

    @Peter:

    “Dilution and succussion NEVER enhances the biological or physical properties of a solution. Everyone knows this for a fact. No reader will hurry off to do the relevant experiments with their coffee or pharmaceuticals.”

    Quite right. Actually, it should be added that wherever and whenever such experiments have been performed, they have always shown that dilution, indeed, reduces the biological properties; for example, when an antimicrobial drug is diluted serially, there is a point at which it stops working on its target microbe.

    As far as I know, the homeopathic principle is supposed to work on the basis of molecular mimicry; the active principle or the drug is supposed to stimulate the same symptoms as the disease, and therefore, use of smaller quantities may actually sensitize or immune-modulate the host, so that the host mounts a stronger response to the disease condition.

    What makes absolutely no sense is that weird principle of very large dilutions leaving virtually no molecule of the drug in the solution. A 6X dose reflects a 10^6-fold serial dilution, which probably is not so bad as far as the presence of the drug is concerned, but the preparation is supposed to increase in ‘potency’ with further and further dilutions, which is a physical impossibility. Homeopaths often prescribe 30X and 200X doses!!

    What also is extremely dubious is their insistence on a weird mind-body ‘axis’ for virtually every disease, including infectious ones, so that homeopathic medicines are ‘personalized’ for individual patients based on their ‘personality’ and ‘history’. In most cases, this represents a gross ignorance of physiology in health and disease, as well as pathogenesis.

  27. kausikdatta says:

    By ‘Peter’, I meant Peter Moran – the commenter above mine.

  28. wondering says:

    “What about CAM propositions that are so inherently silly that no scientist has ever bothered to test them out?”

    What about the germ theory of disease, which was considered so ridiculous and implausible that for a long time no one wanted to waste time testing it? Lots of things that we now accept seemed impossible and silly before they were tested. What could be more implausible than the theory of relativity, for example? Scientific progress is full of ideas that seemed ludicrous before they were tested.

  29. AusShane says:

    Ah OzDigger another wonderfully cherry-picked and misrepresented nugget of propaganda. Did you actually read the literature to which the article so blithely refers? For a reasoned understanding and a more balanced view try here.

    http://www.srmhp.org/0201/media-watch.html

    But to summarise :

    “Therefore, contra to some of the media “hype” on this topic, antidepressant research confirms an empirically demonstrated drug-placebo difference, although careful examination of this literature reveals that this difference is not nearly as large as most individuals believe, or as many of the pharmaceutical companies would have the public believe. Currently, the methodological problems with antidepressant trials preclude us from concluding definitively that the difference actually indicates specific biological effects of the drugs, as various nonspecific factors have not been adequately ruled out. Until these questions are answered, the media should understand that placebos can be double-edged swords, and that “expectancy” effects can result in harm as well as benefit. In a piece on this topic for the Guardian, a UK newspaper, Jerome Burne (2002) reports that many subjects in Leuchter’s trial (2002) relapsed and requested to be placed on the active medication after learning they were in the placebo arm. Vedantam’s Washington Post piece is similar to other articles on this topic that have appeared in the popular press recently, in that it occasionally betrays an imbalanced presentation of the evidence. The media should continue to follow this complicated debate and report on it responsibly, making certain not to overhype the “power” of placebo and, as a consequence, the “powerlessness” of antidepressants.”

    It would appear that a proportion of people with diagnosed MILD depression show a response to antidepressants that could be attributed to the placebo effect. For serious depression and other forms of psychosis the results show the medications are significantly more effective.

    So what should this tell us? Well, that we should not rely on a single authority to provide proof of effectiveness. That some medications show promise in clinical trials and do not always fulfil that promise in post-clinical studies, i.e. the real world.
    In other words, science based medicine is effective in proving what is clinically relevant and what is not. The fact that we perform rigorous and continuous challenges to established ideas and therapies allows us to continue to use what is effective and discard what is not. If something is shown to be not clinically useful, ideas will change and new approaches will be developed.
    That is the strength of the empirical method: it’s based on real world results. Sometimes we don’t like what those results tell us, but in the end as a community we have to accept reality and move on to something more effective or safer or wherever the evidence leads us.

    And finally, to quote you:
    “Dr. McDiarmid added, pointing out that over half of the people who respond to antidepressants do so because of the placebo effect.”

    As opposed to 100% of patients who take homeopathic preparations? When did the world of Homeopathy last challenge its preconceptions with real world evidence? When was the last time a therapy was abandoned or modified because of rational scrutiny of its effects?? Hell these things are not even tested for safety or efficacy BEFORE they are marketed, let alone followed up over the years.

    The very fact that science based medicine continues to challenge itself is the best way forward we have; it’s not perfect and never will be. At least we can say that we have good reasons to believe something works, or to stop using it when it’s proved it doesn’t live up to expectations. What say homeopathy???

  30. Harriet Hall says:

    wondering,

    “Scientific progress is full of ideas that seemed ludicrous before they were tested.” I’m wondering… would you recommend that we indiscriminately test every idea that comes along – first come, first served for research funds – or should we exercise some judgment in how to spend research dollars? And if you recommend using some judgment, what criteria would you suggest?

  31. HelenSan says:

    Wondering, I wanted to thank you for making an exceedingly good point eloquently.

    Scott: “therefore the fact that it is NOT so observed is strong evidence that the prior probability is exceedingly low.”

    The fact that X phenomenon has not been observed to date could also be because 1) no one has bothered looking, period, 2) no one has bothered looking with the right technology and instruments, 3) no one has bothered looking under the right conditions and in the right places, and/or 4) no one has bothered looking with the right operational definitions, research design, and methodology.

    That is how science makes brand new observations never made before, which as you know, happens all the time.

    It is only our personal investments in current paradigms that assume X is implausible or improbable because X does not fit well in said paradigms. But paradigms do shift. Anomalies considered implausible now may be explained by different premises and mechanisms we have yet to discover.

    If one wishes to ignore anomalies until such time, go right ahead. But attacking good experimental research on anomalies just because they suggest our current paradigms are not perfect–well, that might be construed as narrow-mindedness.

  32. wondering says:

    “should we exercise some judgment in how to spend research dollars?”
    It would obviously be ridiculous not to exercise judgement about what ideas and treatments are plausible enough to deserve funding. Theories that have been tested and convincingly discredited can usually be ruled out, unless the researcher provides good reasons for re-testing them in different ways. Theories that have not been well-tested but seem hare-brained are usually discarded.

    But that’s when it becomes subjective. Different ideological groups and subcultures have different ideas about what is or is not plausible, and that’s where politics inevitably gets involved. Science can never live up to its ideal of seeking truth free of authoritarian constraints, because science needs money.

    So we have CAM versus mainstream science, for example, fighting over resources. I don’t see how it could be otherwise.

  33. A. Noyd says:

    HelenSan: “But attacking good experimental research on anomalies just because they suggest our current paradigms are not perfect–well, that might be construed as narrow-mindedness.”

    wondering: “So we have CAM versus mainstream science, for example, fighting over resources.”

    You mean CAM versus science. There is no alternative science that CAM uses.

  34. Joe says:

    @ wondering on 26 Sep 2009 at 9:11 am “So we have CAM versus mainstream science, for example, fighting over resources.”

    That only makes sense if one subscribes to the idiotic (new-age, post-modern) notion that every idea is equally plausible. The NCCAM was established because sCAM notions cannot, factually, compete with scientific ideas.

    It is not surprising that you are wondering; the world must be a confusing place when everything is mysterious and possible.

  35. A. Noyd says:

    Gah, my reply to HelenSan got eaten. Trying again:

    And post-modernism makes people 38% stupider but 74% more unaware of it. Perhaps we should research the anomaly of low intelligence and obliviousness among people who throw about terms like “current paradigms” or who suppose that “the right technology and instruments” will reveal water gains magical powers when you shake and dilute it a few thousand times.

    The problem you are so blithely overlooking is this–even if something like homeopathy is plausible under “some other paradigm,” its proponents are claiming that they can tell it’s effective right here and now. You cannot say that a treatment can both be observed to have an effect on the body and that it cannot be tested for scientifically via controlled observation.

    If research shows, without exception, that treatments like homeopathy, reiki and chiropractic, which work by implausible mechanisms, do not have real world effect beyond placebo (which they do not), then it’s not clinging to any “paradigm” to expect the pattern to extend to other similarly implausible treatments. Furthermore, if these “anomalies” you speak of are regularly amplified the more poorly controlled the research (which they are), that would indicate there is some effect of our psychology at work, and not some effect of the treatment.

  36. pmoran says:

    “It is only our personal investments in current paradigms that assume X is implausible or improbable because X does not fit well in said paradigms.”

    Occasionally this might be (approximately) true. Yet I suggest that among the sciences, medicine is peculiarly prone to throw up what I have referred to as “inherently silly” alternative theories of illness. No other branch of human knowledge is so afflicted.

    Have you never asked yourself why this should be so — and wondered about the likely reasons for this profusion of opposing theories?

    Even if you haven’t, and wish to cling to the notion that some of the present crop might be true, you surely understand that they cannot ALL be true, except in the trivial but very explanatory sense that they all evoke placebo and other non-specific positive responses to medical interactions.

    You surely also see why there are sound reasons why physicians should have a high level of skepticism about “alternative” ideas, especially those that conform to a usual stereotype as often outlined on this blog (a feature of which is too much reliance upon “it is only your biases that prevent you accepting our truths” rather than seriously trying to firm up the evidence).

    The issue in my mind is not whether or not these methods can serve as medicine in many senses — they clearly can. You are posing a different question with your appeals for more consideration/research, i.e. “do they have anything useful to contribute to medical knowledge?” The right answer is still “almost certainly not”, in some instances despite hundreds of years of trying.

  37. wondering says:

    [physicians should have a high level of skepticism about “alternative” ideas]

    Yes they should, but they should also be skeptical about ideas that have been accepted by mainstream medicine. How skeptical were physicians when they prescribed HRT for women for decades, although it had not been thoroughly tested for safety or effectiveness? The reason medicine is prone to this kind of mistake is that patients are desperate for cures and treatments, and MDs would like to provide them. But in so many cases, nothing is available.

  38. Joe says:

    wondering on 27 Sep 2009 at 8:49 am “How skeptical were physicians when they prescribed HRT for women for decades …”

    I guess they were skeptical enough to conduct a large study which suggested limiting the use of HRT.

  39. wondering says:

    “I guess they were skeptical enough to conduct a large study which suggested limiting the use of HRT.”

    That was after prescribing it to millions of women for decades.

  40. Harriet Hall says:

    “That was after prescribing it to millions of women for decades.”

    It had been tested for safety and effectiveness. It was prescribed because it worked for menopausal symptoms – better than any other treatment. Because it reduced the risk of osteoporosis. Because early studies indicated a possible benefit in cardiovascular disease. We knew there was a tradeoff between risks and benefits, and the recent large studies just shifted the balance further toward the risk side, leading to a change in practice. Incidentally, HRT does not increase the overall death rate: it increases the risk of some conditions and protects against others.

    The lesson to be learned here is that science is self-correcting. Alternative remedies have not been given to millions of people for decades. They might turn out to have unexpected risks just like HRT. Alternative medicine does not do the kind of surveillance needed to pick up small risks. Alternative medicine has never said treatment X causes more harm than good so we’ll stop using it.

    Scientific medicine could be described as organized skepticism. Alternative medicine is practically the opposite of that.

  41. A. Noyd says:

    -pmoran: “physicians should have a high level of skepticism about “alternative” ideas”

    -wondering: “Yes they should, but they should also be skeptical about ideas that have been accepted by mainstream medicine. How skeptical were physicians when they prescribed HRT for women for decades, although it had not been thoroughly tested for safety or effectiveness?”

    What a stupefying bit of equivocation you’ve gotten up to here in your juxtaposition, wondering. Watching out for complications in established science-based medicine is nowhere close to being on the same level as maintaining skepticism towards wholly implausible ideas.

  42. HelenSan says:

    A. Noyd: “The problem you are so blithely overlooking is this–even if something like homeopathy is plausible under “some other paradigm,” its proponents are claiming that they can tell it’s effective right here and now. You cannot say that a treatment can both be observed to have an effect on the body and that it cannot be tested for scientifically via controlled observation.”

    All right, let’s look at homeopathy as an example of CAM.

    The “effectiveness” they claim uses the same operational definition that mainstream medicine uses: a close temporal correlation between the intervention and an observed positive outcome. While mainstream medicine attributes this correlation to widely accepted pharmacological mechanisms in the biochemical paradigm, homeopathy has no mechanism to offer as an explanation. That is the main difference: known mechanism vs unknown mechanism. The magnitude of the temporal correlations depends on study methodology and design.

    When something is unknown, it is natural to offer other known mechanisms to explain the observations, such as the placebo effect. But what is wrong with the placebo effect? If one obtains the positive outcome desired for pennies on the conventional medicine dollar, why not use something cheap that can reliably produce the placebo?

    And why preclude the possibility that future technology and more advanced instrumentation might reveal the heretofore unknown mechanism that would explain the “effectiveness” of homeopathy–either in addition to the placebo effect or instead of the placebo effect?

    Weren’t tiny invisible creatures that go from person to person causing diseases considered ludicrous and implausible once upon a time–until microscopes were invented?

  43. HelenSan says:

    Wondering: “Science can never live up to its ideal of seeking truth free of authoritarian constraints, because science needs money.”

    I think one step in the right direction would be to blind researchers to funding sources, and to make funding blindness the norm in science. (Michael Crichton proposed this for all politicized research, and I agree.)

  44. HelenSan says:

    A. Noyd: “Watching out for complications in established science-based medicine is nowhere close to being on the same level as maintaining skepticism towards wholly implausible ideas.”

    I agree with Wondering. Skepticism needs to be objectively applied to both popular and unpopular ideas, to results that we like as well as results we don’t like. Isn’t that what science is about? Objectivity?

    What is considered “science” is subjective. Most studies (short of RCT’s) published in medical journals sound like a load of drivel to me. You call it “science,” I call it pseudoscience that doesn’t meet any of the rigors and integrity of true scientific research, but goes through the motions and appearances. What’s that phrase Richard Feynman used? Cargo cult science?

    So people who support medical cargo cult science calling CAM RCT’s “implausible” is, at best, like the pot calling the kettle black. At worst, it’s a disingenuous use of double standards.

    The same, objectively defined standards for what science should be must be applied to all research. Whether one likes the ideology or implications or not.

    Here is an example of medical research accepted by the mainstream that doesn’t meet my standards of good, true science.

    http://freedom2question.blogspot.com/2009/05/what-is-pseudoscience.html

  45. Scott says:

    Scott: “therefore the fact that it is NOT so observed is strong evidence that the prior probability is exceedingly low.”
    The fact that X phenomenon has not been observed to date could also be because 1) no one has bothered looking, period, 2) no one has bothered looking with the right technology and instruments, 3) no one has bothered looking under the right conditions and in the right places, and/or 4) no one has bothered looking with the right operational definitions, research design, and methodology.

    Good grief. Learn to read before you post.

    “Such an interaction would be readily observed in other experiments (e.g. those measuring the parameters of the Standard Model); therefore the fact that it is NOT so observed is strong evidence that the prior probability is exceedingly low.”

    Either you’re a complete idiot, or your entire post was a deliberate lie. Which one?

  46. weing says:

    “Skepticism needs to be objectively applied to both popular and unpopular ideas, to results that we like as well as results we don’t like. Isn’t that what science is about? Objectivity?”

    That is utter nonsense. Popularity has nothing to do with validity of an idea. Equal skepticism of results that we like and don’t like? We should be especially skeptical of results that we like.

  47. A. Noyd says:

    HelenSan: “That is the main difference: known mechanism vs unknown mechanism. The magnitude of the temporal correlations depends on study methodology and design.”

    No, the main difference is that science-based medicine can demonstrate via strictly controlled observation that its medicines and treatments are more than correlations and are effective beyond placebo. Homeopathy, like all CAM treatments with implausible mechanisms, cannot.

    “When something is unknown, it is natural to offer other known mechanisms to explain the observations, such as the placebo effect.”

    Do you even understand what the placebo effect is? If people getting fake homeopathy show the same level of relief as people getting “real” homeopathy, then the mechanism at work is indistinguishable from suggestion (or more likely is suggestion). Homeopathy thus is a placebo.

    “But what is wrong with the placebo effect? If one obtains the positive outcome desired for pennies on the conventional medicine dollar, why not use something cheap that can reliably produce the placebo?”

    You have a point only if the “desired outcome” from taking a medication or receiving treatment is always superficial relief of symptoms. There might be a place for placebos, but good luck finding a way to use them ethically since you have to lie to people to make them work. And if a condition calls for a medicine/treatment that does more than this, we’re SOL if we turn to things like homeopathy. Not so for SBM, despite its risks.

    “And why preclude the possibility that future technology and more advanced instrumentation might reveal the heretofore unknown mechanism that would explain the “effectiveness” of homeopathy…”

    WHAT effectiveness? Either it does more than placebo or it doesn’t! If it does, we can measure that in the here and now, regardless of our “paradigms.”

    “Skepticism needs to be objectively applied to both popular and unpopular ideas, results that we like with as well as results we don’t like. Isn’t that what science is about? Objectivity?”

    We’re not talking about popular vs. unpopular. We’re talking about medicine that is supported by plausible mechanisms and valid research vs. quackery that relies on implausible mechanisms and terrible research. My point to wondering is that skepticism over the safety of things that have a genuine more-than-placebo effect like HRT and skepticism towards things that have no plausible mechanism are two completely different things. Yes, both are necessary, but he’s implying they’re the same sort of thing. It’s like saying one must be equally skeptical about the reliability of vacuum cleaners and the existence of brownies who use magic to clean your house if you leave biscuits out for them.

    “What is considered “science” is subjective.”

    Only by people who have had all their intelligence milked out by the stupidity of post-modern relativism. Science strives to minimize bias and eliminate circular reasoning. It works because it doesn’t merely support the things we want to believe in. You seem to have a warped understanding of science, either because you don’t understand it or you feel your beliefs are threatened by it, or both.

    “So people who support medical cargo cult science calling CAM RCT’s “implausible” is, at best, like the pot calling the kettle black. At worst, it’s a disingenuous use of double standards.”

    When CAM can coherently explain the mechanisms it supposedly operates by, accepts the results of unbiased tests of those claims, and shows its treatments have an effect beyond placebo, we’ll talk.

    “The same, objectively defined standards for what science should be must be applied to all research.”

    What objectively defined standards? Spit them out. I want to see what standards of science you use that excludes science-based medicine from being scientific.

  48. Charon says:

    wondering: as Scott and others have pointed out, the whole point of SBM is that it draws on knowledge gained from other disciplines. Physics is now complete as far as phenomena a human could experience (breakdowns in our knowledge occur at the centers of black holes, the beginning of the universe, and energy scales far, far hotter than the center of the Sun). There are some proposed treatments that contradict the known laws of physics. The point is, if one postulates, e.g., a “qi” force strong enough to interact noticeably with humans, this idea can immediately be dismissed without an RCT. Because we actually understand physics, chemistry, etc. better than that.

  49. HelenSan says:

    A. Noyd: “What objectively defined standards? Spit them out. I want to see what standards of science you use that excludes science-based medicine from being scientific.”

    One principal feature of the scientific method is the use of well-controlled experiments. Medicine might even go through the motions of experimentation, but actually control for almost nothing at all. Sometimes having good controls is legitimately constrained by ethics. But other times, the only reason I can see for the absence of good controls is incompetence or duplicity.

    Either way, if one either cannot have good controls for ethical reasons or chooses not to have good controls for logistical or other reasons, the research cannot be called science. That is not to say the research doesn’t have value at all, but results from such research have to be very cautiously interpreted and even more cautiously generalized.

    Ethics is not a valid excuse for lowering the standards so that uncontrolled crap can pass for “science.” Ethics just means that some problems cannot be scientifically studied, period.

    For one example, I again refer to my previously linked article, “What is pseudoscience?”

    http://freedom2question.blogspot.com/2009/05/what-is-pseudoscience.html

    If you need other examples, give me the full text of any medical study you choose, and I will tell you what is wrong with it. But I’ll tell you how I do it, in case you care. I just read the study, write down the research design and structure, change the medical intervention to “homeopathy” or “pixie dust,” read it again, and voila–the flaws in the methodology and design come leaping out. It works like magic.

    See, you guys KNOW what is wrong with these studies, because you rightfully apply that knowledge to CAM research. But what I have discovered is that most SBM proponents have a double standard. The very same flaws in conventional medical research are overlooked with a very forgiving eye. Skepticism is rigorously applied to results you guys don’t like, but leniently applied to results you do like.

    All I am saying is skepticism should have no double standard, no preferential treatment for paradigm conformists.

  50. HelenSan says:

    Charon: “Physics is now complete as far as phenomena a human could experience … Because we actually understand physics, chemistry, etc. better than that.”

    That is an assumption that some physicists I know would not agree with.

    The assumption that we know almost everything there is to know, and that our paradigm will never shift, is not a healthy one for the advancement of science. It will cause a methodology that is meant to be continuously self-correcting and dynamic to stagnate.

  51. HelenSan says:

    Scott: “Such an interaction would be readily observed in other experiments (e.g. those measuring the parameters of the Standard Model); therefore the fact that it is NOT so observed is strong evidence that the prior probability is exceedingly low.

    Either you’re a complete idiot, or your entire post was a deliberate lie. Which one?”

    You know, that is called a forced-choice fallacy. :)

    I stand by my original statements. The assertion that “interactions” would be “readily observed in other experiments…measuring the parameters of the Standard Model” is only true if such interactions were looked for, looked for in the right time and place, and/or looked for with the right methodology and instrumentation. Etc.

    Incidentally, it would make the debate environment more pleasant if I didn’t have to wade through ad hominem attacks (which are also fallacious). Thank you.

  52. atomato says:

    Harriet Hall:
    “The lesson to be learned here is that science is self-correcting. Alternative remedies have not been given to millions of people for decades.”

    correct. some forms of “alternative” medicine (e.g. Chinese medicine, Ayurveda, Tibetan medicine) have been used for millennia.

    Harriet Hall: “Alternative medicine has never said treatment X causes more harm than good so we’ll stop using it.”

    incorrect. there are examples of substances in the Chinese pharmacopoeia whose use has been discontinued once practitioners discovered that their potential toxicity outweighed their therapeutic benefits.

    Harriet Hall: “Scientific medicine could be described as organized skepticism. Alternative medicine is practically the opposite of that.”

    there’s a difference between skepticism and dogma. unfortunately the meanings of these two concepts are often transposed.

    some forms of what you are calling alternative medicine and how it is practiced may be the opposite of organized skepticism, but you are generalizing. again, I draw your attention to the Chinese pharmacopoeia, a catalog of medicinal substances that has been consistently reviewed and revised for centuries, based on clinical efficacy and potential side effects. I realize that yours or another commenter’s definition of ‘clinical efficacy’ and how to measure this may currently be at odds with the metrics used by Chinese medical practitioners.

  53. Harriet Hall says:

    atomato,

    While millions may have used Chinese medicine, I question whether any single remedy has been used by “millions” for long periods the way HRT was.

    I am intrigued by your mention of Chinese substances being discontinued from their pharmacopoeia because of potential toxicity. Can you specify which ones and provide references? I have been looking for examples of treatments that alternative medicine gave up on, but so far have not had any luck.

    How were the remedies in Chinese medicine evaluated for clinical efficacy and potential side effects?

  54. HelenSan says:

    A. Noyd: “No, the main difference is that science-based medicine can demonstrate via strictly controlled observation that its medicines and treatments are more than correlations and are effective beyond placebo.”

    RCTs can demonstrate effectiveness beyond placebo controls, true. But how exactly do they show that the effectiveness is more than a statistically significant temporal correlation between intervention and desired outcome?

    ” Homeopathy, like all CAM treatments with implausible mechanisms, cannot.”

    But they have. A good number of homeopathy RCTs have shown statistically significant temporal correlations (“effectiveness”) OVER the placebo control group. The problem is, those studies have flaws and the results are rightfully criticized.

    My contention is that if SBM proponents use the same standards of criticism toward their own research, we might find less SB proof of effectiveness than previously believed.

    “Do you even understand what the placebo effect is? If people getting fake homeopathy show the same level of relief as people getting “real” homeopathy, then the mechanism at work is indistinguishable from suggestion (or more likely is suggestion). Homeopathy thus is a placebo.”

    Or homeopathy works as well as placebo. Elementary logic, my dear. If vanilla ice cream sales are the same as chocolate ice cream sales, it means vanilla and chocolate are equally popular flavors. It doesn’t mean vanilla and chocolate are the SAME flavor. See?

    “You have a point only if the “desired outcome” from taking a medication or receiving treatment is always superficial relief of symptoms.”

    A lot of conventional medicine works “superficially” as well. You have breast cancer, you cut out the lump. Three years later, you get it back, you cut out another lump. Same goes for headaches, and allergies, etc. If I can get that kind of superficiality without surgery or pharmacological risks, why not?

    “There might be a place for placebos, but good luck finding a way to use them ethically since you have to lie to people to make them work.”

    You don’t have to lie. You can honestly say that this intervention has been observed to be temporally correlated with desired positive outcomes in thousands of people.

    “And if a condition calls for a medicine/treatment that does more than this, we’re SOL if we turn to things like homeopathy. Not so for SBM, despite its risks.”

    You can always try placebo and/or homeopathy BEFORE conventional medicine. If it doesn’t yield the desired outcome, most conditions have plenty of time to try more risky interventions. Hell, it takes that long to get a doctor’s appointment nowadays. While you’re waiting for your appt to come up, why not try something very cheap and possibly effective? If it doesn’t work, you can always go to your appointment. :)

    “WHAT effectiveness? Either it does more than placebo or it doesn’t! If it does, we can measure that in the here and now, regardless of our “paradigms.””

    Placebos are effective too, you know. If my migraine disappears because of the placebo effect, guess what? The migraine is still gone. :) So desired outcomes do not necessarily have to work BETTER than a placebo.

    Now the question of why there are not more RCTs demonstrating a higher correlation between intervention and positive outcome in homeopathy is a legitimate criticism. Homeopaths would say that the outcome is more dependent on the skill of the practitioner than conventional medicine. A good RCT design would use the same practitioner to control for skill. But I have not seen that kind of study to date.

    However, that is beside the point Wondering was making. IF a well controlled RCT with relatively few flaws were to demonstrate positive results for homeopathy, then those results should be acknowledged regardless of its “implausibility.” In science, one proceeds with replication using the same exact design and definitions before widespread acceptance. But to dismiss it from the get-go just because of the content, well, that betrays the objectivity of science.

    “We’re not talking about popular vs. unpopular. We’re talking about medicine that is supported by plausible mechanisms and valid research vs. quackery that relies on implausible mechanisms and terrible research. ”

    And what is considered “plausible” and “valid” is a function of its popularity.

    “My point to wondering is that skepticism over the safety of things that have a genuine more-than-placebo effect like HRT and skepticism towards things that have no plausible mechanism are two completely different things. Yes, both are necessary, but he’s implying they’re the same sort of thing. ”

    He’s implying skepticism in both questions requires the same standards.

    “It’s like saying one must be equally skeptical about the reliability of vacuum cleaners and the existence of brownies who use magic to clean your house if you leave biscuits out for them.”

    If you leave biscuits out and wake up to a clean house, then yes, the hypothesis that house-cleaning brownies can be beckoned with biscuits is worth testing. It doesn’t mean they exist, but just because one finds the idea ludicrous is not scientific proof that they don’t. Science will test it out methodically.

    And let’s say, it turns out it’s not brownies but a friendly neighbor who smells the biscuits and sneaks in to eat them, then cleans your house out of guilt for B&E. Then there is the question, if it works, why not? Does it really matter, practically speaking, if you call the house-cleaner “brownie” or “Betty”?

    “Science strives to minimize bias and eliminate circular reasoning.”

    Well, that’s my point exactly. It eliminates bias by applying the same standards to everything, not just things we like. That is why I say skepticism cannot have a double standard.

    “When CAM can coherently explain the mechanisms it supposedly operates by, accepts the results of unbiased tests of those claims, and shows its treatments have an effect beyond placebo, we’ll talk.”

    1) Absence of a known mechanism does not invalidate effects. Those are two separate things. You can use a computer without knowing exactly how it works. Many psychotropic meds work without anyone understanding exactly what they do.

    2) Agreed–so long as tests of those claims are unbiased and attempt to replicate the original finding using identical definitions, design, and methodology.

    3) Again, what is wrong with as much effect as placebo? Let’s say Medication X relieves migraines for 6 months before they return. Homeopathy remedy Y relieves migraines for 2 months before they return, and placebo relieves migraines for 2 months as well. OK, Medication X is more effective than either placebo or Remedy Y. But it also carries more risks. So why shouldn’t a consumer say, “You know what? I’m going to take Remedy Y every 2 months. Sure I have to take more of it than Med X, but it’s still cheaper in the long run, not to mention safer. What do I care as long as the migraines go away?”

    I don’t want to sidetrack into defending CAMs. My point is that science needs to be objective about significant effects, whether we understand them or not, whether their mechanisms are known or not, whether they work psychologically or not. An effect is an effect, and science shouldn’t care from whence it comes.

  55. pmoran says:

    “The very same flaws in conventional medical research are overlooked with a very forgiving eye.”

    Again sometimes correct, but still shy of the point in relation to plausibility and its influence upon how much weight we apportion to different kinds of evidence.

    There is usually no plausibility problem with pharmaceuticals and medical procedures, and to a lesser extent, with herbs — certainly nothing near that applying to medical systems that rely on undemonstrable, entirely speculative processes or forces and that look like placebo in all other respects.

    Our awareness of the innumerable possible flaws within the design, conduct, integrity, and interpretation of certain types of clinical studies makes it very difficult for highly implausible ideas to validate themselves SOLELY via that means.

    Plausibility simply means taking into account ALL the relevant evidence.

  56. Harriet Hall says:

    HelenSan disagreed with Charon’s statement that “Physics is now complete as far as phenomena a human could experience.”

    She said “That is an assumption that some physicists I know would not agree with.”
    I had the same initial reaction when I read “physics is now complete” but then I realized the statement was qualified by limiting it to phenomena on a human level. I think that with that qualification, it is essentially true. We understand mechanics, optics, etc. fully enough to use the principles to predict accurately and use the knowledge effectively.

    “The assumption that we know almost everything there is to know, and that our paradigm will never shift is not a healthy one for the advancement of science.”

    No one is assuming any such thing.

  57. weing says:

    So far the only effective traditional Chinese medicines that spring to mind, and I admit limited knowledge, are bear bile for gallstones and semen for peptic ulcers. You are welcome to use them of course. Herbals have been used in western medicine as well. I am old enough to remember learning Galenicals in medical school in Europe. A lot of our current medications have been derived from plants. The problem with herbals has been lack of quality control, contamination, inconsistent potency, due to season, growing conditions, etc. They did learn early on that you, for example, don’t drink hemlock extracts, or eat Amanita phalloides mushrooms. As far as I am concerned, our therapeutics, developed through scientific testing and knowledge of pharmacology, physiology, etc, are more consistently effective and superior to those that you mention and are the rightful heirs of the old standbys. Should we come upon hard times in the future, we may very well have to fall back on the herbals again.

  58. pmoran says:

    “The assertion that “interactions” would be “readily observed in other experiments…measuring the parameters of the Standard Model” is only true if such interactions were looked for, looked for in the right time and place, and/or looked for with the right methodology and instrumentation. Etc.”

    Now ask yourself “But, but — if these processes or forces have never been directly observed, why do we think they may exist? A valid scientific hypothesis has to explain something.”

    Of course, it all goes back to people claiming to be healed through the application of the undemonstrable technology.

    Yet you have just now explained why proper controls are needed in RCTs, so you already understand what a shaky basis this is for a novel therapeutic hypothesis — unless, of course, consistent, unmistakable, replicable, objective effects of some kind can be demonstrated.

    This is the way out of what you mistakenly regard as an impasse unjustly created by biased skeptics. Without this, AM methods still look like different flavours of placebo medicine. Not that that is necessarily all bad.

  59. A. Noyd says:

    HelenSan: “One principal feature of the scientific method is the use of well-controlled experiments.”

    Can you explain “well-controlled” in such a way that doesn’t amount to “I’ll know it when I see it”? Because you have spelled out precisely nothing here, merely reiterated your opinion that SBM research is not rigorous. What I don’t know is if you have an understanding of real objective standards and, frankly, your nods to post-modern relativism make me doubt it very much.

    “But how exactly do they show that the effectiveness is more than a statistically significant temporal correlation between intervention and desired outcome?”

    Are you trying to ask if I think the beyond-a-placebo effects that show up in RCTs could be some sort of massive coincidence?

    “A lot of conventional medicine works “superficially” as well.”

    May I ask if you think any medicine or treatment has an actual effect beyond placebo (whether SBM or CAM)? If so, please give a few examples.

    “You don’t have to lie. You can honestly say that this intervention has been observed to be temporally correlated with desired positive outcomes in thousands of people.”

    If the patient believes that the “desired positive outcome” is something more than the placebo effect, then you are lying. And when you’re not outright lying, you’re denying your patient informed consent and playing on his/her misunderstanding of the difference between correlation and causation. That is ethically reprehensible.

    “Absence of a known mechanism does not invalidate effects.”

    My criteria were in response to this: “So people who support medical cargo cult science calling CAM RCT’s “implausible” is, at best, like the pot calling the kettle black.” The context was relative plausibility and now you’re talking effects. You seem to have a habit of shifting the context of our discussion in order to reply to my statements. For this reason, I’m ignoring most of what you said to me. Please don’t interpret this as my inability to answer your manifold responses, I merely feel it’s a waste of my time to counter responses that don’t actually address my point and it’s tiresome to correct you every time you shift the context.

  60. Peter Lipson says:

    HelenSan: “Or homeopathy works as well as placebo. Elementary logic, my dear. If vanilla ice cream sales are the same as chocolate ice cream sales, it means vanilla and chocolate are equally popular flavors. It doesn’t mean vanilla and chocolate are the SAME flavor. See?”

    No. That’s idiotic. Burningly stupid. It’s a conflagration of dumb. If something is no better than placebo, that doesn’t mean it’s “as good as” placebo, but that it’s as good as never having used it in the first place.

  61. HelenSan says:

    Harriet Hall: “I had the same initial reaction when I read “physics is now complete” but then I realized the statement was qualified by limiting it to phenomena on a human level. I think that with that qualification, it is essentially true. ”

    And the physicists I know would disagree even with that qualification. Maybe especially with that qualification. I contend we know less about how physical phenomena interact with the human body than we know about how chemical phenomena interact with the human body.

    “No one is assuming any such thing.”

    Anytime someone says “physics is now complete,” I don’t see how this assumption (that we know almost everything there is to know, and that our paradigm will never shift) is not the underlying premise for such a statement.

  62. HelenSan says:

    Peter Lipson: “No. That’s idiotic. Burningly stupid. It’s a conflagration of dumb. If something is no better than placebo, that doesn’t mean it’s “as good as” placebo, but that it’s as good as never having used it in the first place.”

    First of all, wow. Insult much?

    “As good as never having used it in the first place” implies absence of intervention. But that is not at all true.

    Why use a placebo control at all? Because researchers want to make sure the effects observed can be attributed to the pharmacological action and not a psychological one. Because psychological action is well documented in producing a positive effect mimicking pharmacological results. Because the placebo effect is an effect. It is NOT a baseline of no intervention at all.

    Let me ask you this. What do you think you would find if RCTs had 2 control groups? One placebo control group and one do-nothing-at-all group? Would you expect a higher positive outcome in the placebo group than the do-nothing group? Of course you would. That is why the placebo effect is called a placebo *effect.*

    Now if a CAM remedy can produce the same positive effect, however small, of a placebo, it is better than doing nothing at all. It would be as good as a placebo. Not as bad as doing nothing at all. See?
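
    To make that concrete, here is a minimal simulation of the three-arm design I am proposing (every effect size is invented purely for illustration):

        import random

        random.seed(1)

        def improved(natural=0.20, placebo_boost=0.0, drug_boost=0.0):
            # Chance that one patient reports improvement (invented rates).
            return random.random() < natural + placebo_boost + drug_boost

        n = 2000
        do_nothing = sum(improved() for _ in range(n)) / n
        placebo    = sum(improved(placebo_boost=0.10) for _ in range(n)) / n
        drug       = sum(improved(placebo_boost=0.10, drug_boost=0.15) for _ in range(n)) / n

        print(do_nothing, placebo, drug)   # roughly 0.20, 0.30, 0.45
        # placebo - do_nothing estimates the placebo effect itself;
        # drug - placebo estimates the intrinsic effect of the drug.

    Without the do-nothing arm, the size of the placebo effect itself is invisible.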

  63. Mark Crislip says:

    “Let me ask you this. What do you think you would find if RCTs had 2 control groups? One placebo control group and one do-nothing-at-all group? Would you expect a higher positive outcome in the placebo group than the do-nothing group? Of course you would. That is why the placebo effect is called a placebo *effect.*”

    I would not expect a better outcome in the placebo group for any objective measurements:

    “Is the Placebo Powerless? An Analysis of Clinical Trials Comparing Placebo with No Treatment.” N Engl J Med, Vol. 344, No. 21, May 24, 2001

    “We found little evidence in general that placebos had powerful clinical effects. Although placebos had no significant effects on objective or binary outcomes, they had possible small benefits in studies with continuous subjective outcomes and for the treatment of pain. Outside the setting of clinical trials, there is no justification for the use of placebos.”

  64. HelenSan says:

    A. Noyd: “Can you explain “well-controlled” in such a way that doesn’t amount to “I’ll know it when I see it”?”

    Sure thing. Well-controlled means you control for most, if not all, the major confounding variables, not just some of them. At least control for the confounding variables that would render the results completely meaningless.

    My article at my blog, which I have linked to twice, but which you obviously have not read, outlines some of these major confounders and flaws. Changing definitions of dependent or independent variables mid-study. Murky definitions. Excluding samples for murky and ill-defined reasons. Not getting a baseline–a real do-nothing baseline instead of treating the placebo group as one. Controlling with pharmacologically active substances rather than inert ones. Statistical prestidigitation such as relative risk or adjusted values or person-years, which are easily massaged to yield desirable numbers. Or starting with an odds ratio design but reporting a relative risk statistic, or vice versa (see the toy numbers below). Etc.
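
    To illustrate just the odds-ratio/relative-risk sleight, with invented numbers:

        # Invented 2x2 table: 40/100 events on treatment, 60/100 on control.
        treated_events, treated_n = 40, 100
        control_events, control_n = 60, 100

        rr = (treated_events / treated_n) / (control_events / control_n)

        odds_treated = treated_events / (treated_n - treated_events)   # 40/60
        odds_control = control_events / (control_n - control_events)   # 60/40
        odds_ratio = odds_treated / odds_control

        print(round(rr, 2), round(odds_ratio, 2))   # 0.67 vs 0.44

    When the outcome is common, the odds ratio (0.44) looks far more impressive than the relative risk (0.67). Report one in place of the other and you have changed the story.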

    Of course, major confounders in each study would be different, so it is hard to list confounders in general. If you really want to know what I mean about confounders, let’s take a study you think is good science, and I’ll rip it apart for you. Then you’ll see exactly what I mean by absence of rigor.

    “What I don’t know is if you have an understanding of real objective standards and, frankly, your nods to post-modern relativism make me doubt it very much.”

    I apologize if anything I said led you to believe that I subscribe to any type of relativism. In fact, it is very much the opposite. I have very definite and specific ideas about scientific rigor that medical research doesn’t meet. When I say what counts as “science” is in the mind of the beholder, I mean you look at the NEJM and see “science.” I look at the NEJM and see junk science. I am not saying your view is as right as mine–which would be relativism. I am saying your identification of such publications as science is wrong, and my identification of such publications as crap is right–which is clearly not relativism. I hope I have made that point clear.

    “Are you trying to ask if I think the beyond-a-placebo effects that show up in RCTs could be some sort of massive coincidence?”

    Correlation does not mean causation, but it does not mean massive coincidence either.

    I am saying proof of effectiveness is not equal to proof of causation. Two different things.

    Unless the RCT shows a near-100% effect in the study group, you can’t infer causation. All you can infer is that the study group has a higher correlation than the placebo group. Causation is a complicated process that requires a series of mechanistic studies, and RCTs are not designed to address that complexity.

    “If the patient believes that the “desired positive outcome” is something more than the placebo effect, then you are lying. ”

    Why do you have to make the patient believe that?

    Patient has lifelong migraines. You say, “I know a bunch of people have tried X, and right after they have tried it, their migraines disappeared. It might not work for you, but it’s cheap, it’s as safe as water, and it’s worth a try, right?”

    “And when you’re not outright lying, you’re denying your patient informed consent and playing on his/her misunderstanding of the difference between correlation and causation. That is ethically reprehensible.”

    Why deny your patient informed consent? Tell him it’s untested if you want. Tell him there are plenty of very well tested alternatives if you want.

    Here is the bottom line. Your patient with the migraine? He doesn’t care if it is correlation or causation. He just wants his migraines gone. And if it’s gone, everyone is happy. If it is not gone, he can try the next thing. What is ethically reprehensible about offering an alternative that is safe, cheap, and just might work?

    Psychiatrists do it all the time with psychotropic meds, for example. They don’t know exactly how those meds work. The mechanism is not understood. But they sell them anyway, because their use is correlated with certain desired outcomes.

    “The context was relative plausibility and now you’re talking effects. You seem to have a habit of shifting the context of our discussion in order to reply to my statements. ”

    I am saying, what does known vs. unknown mechanism matter if the patient gets better (effect)? Why does “plausibility” matter if the patient gets cured? Because it certainly doesn’t matter to the patient.

    As far as shifting context goes, it was not my intention. You bring up effects, I answer it. If you notice, I did wrap it all up in the plausibility context.

  65. HelenSan says:

    Mark Crislip: “I would not expect a better outcome in the placebo group for any objective measurements:”

    Interesting article. Thank you.

    So which is it? Is the placebo powerless? Or does it have the power to explain any positive effects of unknown mechanisms? If the placebo is powerless, then how would you explain it when CAM RCTs show positive effects over the control group?

  66. weing says:

    “So which is it? Is the placebo powerless? Or does it have the power to explain any positive effects of unknown mechanisms? If the placebo is powerless, then how would you explain it when CAM RCTs show positive effects over the control group?”

    Take 2 identical bottles of wine. Label one $5 and the other $100. The $100 bottle tastes better and is enjoyed more.

  67. pmoran says:

    HelenSan: “So which is it? Is the placebo powerless? Or does it have the power to explain any positive effects of unknown mechanisms?
    If the placebo is powerless, then how would you explain it when CAM RCTs show positive effects over the control group?”

    I think you are playing with us. You claim to be able to find terminal flaws in any paper published in the NEJM, yet you wonder how we might explain the occasional positive results with usually subjective outcomes using implausible CAM methods? Are you implying a better standard of research in CAM?

    WRT placebo, there is still much to learn. The evidence is consistent with a range of possibilities depending on the clinical setting. The Hrobjartsson et al. article that Mark referred you to cannot be taken as the last word, because the typical placebo-controlled RCTs upon which it is based seriously dampen placebo influences. For one thing the subjects are not expected to know if they are supposed to be getting better or not.

    We do expect placebos to have very limited effects upon objective aspects of disease, as Mark says.

  68. Scott says:

    HelenSan: “I stand by my original statements. The assertion that “interactions” would be “readily observed in other experiments…measuring the parameters of the Standard Model” is only true if such interactions were looked for, looked for in the right time and place, and/or looked for with the right methodology and instrumentation. Etc.”

    Utterly false. Such interactions would have direct effects on the parameters being measured, which would therefore not be as predicted.

    HelenSan: “Incidentally, it would make the debate environment more pleasant if I didn’t have to wade through ad hominem attacks (which are also fallacious). Thank you.”

    Can’t have a debate if you refuse to read.

  69. Peter Lipson says:

    Calling idiocy “idiocy” is not an ad hominem fallacy but a statement of truth. It may not be a pleasant truth if you are the recipient, but it is still truth. Or, as one guy says, “A statement of fact cannot be insolent”.

  70. HelenSan says:

    Peter Lipson: “Calling idiocy “idiocy” is not an ad hominem fallacy but a statement of truth.”

    Even if it is true, it furthers no argument of substance to the debate and adds to the decompensation of the debate environment. If this is the type of environment that you guys support, where dissent is subject to personal vituperation, I will happily bow out of here and leave you to your unchallenged rantfest.

  71. HelenSan says:

    pmoran: “I think you are playing with us. You claim to be able to find terminal flaws in any paper published in the NEJM, yet you wonder how we might explain the occasional positive results with usually subjective outcomes using implausible CAM methods? Are you implying a better standard of research in CAM?”

    Oh no! CAM research is just as bad if not worse. It is just that whenever a positive result is reported in CAM, the first thing I hear is “That’s just placebo.” So I am just wondering, if placebo is powerless, how did it have the power to produce all those positive results for which placebo was assigned credit?

    Now if you hadn’t said “That’s just placebo,” and had explained those positive results with only methodological flaws to begin with, I wouldn’t be asking the question. It just seems inconsistent to me to attribute results to the power of placebo, then turn around and say the placebo is powerless.

  72. weing says:

    “It just seems inconsistent to me to attribute results to the power of placebo, then turn around and say the placebo is powerless.”

    Actually, some believe placebos are getting stronger, as in a recent Wired article and it was blogged here too.

  73. HelenSan says:

    pmoran: “Our awareness of the innumerable possible flaws within the design, conduct, integrity, and interpretation of certain types of clinical studies makes it very difficult for highly implausible ideas to validate themselves SOLELY via that means.

    Plausibility simply means taking into account ALL the relevant evidence.”

    Thank you for acknowledging the possibility of innumerable flaws. I appreciate that–gives us a little bit of common ground.

    Your definition of plausibility is more palatable to me. However, I still insist, to the patient, it is still irrelevant.

    Example. True story with identity changed, obviously. Mr. X had been having painful gallbladder attacks about 4 times a year for 1.5 years. The pain is excruciating and sometimes sends him to the ER for pain relief. The docs recommend gallbladder surgery but he can’t afford it. He tries homeopathy for 3 months, and lo and behold, his gallbladder attacks have stopped with no change in diet, going on 3 years and counting. Ultrasounds show he still has gallstones, but they don’t bother him anymore.

    Guess what? Mr. X loves homeopathy. He doesn’t care that the mechanism is unknown and is likely to be the placebo effect. He doesn’t care that homeopathy is implausible in context of all the relevant evidence to date. He just knows he doesn’t suffer excruciating pain anymore, his trial with homeopathy was time limited, and he doesn’t even have to watch how much fat he eats.

    If you ask him, he’ll just say, “Maybe they’ll figure out in the future how this works. Not knowing doesn’t take away from the fact that it worked for me. ”

    Maybe that is why I keep harping about effects.

    1) Why take away an alternative for the patient that can possibly relieve suffering–just because it doesn’t fit well into the current body of knowledge?

    2) And to bring it back to Wondering’s point, why not acknowledge the positive effects when they do show up, objectively and impartially, as you would even if exactly the same thing happened with conventional medicine?

    3) Why not allow that while it is currently implausible, it might not always be implausible as science advances in the future?

    4) Why must “implausibility” trump the observation of effect, and automatically invalidate Mr. X’s relief and the entire CAM discipline?

  74. weing says:

    Similar true story to Mr. X. Pain goes away without any treatment. Ultrasound reveals a thickened GB wall but he repeatedly declines surgery. After several years of no symptoms he develops jaundice, is found to have an inoperable cholangiocarcinoma, and dies from it.

  75. Harriet Hall says:

    HelenSan,

    “It worked for me” is not an accurate statement. The patient may be committing the post hoc ergo propter hoc fallacy. He can only say “It was temporally associated with my improvement.” The only way to tell if a treatment really worked is to rule out both placebo response and improvement due to the natural course of disease by doing well-designed controlled trials.

  76. A. Noyd says:

    @ HelenSan

    Before I reply, I would request you answer this question, which you left out of your reply: May I ask if you think any medicine or treatment has an actual effect beyond placebo (whether SBM or CAM)? If so, please give a few examples.

  77. HelenSan says:

    Harriet Hall: ““It worked for me” is not an accurate statement.”

    I agree. It isn’t. But what “work” means to scientists is different from what “work” means to patients. To them, “work” means temporal correlation with improvement. When they say the aspirin worked on my headache, don’t they simply mean to say, “The aspirin was temporally associated with improvement”?

    Definition dispute aside, the fact remains Mr. X is pain-free for 3 years now. Even if that is a placebo effect, why deprive him of it? Why deprive a patient of the option of a cheap, relatively risk-free trial to be relieved of suffering?

    Harriet Hall: “The only way to tell if a treatment really worked is to rule out both placebo response and improvement due to the natural course of disease by doing well-designed controlled trials.”

    I agree completely. Really, thank you.

    Unfortunately, such well designed controlled trials are rare even for conventional medicines. Show me a conventional medicine study that controls for both the placebo effect AND the natural course of the disease–and I’ll show you 99 others that control only for the placebo effect (if at all), and very poorly at that. And yet those treatments are pronounced to have “worked.”

  78. pmoran says:

    HelenSan: “Show me a conventional medicine study that controls for both the placebo effect AND the natural course of the disease–and I’ll show you 99 others that control only for the placebo effect (if at all), and very poorly at that.”

    I don’t understand. EVERY double-blind placebo controlled trial controls for both placebo responses and the natural course of the illness. Such influences should be equal in each arm of the study and cancel each other out, leaving only any intrinsic effect of the treatment that is being tested.

    What kind of error do you claim is prevalent within these studies?

    If you really mean that such studies cannot tell us anything about the strength and frequency of placebo responses under the conditions of the study, then I am with you.

  79. weing says:

    The study you refer to on your website is an epidemiologic study. The shortcomings are obvious when you compare it to a double blind placebo controlled trial. Unfortunately for your desires, parents are not too willing to engage their children in such trials and you are left with an imperfect study. I agree that it is rare to find a jewel of a study in a medical journal, but finding even a study of the type you criticize on your website in the CAM literature is like finding a palm tree growing in Antarctica.

  80. pmoran says:

    HelenSan:

    “Maybe that is why I keep harping about effects.

    1) Why take away an alternative for the patient that can possibly relieve suffering–just because it doesn’t fit well into the current body of knowledge?

    2) And to bring it back to Wondering’s point, why not acknowledge the positive effects when they do show up, objectively and impartially, as you would even if exactly the same thing happened with conventional medicine?

    3) Why not allow that while it is currently implausible, it might not always be implausible as science advances in the future?

    4) Why must “implausibility” trump the observation of effect, and automatically invalidate Mr. X’s relief and the entire CAM discipline?”

    PM> We are at the nub of it. The (presumed) best illustration you could think of for a CAM “effect” was a patient whose attacks of gallbladder pain stopped after many years and after taking a homeopathic remedy.

    As a one-time gallbladder surgeon, I can offer several reasons why the attacks stopped when they did.

    Gallstones are naturally erratic in their behaviour and this man might have another attack tomorrow.

    The stones may have grown too big to travel and get impacted in the cystic duct or bile duct. The gallbladder may have become fibrosed and thickened from chronic inflammation and unable to contract. Rarely small stones may empty out, and he may not yet have grown a new crop — even this rare event is more likely than that any one of several required processes of homeopathy is true. He is probably now very afraid of greasy foods and avoiding those may be contributing to fewer attacks.

    Implausibility is not usually merely a matter of incompatibility with “existing paradigms”; there are nearly always other more likely explanations for the effects being claimed — often placebo responses, but in this case natural variation in the progress of the condition.

    In Harriet’s recent Powerpoint presentation on the mechanics of science she included an interesting summary of the factors that need to be considered when making causal judgments. I can’t locate it on the blog at the moment, but she may post a link herself.

  81. Harriet Hall says:

    HelenSan said “Mr. X is pain-free for 3 years now. Even if that is a placebo effect, why deprive him from it? Why deprive a patient from the option of a cheap, relatively risk-free trial to be relieved of suffering?”

    We have no way of knowing whether Mr. X was pain-free because of a placebo effect from the treatment or because he would have been pain-free anyway if he had had no treatment.

    I have no objection to patients trying any treatment, however absurd. What I object to is providers lying to them, telling them something is effective when they have no good evidence that it is.

  82. HelenSan says:

    A. Noyd: “Before I reply, I would request you answer this question, which you left out of your reply: May I ask if you think any medicine or treatment has an actual effect beyond placebo (whether SBM or CAM)? If so, please give a few examples.”

    I’m sorry I overlooked that.

    There are plenty of drugs and CAM both that I “believe,” if you will, to have an effect beyond placebo. I have personally experienced some of these effects, from aspirin for a headache to slippery elm for a sore throat. I can only assume that millions of consumers of drugs like insulin or epidurals or CAM like herbs and acupuncture find them more effective than placebo as well.

    Now have I seen scientific proof that any of these drugs or CAM are more effective than placebo? No. The inherent flaws are great enough that I can’t be certain of the positive result they report any more than you can be certain of the positive results you read in CAM journals.

    First, they are often poorly written, so that the reader has insufficient information to judge the validity of the study. Maybe it really was a good study, but we can’t tell from the paper. For just one example, many papers do not operationally define their placebos. What exactly were those patients getting in their IVs or pills? Salt, sugar, tap water? Much of medical writing assumes the reader must trust the word of the researcher without any clear definitions. Some trials actually use pharmacologically active “placebos”!

    Second, they often report a lot of “adjusted” statistics and almost never a straight summary of the raw findings. Again, the reader must trust that the researcher adjusted everything correctly, without any details on what adjustments were made or how to check up on those adjustments. Now this take-my-word-for-it vagueness appears to be standard in medical writings, but if you read papers in chemistry, for example, they give you enough information (like what the raw findings were before adjustment) to independently verify their adjustments, if any.

    Third, the comparisons between study and control group often lack controls for, or even discussion of, significant confounders. Sure they usually match up age, sex, race, and important disease variables. But most of the time, they overlook variables such as socioeconomic status, marital status, family support, stressors, health insurance, etc. Randomization is supposed to take care of variables one doesn’t think of, but these variables should have been thought of. Some of these variables are probably more significant than either sex or race for many diseases.

    I can go on, but you get the point. Last, but not least, the study is rarely replicated. Sure, they do other studies on the same drug, but hardly ever using the same methodology or design. Replication of an exact methodology/design is an essential part of scientific methodology. But in medicine, one positive result often gets celebrity status and is hailed as “scientific” proof of effectiveness, and that’s it. Everyone then references that one study as cold, hard evidence. Real science, of course, is actually not so hasty.

    So, even if I see a well-controlled RCT, I would want to see it replicated several times exactly before I feel it met criteria to be called “scientific” proof. You would expect the same out of CAM research.

    I hope that answers your question.

  83. HelenSan says:

    PMoran: “I don’t understand. EVERY double-blind placebo controlled trial controls for both placebo responses and the natural course of the illness. Such influences should be equal in each arm of the study and cancel each other out, leaving only any intrinsic effect of the treatment that is being tested.”

    Well, a placebo controlled study controls only for the placebo effect. It doesn’t control for what the course of the disease would look like had it not received any intervention at all. A good RCT should have a control group that comes in for the baseline measures and doesn’t come in again until it’s time for the end-point measures. THAT would tell you what the natural progression of the disease is supposed to look like.

    I am saying you can’t assume that the placebo group is the same thing as a do-nothing group.

    “What kind of error do you claim is prevalent within these studies?”

    See my last response to A. Noyd.

    “If you really mean that such studies cannot tell us anything about the strength and frequency of placebo responses under the conditions of the study, then I am with you.”

    Yes, that too. Without a do-nothing baseline group, one has no idea what the placebo effect is, if it is even there.

  84. HelenSan says:

    PMoran and others,

    Just to be clear, I was not offering the anecdote of Mr. X as absolute proof of causation or effectiveness. Of course, it could be complete coincidence with the natural progression of the disease, or the placebo effect, or whatever.

    However, I don’t think the observation of a close temporal association between intervention and outcome should automatically be dismissed in favor of more mainstream explanations. It *could* be coincidence or placebo. But assuming that it absolutely *has to be* is another thing entirely. Jumping to the conclusion that the association cannot possibly be reflective of a true effect–well, that is not scientific.

    In my view, science would not be hasty to dismiss and invalidate observations just because they don’t fit with the current paradigm. Science would be open to pursuing further research to see if those observations can be replicated, and then to systematically test hypotheses regarding those observations. How many Mr. X’s are out there? Let’s run a well-controlled RCT on this. Let’s replicate that RCT. Let’s hypothesize about possible mechanisms and test those hypotheses. That is how science works–not “Pffft. Another useless coincidence and superstitious patient to make fun of.”

    And while science is painfully sorting it all out, I don’t see a problem with patients trying CAM out to see if something might “work” (be temporally associated with a good outcome) for them and relieve their suffering.

  85. pmoran says:

    “Well, a placebo controlled study controls only for the placebo effect.”

    Not in the least correct. The apparent “response” rate in the placebo control group of an RCT is the sum of a number of influences, at least these four —

    1. True placebo responses (if these even exist under the conditions of the trial — in many clinical contexts they will be zero or negligible).

    2. Spontaneous improvements — would have happened anyway.

    3. Biased reporting — subjects trying to give the nice doctors the right answer.

    4. Other measures the patients may have adopted of their own accord.

    If the study is designed and performed properly then these influences will cancel themselves out.
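
    As back-of-the-envelope arithmetic (every number invented purely for illustration):

        # Each influence contributes some points of "apparent improvement".
        placebo_response = 5
        natural_course   = 10
        biased_reporting = 3
        own_measures     = 2
        drug_effect      = 8   # the quantity the trial is designed to isolate

        control_arm = placebo_response + natural_course + biased_reporting + own_measures
        treated_arm = control_arm + drug_effect

        # The shared influences sit in both arms, so the difference
        # recovers the intrinsic effect of the drug:
        print(treated_arm - control_arm)   # 8

    Randomisation is what entitles us to assume the shared influences are, in expectation, the same size in each arm.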

    HelenSan: “A good RCT should have a control group that comes in for the baseline measures and doesn’t come in again until it’s time for the end-point measures. THAT would tell you what the natural progression of the disease is supposed to look like.”

    Not necessarily true. There will be a nocebo effect upon these people if they feel they are being denied treatment. Others may respond to the reassurance of being diagnosed and under medical observation.

    It is more difficult than you think to get an entirely uninfluenced group of patients, when dealing with subjective complaints.

    I think you are expecting too much of the most common kind of clinical study. It is designed to answer one single important question very precisely, i.e. “does this treatment possess intrinsic efficacy?”

    If you want to answer other questions relating to the natural history of illness or placebo responsiveness or its effectiveness in average medical practice then certainly a different study design will be needed.

    You seem to be criticising research for not answering questions it never attempted to answer.


  86. pmoran says:

    HelenSan: “And while science is painfully sorting it all out, I don’t see a problem with patients trying CAM out to see if something might “work” (be temporally associated with a good outcome) for them and relieve their suffering.”

    Within limits I agree. But not about Mr X. I wish him luck, but he is 99.9% certain to still have gallstones and severe gallbladder pathology. He is at risk of serious illness at any time. If he has gone for three years without symptoms then I would be loath to urge surgery upon him, but he should know he is walking a tightrope.

    I also cannot agree to extensive further research into homeopathy. It may have its place as a simple, safe placebo in societies that are accustomed to it. But it has had enough chances to show that it is anything more than placebo.

  87. A. Noyd says:

    I don’t think you’ve adequately answered two of my questions. I’ll try to explain why, as well as briefly point out where you abuse relativism.

    HelenSan: “I apologize if anything I said led you to believe that I subscribe to any type of relativism. In fact, it is very much the opposite.”

    Getting rid of the babble about “different paradigms” would be a good start, then. And don’t flip between “plausible” and “popular” with the excuse that “And what is considered ‘plausible’ and ‘valid’ is a function of its popularity.” If we both agree that we should use science to validate claims of effectiveness, then it ought to be understood we’re using science as a standard for plausibility. If you feel science cannot set a useful standard in this way and that the body of knowledge so far discovered can’t validly speak for the plausibility of things like homeopathy, then what are we using science for at all?

    Here is more relativism and more context shifting in your reply to pmoran: “Your definition of plausibility is more palatable to me. However, I still insist, to the patient, it is still irrelevant.” Was pmoran arguing about plausibility in the minds of patients? No. The context was testing and overall plausibility with regards to “all the relevant evidence.”

    “I am saying proof of effectiveness is not equal to proof of causation. Two different things.
    Unless the RCT shows an near 100% effect in the study group, you can’t infer causation.”

    This makes no sense at all. Proof of effectiveness requires causation, even if the mechanism is not what was expected or is not known. If X has effect Y, then it causes effect Y; it’s not just that X was correlated with effect Y. So when I talk about effectiveness, I am talking about causation. If an RCT failed to show causation then it also failed to prove effect.

    “Correlation does not mean causation, but it does not mean massive coincidence either.”

    Which doesn’t answer my question: Are you trying to ask if I think the beyond-a-placebo effects that show up in RCTs could be some sort of massive coincidence? If you’re not trying to ask this, what do you mean by “statistically significant temporal correlation between intervention and desired outcome” other than “coincidence”? Try giving a straight answer instead of being coy about correlation vs. causation. (This would be the first question.)

    “I have personally experienced some of these effects, from aspirin for a headache to slippery elm for a sore throat.”

    But you seem willing to dispute the causation on the part of aspirin. Do you or do you not think that aspirin had a causative effect beyond placebo on the relief of your headache?

    “I can only assume that millions of consumers of drugs like insulin or epidurals or CAM like herbs and acupuncture find them more effective than placebo as well.”

    I’m not asking about what you think seems to have an effect for other people in their own minds, I’m asking about what you think actually does have an effect (beyond placebo).

    “I hope that answers your question.”

    No, it really doesn’t. I want to know if you think any medications or treatments, whether CAM or SBM, have a causative effect (however redundant that seems to me, you apparently require it) beyond placebo. Not whether there are any that you “believe” have had an effect because that sort of answer leaves you weasel room under relativism where you could claim it wouldn’t have an effect for someone else operating under a “different paradigm” so it can’t truly be said to have an effect after all. Do you think there is any medicine or treatment that shows a causative effect beyond placebo where no one has jumped to conclusions? The proper answers are “yes,” “no,” or “I don’t know.” And then examples if the answer is yes. (This would be the second question.)

  88. HelenSan says:

    A. Noyd: “Getting rid of the babble about ‘different paradigms’ would be a good start, then.”

    I believe in an absolute truth. Reality exists independently of the human mind and objectively, yes. I also believe that we do not absolutely know that truth and reality right now. As we approach that absolute truth using the scientific method, we go through different paradigms. That is a fact in the history of science. It is not relativism. Medicine used to operate under religious and philosophical ones. Currently, it embraces the biochemical paradigm. Who knows what paradigm will come next that will reveal our current one to be inaccurate?

    So what you THINK is plausible now, may be completely wrong. Plausibility is an opinion. Human knowledge is too incomplete to decide such opinions can be free from error.

    “If X has effect Y, then it causes effect Y; it’s not just that X was correlated with effect Y. ”

    I defined “effectiveness” differently from the outset as a correlation. Nothing more. The RCTs are so poorly controlled they cannot demonstrate causation, only correlation. So I interpret all medical RCT results as having only the confidence and validity of a correlation. You think it shows causation because you believe the research design is valid. I don’t.

    “Try giving a straight answer instead of being coy about correlation vs. causation. ”

    You say coy. I say precise.

    “But you seem willing to dispute the causation on the part of aspirin. Do you or do you not think that aspirin had a causative effect beyond placebo on the relief of your headache?”

    I think it. I have seen no scientific proof of it. I think many things for which I have no scientific proof, you see. I think this one particular restaurant gave me food poisoning. But I will never have good, hard proof of it. I think reading to my kids early on gave them a love of books. But do I have scientific proof of that? No.

    “I want to know if you think any medications or treatments, whether CAM or SBM, have a causative effect (however redundant that seems to me, you apparently require it) beyond placebo. ”

    Whatever I think, I have seen no scientific proof of a causative effect beyond placebo in the extensive medical literature I have sampled.

    “Do you think there is any medicine or treatment that shows a causative effect beyond placebo where no one has jumped to conclusions? The proper answers are “yes,” “no,” or “I don’t know.””

    NO. Absolutely not. None of the RCTs I have encountered in my readings have sufficient scientific rigor to demonstrate causation.

  89. HelenSan says:

    Pmoran: “If the study is designed and performed properly then these influences will cancel themselves out.”

    I think you have identified one of our most fundamental differences on what constitutes scientific rigor. I see no methodology for these influences to “cancel” anything out. In scientific method, these influences are called confounders, and it is the researcher’s responsibility to design controls for ALL confounders. Confounders don’t just magically cancel themselves out.

    It is unique to medical research to rely on randomization, large sample sizes, and whatever magic mechanism you have identified to “cancel” out confounders–rather than tediously designing controls for each and every one of them, which is what scientists in other disciplines do. They don’t have magic confounder cancellation elves.

    “There will be a nocebo effect upon these people if they feel they are being denied treatment.”

    Then control for a nocebo effect. But regardless, a no-intervention control would reflect natural progression of a disease more than a placebo group would.

    “It is more difficult than you think to get an entirely uninfluenced group of patients, when dealing with subjective complaints.”

    My point is precisely that it is more difficult than you think. Which is why you need more than one control group. The confounders are too numerous to name, so at least control for the ones you can think of. Just because it is so difficult doesn’t mean it is ok to ignore all confounders but one.

    “I think you are expecting too much of the most commmon kind of clinical study. It is designed to answer one single important question very precisely i.e. “does this treatment possess intrinsic efficacy?”.”

    And I am saying without controlling for all the confounders that influence the measured outcome, we have no idea what the intrinsic efficacy is. For all we know, the differences between the study group and the placebo group could be because the placebo group was poorer, had less family support, and didn’t exercise. Now you can claim they weren’t poorer all you want, but you have no PROOF of it because it wasn’t even addressed or measured! A poorly controlled RCT relies solely on wishful thinking that both groups are exactly the same in every way that could influence the efficacy results.

    “If you want to answer other questions relating to the natural history of illness or placebo responsiveness or its effectiveness in average medical practice then certainly a different study design will be needed.”

    Then we agree at last. A different study design is needed! Thank you!!!!

    What was the first thing you guys all pointed out when I gave the example of Mr. X? How do we know it wasn’t placebo or a natural progression? You can’t make those questions essential for determining efficacy of CAM, but optional for determining efficacy of a study drug. Well, you can, but science wouldn’t approve of the double standard and preferential treatment and absence of objectivity.

    “You seem to be criticising research for not answering questions it never attempted to answer.”

    My criticism is precisely that they never attempted to answer those questions–especially when they demand those same answers from CAM.

    Answering these questions (and more) is essential to show that the efficacy can be attributed to the study drug, rather than to placebo or natural progression. Without those answers, without the proper controls for all the major and obvious confounders, you have no proof of efficacy.

    Medicine practices what I call “cookbook science.” You follow the recipe, put in the ingredients of randomization and placebo, pop it in the oven to bake, and out comes “Scientific Proof of Efficacy!” Medical schools do medical researchers a great disservice by not teaching proper scientific method, which is not a recipe but an understanding applied individually and carefully to every research question:

    “There is no science without control.”

    And the corollary…

    “The less control you have, the less science you have.”

  90. weing says:

    So let me see what you are saying. My simple mind has trouble dealing with abstractions. Suppose you take 1000 Mr Xs who meet the criteria for surgery, remove the gallbladder in half, give the other half your favorite herb or homeopathic potion, and wait, say, 10 years to see how many in each group die of gallstone-related complications or are hospitalized for them. If you had statistically significant differences between these groups, that would not be scientific proof, because some of the patients who didn’t have surgery initially had emergency surgery later on?

  91. weing says:

    “How do we know it wasn’t placebo or a natural progression? You can’t make those questions essential for determining efficacy of CAM, but optional for determining efficacy of a study drug.”
    Can you give an example of a study drug for which these questions are not essential?

  92. weing says:

    Confounders tend to ruin studies, sometimes by making a study drug look less effective than it is. An example is the FIELD trial. It was hoped to show the efficacy of fenofibrate, but it couldn’t control for the increased use of statins, which became the standard of care during the study period. The results were a dud, making the trial worthless in my opinion.

  93. pmoran says:

    HelenSan, I know what a confounder is, and it is something that applies when trying to derive causal associations from observational studies and anecdotal data.

    The whole raison d’être, the fundamental logic, of the modern placebo-controlled RCT is to “control for” known, and even unknown, confounders by ensuring that they will have equal influence in both groups, if the study is large enough and the subjects are properly randomised. This is what I meant by them “cancelling out”.
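
    Here is a minimal sketch of that logic in Python (the confounder, its 30% frequency, the effect sizes, and the arm size are all invented for illustration, not taken from any real trial):

```python
import random

random.seed(0)

N = 10_000          # patients per arm (made-up size)
TRUE_EFFECT = 2.0   # intrinsic efficacy of the drug (made-up)

def outcome(treated, confounded):
    # An unmeasured confounder (poverty, say) worsens the score;
    # the drug improves it; the noise term stands in for everything else.
    return TRUE_EFFECT * treated - 3.0 * confounded + random.gauss(0, 1)

# 30% of the pool carries the confounder, and nobody ever measures it.
pool = [random.random() < 0.3 for _ in range(2 * N)]
random.shuffle(pool)                       # randomisation into arms
drug_arm, placebo_arm = pool[:N], pool[N:]

drug_mean = sum(outcome(1, c) for c in drug_arm) / N
placebo_mean = sum(outcome(0, c) for c in placebo_arm) / N

print(f"confounder rate, drug arm:    {sum(drug_arm) / N:.3f}")
print(f"confounder rate, placebo arm: {sum(placebo_arm) / N:.3f}")
print(f"estimated effect: {drug_mean - placebo_mean:.2f} (true: {TRUE_EFFECT})")
```

    Nobody measured the confounder or designed an individual control for it. Randomisation alone made its frequency nearly identical in the two arms, so its influence subtracts out of the between-group comparison.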

    Do you not yet understand that?

  94. HelenSan says:

    Pmoran: “The whole raison d’être, the fundamental logic, of the modern placebo-controlled RCT is to “control for” known, and even unknown, confounders by ensuring that they will have equal influence in both groups, if the study is large enough and the subjects are properly randomised. This is what I meant by them “cancelling out”.”

    Medicine places a false reliance on large sample sizes and randomization, which are inadequate substitutes for true controls. It’s lazy pseudoscience. And no, they do not allow confounders to “cancel out.” That is a lie taught only in medical schools and in no other scientific discipline.

  95. HelenSan says:

    “Can you give an example of a study drug for which these questions are not essential?”

    Any study that has no do-nothing control group cannot answer the question about natural progression of disease.

  96. HelenSan says:

    “If you had statistically significant differences between these groups, that would not be scientific proof because some of the patients that didn’t have surgery initially, had emergency surgery later on?”

    Maybe the group that got CAM had to pay out of pocket because their health insurance wouldn’t pick up CAM. Their finances suffered and their spouses left them. They got depressed and started drinking. They lost their jobs, and all that stress caused decompensations, which resulted in hospitalizations and deaths from gallbladder disorders.

    A lot of things can happen in 10 years that influence the outcome. Without controlling for them in the study design, the study cannot scientifically attribute the outcome to the independent variable. A large sample size and randomization can *help* control for selection bias at the beginning, but they cannot control for many other confounders that influence the outcome during the course of the study.
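
    To put a number on that first clause, here is a toy simulation in Python (the 30% confounder rate, the arm sizes, and the number of simulated trials are arbitrary choices for illustration):

```python
import random

random.seed(1)

def worst_imbalance(n, trials=1000, rate=0.3):
    """Largest arm-to-arm gap in confounder frequency observed
    across many simulated randomisations of 2*n patients."""
    worst = 0.0
    for _ in range(trials):
        pool = [random.random() < rate for _ in range(2 * n)]
        random.shuffle(pool)
        gap = abs(sum(pool[:n]) - sum(pool[n:])) / n
        worst = max(worst, gap)
    return worst

for n in (20, 200, 2000):
    print(f"n = {n:4d} per arm: worst baseline imbalance = {worst_imbalance(n):.3f}")
```

    The worst chance imbalance at baseline shrinks as the arms grow; that is the part a large, randomized sample can address.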

  97. pmoran says:

    “Medicine places a false reliance on large sample sizes and randomization, which are inadequate substitutes for true controls. It’s lazy pseudoscience. And no, they do not allow confounders to “cancel out.” That is a lie taught only in medical schools and in no other scientific discipline.”

    What nonsense!

    Do you really think laboratory scientists know every possible influence they are controlling for when they use a reagent blank in their test-tube experiments? This is the study design that clinical trials try to mimic.

    Do lab scientists measure the day’s temperature, and reagent strength, and instrument sensitivities and try to “control” in some bizarre way for each one individually in every test run?

    No. You have a lot to learn but are not listening.

  98. Joe says:

    pmoran on 02 Oct 2009 at 1:49 am “You have a lot to learn but are not listening.”

    Yes, perhaps HelenSan should get a dictionary and look-up “sophomoric.”

    Or, as someone I know likes to say, “If you can’t understand, maybe it’s you”: http://www.apa.org/journals/features/psp7761121.pdf
    The article is titled “Unskilled and Unaware of It: How Difficulties in Recognizing One’s Own Incompetence Lead to Inflated Self-Assessments.”

  99. HelenSan says:

    Pmoran: “Do lab scientists measure the day’s temperature, and reagent strength, and instrument sensitivities and try to “control” in some bizarre way for each one individually in every test run?”

    If the day’s temperature, reagent strength, and instrument sensitivity can confound (influence) the dependent variable (the measured outcome), YES–they design controls for those confounders as best they can. If they cannot do it all in one experiment, they do it in successive experiments. Then in later papers, they cite the controls established in previous experiments.

    What they do not do is say “I will let my magic confounder cancellation elves, Big Sample and Random, take care of the confounders. They do such a good job I don’t even have to measure the confounders and make sure they are really gone!”

    “No. You have a lot to learn but are not listening.”

    Well, of course, I could say the same of you.

    I am married to a PhD physical chemist who works as a research scientist for a national laboratory. We talk about scientific methodology and design all the time. When I read some of the responses on this thread to him, he rolled his eyes and said, “Why do you talk to these people? They don’t know what science is, and they are not going to start learning from you.”

    I suppose we just have to agree to disagree about what constitutes science.

    I’ll leave you with this article, the speech in which Richard Feynman introduced the term “cargo cult science.”

    http://calteches.library.caltech.edu/51/2/CargoCult.pdf

    Here are some noteworthy quotations:
    1) “Nothing happened. So I was unable to investigate that phenomenon.”

    Notice that when Uri Geller was unable to bend his spoon, Feynman doesn’t jump to the conclusion, “Aha! It’s all fake!” He concludes more precisely that he was unable to investigate it. He understands something you guys often forget: that absence of evidence does NOT equal evidence of absence.

    2) “For example, if you’re doing an experiment, you should report everything that you think might make it invalid–not only what you think is right about it: other causes that could possibly explain your results; and things you thought of that you’ve eliminated by some other experiment, and how they worked–to make sure the other fellow can tell they have been eliminated.”

    He is, of course, talking about what we call confounders, “other causes that could possibly explain your results.” You need to report them, report the controls you are relying on from previous experiments, etc., “to make sure the other fellow can tell they have been eliminated.”

    Well, I’m the other fellow. When I read an RCT, I see no evidence that the confounders have been eliminated. Therefore I see no evidence to conclude efficacy or any interpretation that goes beyond the confidence of a correlational study.

    You see, if I ask the question, “What if some placebo patients started using recreational drugs in the middle of the study, and those drugs were what caused a poorer outcome?”

    You might say, “Well, I am sure some study patients started using recreational drugs as well, because large samples and randomization make sure both groups are equally likely to start using drugs in the middle of the trial.”

    I say, “Well, that would be convenient if it were true. Do you have evidence that recreational drugs were used, or not used, in equal numbers in both groups? Did you at least ask them at the end of the study, so you can say, ‘Both groups reported the same level of recreational drug use’?”

    You might say, “Oh no, I don’t need to ask. Everyone who is not ignorant just KNOWS large sample sizes and randomization eliminate the need for controls for every little possible confounder, and for evidence of those controls.”

    After reading the article, what do you think Dr. Feynman would say to that?
