Articles

A Pair of Acupuncture Studies

Two recent acupuncture studies have received some media attention, both purporting to show positive effects. Neither study, however, is a clinical efficacy trial, so neither can be used to support claims of efficacy for acupuncture – although that is how they are often being presented in the media.

These and other studies show the dire need for more trained science journalists, or science blogging – such studies only make sense when put into proper context. None of the media coverage I read bothered to do this.

The first study comes out of South Korea and involves using acupuncture in a rat model of spinal cord injury. The researchers used a standard method of inducing spinal cord injury in rats and compared various acupuncture locations to a no-acupuncture control. They followed a series of metabolic outcomes, as well as the extent of spinal cord injury and functional recovery. They conclude:

Thus, our results suggest that the neuroprotection by acupuncture may be partly mediated via inhibition of inflammation and microglial activation after SCI and acupuncture can be used as a potential therapeutic tool for treating acute spinal injury in human.

The notion that acupuncture will actually improve outcome after acute spinal cord injury is, of course, extraordinary. This goes far beyond a subjective decrease in pain or some other symptomatic benefit. Therefore similarly extraordinary evidence should be required to support such a claim – and this study does not provide that.

In reading through the details of the study, several factors caught my attention. The first is that there is no indication that the researchers were blinded. This alone calls the results into serious question. It is all too easy for researchers to allow personal bias to affect study results, even when they seem quantitative. We need look no further than the homeopathy research of Jacques Benveniste to see this (initially impressive results were investigated by Nature and found to be the result, charitably, of inadequate blinding).

Further, the researchers looked at several acupuncture points and then chose the ones that seemed to have an effect. This allowed for retrospective cherry picking – it is possible, in other words, that they received a scattering of random effects and chose the ones that appeared positive.
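
As a rough illustration of the multiple-comparisons problem at work here (assuming, purely for the sake of the example, several independent comparisons each tested at the conventional p < 0.05 threshold), the chance of at least one spurious “hit” grows quickly with the number of points tested:

    # Chance that at least one of k null comparisons looks "positive" at
    # p < 0.05 purely by luck: 1 - 0.95**k. Illustrative numbers only.
    for k in (1, 3, 5, 10):
        print(k, round(1 - 0.95 ** k, 2))  # 0.05, 0.14, 0.23, 0.4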

The effect sizes themselves, while statistically significant, were not clinically impressive. If real, even these small effects would be useful in the treatment of spinal cord injury – but that is exactly the point: such small effect sizes are easily the product of randomness or bias.

And finally, it should be noted that the study comes out of South Korea. It is well established that countries where acupuncture is culturally important tend to produce a much higher rate of positive outcomes than the same research conducted in Western countries. The motivation to prove acupuncture appears to be a significant source of bias. Similarly, we recognize that there is a bias in favor of efficacy in pharmaceutical company sponsored research – the principle is the same. The bias of the researchers, even in studies that appear well-controlled on paper, has a measurable effect.

The bottom line with this study is that it provides weak evidence for a very extraordinary claim. It is of no practical use unless and until it is independently replicated with proper blinding. If you believed what you read in the media, however, you would be led to conclude that spinal cord injured patients could be made to walk again simply by sticking needles into magical locations on their bodies.

The second study uses quantitative sensory testing (QST) to look at pain threshold at baseline and after acupuncture and “electroacupuncture”. They conclude:

There were congruent changes on QST after 3 common acupuncture stimulation methods, with possible unilateral as well as bilateral effects.

In other words – acupuncture decreases the perception of pain. This small study suffers from the same primary problem as the other – it is described as only single-blinded. The subjects themselves were not blinded to whether or not they were getting “real” acupuncture vs a sham or placebo. The totality of prior acupuncture research has clearly demonstrated that such unblinded studies are all but useless. There is a significant placebo effect from getting poked with needles, and this is sufficient to explain the results of this study.

While QST is quantitative, it is still subjective. In fact, using QST has fallen a bit out of favor in neurological studies because the elaborate procedure is no more reliable as an outcome measure than straightforward sensory testing. QST is still reliant on the subjective report of the subject.

Further, this study mixed acupuncture with “electroacupuncture.” I strongly maintain that there is no such thing as “electroacupuncture” – it is, rather, the application of transcutaneous electrical stimulation through an acupuncture needle. This is no more acupuncture than the application of morphine through a hollow acupuncture needle should be considered acupuncture.

It is possible that needling and electrical stimulation do decrease subjective pain perception (although we can’t conclude that based upon this study). One pain or sensory stimulation can certainly distract you from another. There is also the principle of counter-irritation – the inhibition of pain pathways by activating parallel sensory pathways. Bang your elbow and you will rub it to decrease the sharp pain.

Conclusion

Given the state of the acupuncture literature, such small and insufficiently blinded studies are of little value. It has already been established that there is a significant placebo effect surrounding the ritual of acupuncture and there are mechanisms of non-specific effects, such as counter-irritation. None of this can be logically used to support the underlying assumptions of acupuncture – that there is anything special about the designated acupuncture points, or that they can be used to manipulate “chi” or some other mysterious energy.

We are already well past the stage of preliminary studies in acupuncture. Only rigorously controlled studies are of any use. And the term “electroacupuncture” causes only confusion and cannot be meaningfully used. It represents a blurring of variables, when good science should endeavor to isolate them.

Also, in a perfect world, the general press would not report on every preliminary study as if they were a definitive medical breakthrough. Such medical news stories should be covered in more focused outlets that have the space and expertise to put the results into a reasonable context.


40 thoughts on “A Pair of Acupuncture Studies”

  1. Draal says:

    The first is that there is no indication that the researchers were blinded. This alone calls the results into serious question.
    Steven, I understand what you’re saying. However, it got me thinking about how I conduct my own research. Please consider the following scenario.

    I often work with E. coli. When conducting an experiment, I’ll hold everything constant but change one variable. For example, I’m overexpressing an enzyme in E. coli that can convert chemical A into chemical B. I’m interested in finding out if the supplementation of a cofactor will increase extracellular concentrations of chemical B after a 24 hour fermentation. I add the cofactor, dissolved in a carrier solvent, into batch 1. I add the same volume of just carrier solvent to batch 2. I know which batch is which. After 24 hours, I analyze the cultures using the same method for both batches. Later I publish the results in a peer reviewed journal.

    Are my published results automatically called into serious question? Is there a difference between how I do research and this study?

  2. Prior acupuncture studies have demonstrated the absolute need for double blinding and adequate controls in acupuncture studies.

    RE: the second study:

    Am I missing something, or was there not an actual control group here?

    “Each volunteer received all 3 acupuncture treatments tested.”

    I’ve gone through it several times to try to find the control group that I must be missing, and it’s not there. The results show the experimental groups, the baseline, and no control group.

    How is it possible to conclude anything from this study, even if it were fully blinded? There appears to be no control group to compare the experimental groups to.

    They seem to think the control group is the experimental group(s) before treatment.

    You can’t use your baseline as a control in an experiment like this.

    What an utterly useless, waste-of-time study.

  3. windriven says:

    Dr. Novella, it is clear that a wide swath of humanity is more interested in romantic notions of the efficacy of ancient nostrums than in the cold, hard realities of scientifically proven treatments. I despair of ever seeing the corner turned for those who choose fear and superstition over science and technology. Perhaps it is something hard-wired. I don’t believe I have ever met a magical thinker who has gone on to become a rational thinker. Magical thinking doesn’t seem to be well correlated with intelligence or education (although for obvious reasons fewer scientists seem to be magical thinkers).

    Forgive the long preamble. My basic question is: why bother? Aren’t we better off fighting the intrusion of woo into clinical medicine and medical education than trying to move the immovable? I understand that vaccination is a different issue because of herd immunity. But in the case of adults, why not leave them to their woo? An old mentor of mine observed: “Never try to teach a pig to sing. It wastes your time and it annoys the pig.”

  4. Draal – is there a reason why you don’t blind your assessment of the two groups? This would strengthen your research, as it would minimize the effects of bias. What we can say is that, in retrospect, there are many examples of basic lab research that seemed quantitative and objective but was unblinded and turned out to be all confirmation bias in the end. Dozens of labs thought they detected N-rays, until the protocols were blinded – then “poof!”

    windriven- while I agree with you to a point, you are engaging in a bit of a false dichotomy. The world is not divided into rationalists and believers. If anything, these two groups are at the extremes of a bell curve, with the hump being somewhere in the middle.

    Blogs like this are optimal for the vast middle – people who can go either way, depending on the information they have access to. They are somewhat rational, but also willing to believe, and can be persuaded by information and explanation.

    We cannot touch the “true believer” – but actually I have plenty of counter examples of “converts” to skepticism. Even if they are the vast minority and an exception, they exist, and at least for those individuals the effect was profound.

    But also – we educate the scientific skeptics. I read my colleague’s posts to learn something.

  5. Todd W. says:

    Dr. Novella,

    This article is somewhat timely. Just today I discovered that there are three separate clinical trials examining acupuncture at Massachusetts General Hospital in Boston.

  6. windriven says:

    @ Dr. Novella

    “[t]hese two groups are at the extremes of a bell curve…”

    I hope you’re right but my personal experience suggests more of an inverted bell. In my experience there aren’t many rational thinkers who also believe in megadose vitamins or acupuncture. But as your colleague in ID gleefully observes, ‘in my experience’ are the three most dangerous words in medicine.

    I’d love to read about some of the converts to skepticism. It might make an interesting blog post sometime when the scientific issue cupboard is otherwise a little bare. I’ve had a few but not enough to kindle optimism.

  7. qetzal says:

    Draal,

    I’m a molecular biologist, so I understand exactly where you’re coming from. Blinding is quite uncommon in a typical mol bio experiment.

    The potential for bias depends in part on the subjectivity of the methods we use to measure an outcome. IIRC, the main outcome measure in the Benveniste studies was mast cell degranulation, which was being assessed visually, was quite subjective, and was quite prone to observer bias. In contrast, something like an ELISA or an analytical method to measure chemicals A & B will usually be much less subjective.

    However, we have to keep in mind that even if the method itself is relatively objective, there are still many opportunities for experimenter bias to alter the final conclusion. Depending on the robustness of the system, the outcome could still depend on things like how the experiment is set up, what order we add things, what results we reject as “obviously” due to some problem in the experiment, etc. If we know which samples “should be” positive, we may unconsciously set things up to favor the outcome we hope to see.

    Take your example. Suppose you set up three fermentations without cofactor and three with. You work them up and analyze them sequentially for A & B. Naturally, for convenience, you analyze them in order – first the 3 without, then the 3 with. At the end you calculate B/A and find that the ratio is significantly higher for the 3 with. You conclude the cofactor increased conversion of A to B.

    What you may not have realized is that A was breaking down over time as each sample was analyzed in succession, while B is relatively stable. Since you analyzed the samples with cofactor last, they had more time for A to break down, giving an artificial increase in B/A. But that doesn’t occur to you, because you expected B/A to increase with cofactor. Since the result matched your expectation, you were less likely to question its validity or search for other explanations. If instead you had a colleague code three vials of cofactor and three of solvent, and give them to you in random order, the degradation effect would be less likely to cause a bias in the final data.

    Obviously, that’s a contrived situation. And as I already admitted, I don’t use blinding in my work very often either. But I think Dr. Novella is correct to point out that there may be more bias in our results than we think, and that a properly blinded and randomized study may help minimize that.
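
    To make the contrived scenario concrete, here is a minimal Python sketch (every number is hypothetical, not from any real assay): analyte A decays a little with each analysis slot, B is stable, and the cofactor does nothing at all. Analyzing the groups in blocks still produces a difference in B/A, while blind-coded random ordering keeps the decay artifact from lining up with the groups.

        import random

        DECAY_PER_SLOT = 0.05          # hypothetical: fraction of A lost per analysis slot
        TRUE_A, TRUE_B = 100.0, 50.0   # identical in every sample: no real cofactor effect

        def measured_ratio(slot):
            """B/A as measured for a sample analyzed in the given order slot."""
            a_left = TRUE_A * (1 - DECAY_PER_SLOT) ** slot
            return TRUE_B / a_left

        def group_means(order):
            """Mean measured B/A per group for a given analysis order."""
            means = {}
            for group in ("control", "cofactor"):
                ratios = [measured_ratio(i) for i, g in enumerate(order) if g == group]
                means[group] = sum(ratios) / len(ratios)
            return means

        samples = ["control"] * 3 + ["cofactor"] * 3

        # Convenient but biased: all controls analyzed first, cofactor samples last.
        print("sequential:", group_means(samples))

        # Blind-coded, randomized order: the artifact no longer tracks the groups.
        print("randomized:", group_means(random.sample(samples, k=len(samples))))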

  8. Draal says:

    qetzal said, “What you may not have realized is that A was breaking down over time as each sample was analyzed in succession, while B is relatively stable.”

    In such cases, I have determined the stability of the compounds over time and whether or not they are metabolites of the bacteria or not. Double parallel and reference subtraction will help correct for these issues.

    Draal – is there a reason why you don’t blind your assessment of the two groups?
    What qetzal said. Blinding is uncommon in a wet lab. It’s not taught (if ever) as an integral part of such research. I have never come across an article where a chemical or protein is being tested in a biological or microbiological assays. I suppose it’s because it’s considered unnecessary. If the results are consistently replicable, it’s assumed there is an actual difference that is being observed.

  9. Draal says:

    Meant to say,

    I have never come across an article that used blinding when a chemical or protein is being tested in biological or microbiological assays.

  10. Draal says:

    I should also say that multiple experiments that use different detection techniques are frequently used as well. If all the results support the hypothesis, the more believable it is. But still, no blinding.

  11. Maz says:

    I have also never blinded any of my basic-science experiments. I think one of the main problems is that each individual procedure is generally solitary — it only takes one person to do a western.

    In order to blind all of my individual experiments, I would need to pester my lab-mates a lot more often. Don’t get me wrong, I’m sure that I SHOULD be blinding my experiments more often — I just understand why it’s uncommon.

    The solution is likely to lie with the lab’s PI. If I came to a new lab and was told it was lab policy to blind all experiments, I would have no problem doing it.

  12. mikerattlesnake says:

    @draal (and Dr. Novella)

    Although blinding would always be preferable, isn’t the important difference here the use of human subjects (and all the complexities that come along with analyzing their response to treatment)? Bacteria are relatively simple, so there are fewer confounding factors, no?

  13. mikerattlesnake says:

    whoops, should have said “human and animal” subjects.

  14. Draal says:

    @mikerattlesnake
    I was looking to see if Steven makes the same distinction. Is blinding necessary for all types of experiments or just ones with human subjects and animals? In other words, if the article does not mention blinding in the experimental section, is the research automatically called into serious question?

    I haven’t worked with animals so I don’t know. If it’s a behavior that is being observed, then it’s obvious that blinding is needed (like the horse that could count but was really interpreting the trainer’s body language). But if it’s analyzing concentrations in the blood or urine, then it doesn’t seem necessary to me. Sure, the samples can be screwed with, but if you’re hell bent on skewing the results, then blinding won’t make a difference – the data can just be fudged.

  15. David Gorski says:

    I often work with E. coli. When conducting an experiment, I’ll hold everything constant but change one variable. For example, I’m overexpressing an enzyme in E. coli that can convert chemical A into chemical B. I’m interested in finding out if the supplementation of a cofactor will increase extracellular concentrations of chemical B after a 24 hour fermentation. I add the cofactor, dissolved in a carrier solvent, into batch 1. I add the same volume of just carrier solvent to batch 2. I know which batch is which. After 24 hours, I analyze the cultures using the same method for both batches. Later I publish the results in a peer reviewed journal.

    Are my published results automatically called into serious question? Is there a difference between how I do research and this study?

    That’s different. In this experiment, the investigators were doing measurements that are not entirely objective. Measuring a chemical level is fairly objective, depending on the assay. Immunohistochemical and immunofluorescent analyses such as these are not; they require judgment to identify structures and the intensity of staining of structures. The investigators counted various stained structures under the microscope at one point. At another point they measured the volume of the zone of necrosis due to the spinal cord injury. And they all did it unblinded.

    Because these analyses are not completely objective, whenever we do experiments that require pathologists to evaluate tissue or investigators to analyze tissue under the microscope, best practice is to blind the observer to the experimental group, because it is very easy for subtle biases to slip into the counting or the estimates. It’s also one reason why we now try so hard to use objective measurements, such as computer image analysis, to analyze histological parameters. If computer image analysis is not available, then best practice is to blind the investigator doing the counting and, preferably, to have two or more different investigators do the counting in order to verify low inter-observer variability.

    I do note, however, that it explicitly states in the methods that the investigators who looked at the rats’ behavior were blinded to experimental group.

  16. pmoran says:

    I cannot see any point to routine blinding of studies that use inanimate sensors to measure outcomes in non-sentient systems. It surely would make lab research unnecessarily complex and expensive.

    Of more concern are various post hoc ways of manipulating results so as to obtain statistical significance, some of which have been mentioned above. The replication of results by independent parties decides whether results are valid or not.

  17. DREads says:

    There are some advantages to research in computer science such as machine learning, robot vision, and mathematical modeling in that experimental controls are easier to automate with minimal cost. There are experimental setups that we commonly use that would be completely impractical for a physical wet-lab experiment such as cross-validation/rotation estimation. In addition, we can serialize part of an experiment and electronically send it to a third-party for final analysis on sequestered data. This would be hard to do with a wet lab experiment. Very often, analyses derived from one data sample do not generalize to others. Thus, one must be extremely careful when choosing what to measure after performing one piece of an experiment.
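
    To sketch what I mean by rotation estimation, here is a minimal k-fold cross-validation loop in plain Python (`fit` and `accuracy` are placeholders for whatever model and metric a given study actually uses – they are not real library calls):

        import random

        def k_fold_indices(n, k, seed=0):
            """Randomly partition indices 0..n-1 into k disjoint folds."""
            idx = list(range(n))
            random.Random(seed).shuffle(idx)
            return [idx[i::k] for i in range(k)]

        def cross_validate(data, labels, fit, accuracy, k=5):
            """Average held-out score over k train/test rotations."""
            scores = []
            for fold in k_fold_indices(len(data), k):
                held_out = set(fold)
                train_x = [x for i, x in enumerate(data) if i not in held_out]
                train_y = [y for i, y in enumerate(labels) if i not in held_out]
                model = fit(train_x, train_y)
                scores.append(accuracy(model, [data[i] for i in fold],
                                       [labels[i] for i in fold]))
            return sum(scores) / k

    The control – never scoring a model on the data used to build it – is enforced mechanically, which is exactly what is hard to arrange in a wet-lab experiment.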

  18. BobbyG says:

    “But as your colleague in ID gleefully observes, ‘in my experience’ are the three most dangerous words in medicine.”
    ___

    One might also observe that “experience is that which you get just AFTER you really needed it.”

  19. David Gorski says:

    I cannot see any point to routine blinding of studies that use inanimate sensors to measure outcomes in non-sentient systems. It surely would make lab research unnecessarily complex and expensive.

    And no one here is advocating that.

  20. Versus says:

    windriven says:

    “it is clear that a wide swath of humanity is more interested in romantic notions of the efficacy of ancient nostrums than in the cold, hard realities of scientifically proven treatments.”

    That may be true, but some of these people are victims of the licensing of alternative practitioners by the (U.S.) states. You can hardly blame a person for going to a chiropractor when the state allows him to call himself “doctor” and gives him a broad scope of practice. Same for acupuncturists and NDs.

  21. DREads says:

    windriven says:
    “it is clear that a wide swath of humanity is more interested in romantic notions of the efficacy of ancient nostrums than in the cold, hard realities of scientifically proven treatments.”
    That may be true, but some of these people are victims of the licensing of alternative practitioners by the (U.S.) states. You can hardly blame a person for going to a chiropractor when the state allows him to call himself “doctor” and gives him a broad scope of practice. Same for acupuncturists and NDs.

    Unfortunately, government licensing doesn’t mean much, even an MD license. There are many quack MDs out there with a state license in good standing. Some states are better than others. Going to a chiropractor just because they have a license reflects credulity. It is a fallacy to assume that government licensure will always protect you.

  22. DREads says:

    I cannot see any point to routine blinding of studies that use inanimate sensors to measure outcomes in non-sentient systems. It surely would make lab research unnecessarily complex and expensive.

    And no one here is advocating that.

    Indeed, I agree. However, choosing a different variable to measure in the middle of an experiment is problematic, e.g. picking a different inanimate sensor because you don’t like the measurements coming out of the first one. Some sensors are complex and require significant human operation. What do we do about these?

    Bias resulting from a lack of blinding during an experiment can also come up after an experiment. When it comes to the post-experiment data analysis phase of a study, I’m a strong advocate for establishing the statistical analysis to perform before applying it to the data. If you choose it based on the data, your results can be severely biased. When it isn’t possible to establish the statistical methodology beforehand, the data should be divided into partitions and the final reported statistics should only come from data not used to choose the methodology. Statistical bootstrapping techniques are acceptable when properly used.
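
    For example, here is a minimal percentile-bootstrap sketch in Python (the sample values are hypothetical placeholders); the discipline is that the statistic and the method are fixed before anyone looks at the data:

        import random

        def bootstrap_ci(data, stat, n_boot=10_000, alpha=0.05, seed=0):
            """Percentile-bootstrap (1 - alpha) confidence interval for stat(data)."""
            rng = random.Random(seed)
            reps = sorted(
                stat([rng.choice(data) for _ in range(len(data))])
                for _ in range(n_boot)
            )
            return reps[int(n_boot * alpha / 2)], reps[int(n_boot * (1 - alpha / 2))]

        mean = lambda xs: sum(xs) / len(xs)
        sample = [4.1, 5.3, 3.8, 6.0, 5.2, 4.7, 5.9, 4.4]  # hypothetical measurements
        print(bootstrap_ci(sample, mean))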

  23. nitpicking says:

    For all you lab guys, consider a highly objective value for which there was no obvious reason for bias: the charge on the electron. When Nobel laureate Robert Millikan first measured it, his number was off. We even know why: he had the wrong value for the viscosity of air.

    What makes this relevant is that for decades afterward EVERYONE ELSE measured the value of the electron’s charge as being lower than it really is. There is no suggestion of conscious fraud; it’s just that they couldn’t believe Millikan got it wrong, so they threw out “anomalously” high values as obvious problems with the instrument, and made similar errors, so that each new value crept only slightly closer to the true one. This sort of “herd mentality science” could certainly affect chemistry or microbiology.

  24. qetzal says:

    I agree with DREads @8:16pm and nitpicking above. Even when the measurement is objective, there are other opportunities for bias to influence the final result. The statistical treatment that gives you the expected outcome must obviously be the right one. Values that are ‘obviously’ too low are judged to be errors and get thrown out.

    Blinding & randomizing everything would help reduce such errors, but would be pretty impractical for most basic bench work. We all just have to a) do the best we can to control our own biases, and b) remember that important findings really, really need to be replicated by another lab before we accept them as real. (And as nitpicking notes, even that may not always be reliable.)

  25. criticalist says:

    Actually, whilst I agree with most of Steve’s analysis of this paper, I don’t think his comments on the lack of blinding are entirely correct.

    From page 4: “Behavioral analyses were performed by trained investigators who were blind as to the experimental conditions.”

    I haven’t looked at the other outcome measures in detail, but my impression is that these were quantitative measurements of PCR activity and the like, and so as mentioned above, blinding may not be quite as important.

  26. Ian says:

    The last point reminds me of a comment from this article:
    http://www.theatlantic.com/national/archive/2010/05/google-and-the-news/56584/

    The guy who created Google News noted that it’s such a waste of resources to have a dozen journalists write a dozen articles about the same topic in almost the exact same fashion. Similarly, the content of entire newspapers mirrors that of other newspapers. This is really easy to see on Google News, where articles are all grouped together.

    So I’d say that “medical news stories should be covered in more focused outlets that have the space and expertise to put the results into a reasonable context” being done professionally isn’t an unrealistic outcome in the Internet age, and not just with medical news. Journalists won’t be able to make a living by just parroting each other as they do now. They will need to differentiate themselves and create more specialized beats.

    Hopefully this means science journalists who understand science.

  27. David Gorski says:

    I haven’t looked at the other outcome measures in detail, but my impression is that these were quantitative measurements of PCR activity and the like, and so as mentioned above, blinding may not be quite as important.

    Nope. As I explained earlier in the comment thread, several of the other measures were immunohistochemical and immunofluorescent detection of various cell types and antigens in tissue sections requiring a trained observer to count the stained cells and structures. These are exactly the sorts of measurements that are very prone to subtle biases resulting in systematic error when the counting is done manually, as it was done in this study. For one thing, you have to make a judgment whether something is stained “positive” or not, and for staining that is relatively weak that can often be a judgment call. For another thing, even though most counting protocols require five to ten random high-powered fields to be counted, there can easily be subtle bias in picking which high-powered fields to count.

    That’s why, when these types of measurements are done manually, without good computer-aided image analysis, it’s imperative that the observer be blinded. It’s also desirable, although not essential, that another person repeat the counting, in order to assess inter-observer variability in the measurements. It never ceases to amaze me that papers in which histology or immunohistochemistry is assessed manage to be accepted for publication when the observers doing the analysis are not blinded to experimental group.

  28. Geekoid says:

    In my lab we always did double blind studies. Of course, I was studying the effects of sudden visual impairment in mice.

    *rimshot*

    @windriven:

    I don’t think there is such a thing as a ‘conversion to skepticism’. In fact I consider it an invalid question for a very simple reason. Everyone is a skeptic. It’s an innate part of being human.
    I think the question is: how many people practice critical thinking and understand the scientific method?

  29. criticalist says:

    These sorts of analytic techniques are definitely outside my area of expertise, so I would defer to better informed opinions. Reading through the methods section again, it is clear they do refer to “counting” of some outcome variables, and this of course should have been blinded.

    However, in some of the immunohistochemistry sections they also refer to analysing the results using some kind of software – “AlphaImager,” I think. I had assumed that this is a way of obtaining unbiased, objective results for some of these staining techniques. Is this not the case? As I said, I’m not familiar with this field, so am genuinely curious.

  30. DREads says:

    However, in some of the immunohistochemistry sections they also refer to analysing the results using some kind of software – “AlphaImager” I think. I had assumed that this is a way of obtaining unbiased objective results for some of these staining techniques. Is this not the case? – as I said, I ‘m not familiar with this field, so am genuinely curious.

    It is true that automated imagery analysis techniques can improve objectivity but they can’t completely remove bias. Counting algorithms are parameterized, and these parameters are usually estimated using human-labeled data. If the humans aren’t blinded during the labeling process, the bias propagates to the estimated parameters. In addition, if an estimated counting model overfits, the quality of its counts may be poor.
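
    As a toy illustration (this is not AlphaImager’s algorithm, which I have not seen; all names and numbers here are made up): consider a one-parameter counter that calls a structure “stained” when its intensity exceeds a threshold, with the threshold fit to human labels. If the labelers were unblinded, their bias is baked into the fitted parameter and propagates to every count the software produces.

        def fit_threshold(intensities, human_labels):
            """Pick the cutoff that best reproduces the human positive/negative calls."""
            def errors(t):
                return sum((x > t) != label for x, label in zip(intensities, human_labels))
            return min(sorted(set(intensities)), key=errors)

        def count_positive(intensities, threshold):
            """Automated count: structures brighter than the fitted threshold."""
            return sum(x > threshold for x in intensities)

        # Hypothetical training intensities labeled by a (possibly unblinded) observer.
        train_x = [10, 35, 42, 55, 61, 70, 88]
        train_y = [False, False, True, True, True, True, True]
        t = fit_threshold(train_x, train_y)
        print(t, count_positive([20, 47, 90, 33, 66], t))  # -> 35 3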

    Estimating algorithms for counting and localization in images is one area of my research. I am interested in mathematically provable statistical guarantees of accuracy given different sets of assumptions.

  31. wertys says:

    The second of these studies is utterly flawed, as there is already quite a body of research showing that any electrical stimulation of the deep somatic tissues, such as muscle and ligament, activates descending inhibition pathways in the spinal cord and brainstem. Invoking acupuncture as an explanation fails a number of logical tests; principally, Occam’s razor dictates that no further explanation is necessary if a phenomenon can be explained using existing knowledge. One can accept the study in its entirety and still reject acupuncture as a treatment modality, as all they have done is unwittingly replicate other, more conventional studies.

  32. DanaUllman says:

    I am intrigued that so many of the people above have noted that they conduct basic sciences research without a placebo control group…but that does not seem to stop them from spewing venom on any study, including animal and basic science trials, that doesn’t have a placebo group and that tests any type of “alternative” treatments.

    Thanx for this acknowledgement.

    Can anyone out there say “double standard”? Yeah, I didn’t think so…

  33. BillyJoe says:

    Dana,

    There are different ways to investigate different questions.

    A pharmaceutical drug requires a clinical trial involving random allocation, placebo control, double blinding, and a sufficient number of patients (amongst other requirements).

    Show us why a homoeopathic remedy should be treated any differently to a pharmaceutical drug.

  34. DanaUllman says:

    Hey Billy,

    My point is that there are hundreds of basic sciences trials that simply show that various homeopathic potencies have biological action. When you folks say that such effects are “impossible,” it simply shows your state of denial (or your ignorance).

    A large number of these trials have been replicated:

    Endler PC, Thieves K, Reich C, Matthiessen P, Bonamin L, Scherr C, Baumgartner S. Repetitions of fundamental research models for homeopathically prepared dilutions beyond 10-23: a bibliometric study. Homeopathy, 2010; 99: 25-36.

    Do your homework…

  35. Wolfy says:

    Dana

    1. How does your above comment to BillyJoe reinforce the point you were making regarding this “double standard”?

    2. Not every basic science (animal, chemical, biochemical) experiment needs a placebo control, per se. However, a positive control and a negative control run prospectively with the experimental are very useful for the purposes of comparison. Further, not every control is the “right” control for the hypothesis in question.

    3. I’m not sure I would call 107 studies of which “24 experimental models in basic research on high homeopathic potencies, which were repeatedly investigated” a particularly “large number of trials.”

  36. jsjohnson says:

    One might also observe that “experience is that which you get just AFTER you really needed it.”

    “Experience is a comb that life gives to men once they are bald.”

  37. squirrelelite says:

    @DanaUllman,

    You referred to “hundreds of basic sciences trials that simply show that various homeopathic potencies have biological action”.

    Have any of these trials demonstrated reliably distinguishing (greater than 90% accuracy for a large number of test cases) between a homeopathic medicine prepared at a potency greater than 12C (in other words a dilution greater than the Avogadro number for which there is probably not a single atom or molecule of the original curative substance left in the solution) and an identically prepared sample which did not originally include the curative substance?

    If so, what is the reference?

    A little while ago, I saw a homeopathic medicine which contained the same substance at three different dilutions or potencies. I find this curious. If the rules of standard chemistry apply (the effect is proportional to the concentration), then the lowest dilution/potency should overwhelm the tiny effect of the other two. If the rules of homeopathy apply, the highest dilution/lowest concentration has the strongest effect and should overwhelm the other two. In neither case does the middle concentration have a significant effect. So what is the purpose of including three different potencies of the same medicine?
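
    For anyone who wants to check the arithmetic behind my parenthetical, a quick sketch:

        # A 12C potency is twelve serial 1:100 dilutions, a total factor of
        # 100**12 = 1e24, which exceeds Avogadro's number. Starting from a full
        # mole of the original substance, the expected number of surviving
        # molecules is therefore less than one.
        AVOGADRO = 6.022e23
        dilution_12c = 100.0 ** 12
        print(AVOGADRO / dilution_12c)  # ~0.6 molecules expected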

  38. DREads says:

    Dana Ullman writes:

    I am intrigued that so many of the people above have noted that they conduct basic sciences research without a placebo control group…but that does not seem to stop them from spewing venom on any study, including animal and basic science trials, that doesn’t have a placebo group and that tests any type of “alternative” treatments.
    Thanx for this acknowledgement.
    Can anyone out there say “double standard”? Yeah, I didn’t think so…

    Dana,

    First, there is a distinction between a placebo, blinding, and a control group. You seem to be lumping terms together. One can have a control group with or without blinding and the control variable may or may not be a placebo.

    Second, the quality of a study is not black or white; it falls on a spectrum. Part of being a good scientist involves offering constructive criticism of your peers as well as accepting criticism of your own work to mutually improve each other’s science. Even the best and most carefully designed studies can be improved. Homeopathy falls on the other side of the spectrum. Homeopaths blatantly ignore basic physics, chemistry, and thermodynamics. That’s quite different from one scientist offering subtle criticism of another scientist’s work. Scientists offering each other healthy criticism while rejecting implausible ideas like homeopathy may be a double standard to you, but to me, it’s just sensible and time saving.

    Third, you are confusing “impossible” with highly unlikely. Science has enabled man to safely go to the moon and return to Earth, ensures water is safely filtered from waste before it is reused, carries information blindingly fast over fiber optic networks, has led to the creation of antibiotics to cure people of deadly infections, etc. To reject fundamental aspects of science in support of homeopathy is completely unproductive. I presume you fly. How would you explain why your airplane stays aloft and safely lands? Would you and your cadres of homeopaths be able to use your homeopathic “theories” to build a jet? Homeopathy is so implausible and contrary to several hundred years of scientific understanding that we can just label it as unlikely, waste-of-my-time b*llsh*t and move on. Impossible vs. highly unlikely, um, who cares? Life is too short to study magical, mystical “made up” stuff. I, like many other scientists, prefer to direct my efforts, money, and resources toward ideas that are at least somewhat likely.

  39. Pman says:

    Here’s a nice SCI acupuncture study:

    http://journals.lww.com/ajpmr/Abstract/2003/01000/Clinical_Trial_of_Acupuncture_for_Patients_with.4.aspx

    We are going to replicate it in Charlotte.

  40. Harriet Hall says:

    Pman,

    Instead of repeating what they did, how about doing a study with a more appropriate control group?
