Belief in Echinacea

Note: The study discussed here has also been covered by Mark Crislip. I wrote this before his article was published, so please forgive any repetition. I approached it from a different angle; and anyway, if something is worth saying once it’s probably worth saying twice.

 

Is Echinacea effective for preventing and treating the common cold, or is it just a placebo? My interpretation of the evidence is that Echinacea does little or nothing for the common cold. Initial reports were favorable, but they were followed by four highly credible negative trials in major medical journals. A Cochrane systematic review was typically wishy-washy. The Natural Medicines Comprehensive Database rates it as only “possibly effective,” commenting that

Clinical studies and meta-analyses show that taking some Echinacea preparations can modestly reduce cold symptom severity and duration, possibly by about 10% to 30%; however, this level of symptom reduction might not be clinically meaningful for some patients. Several other clinical studies found no benefit from Echinacea preparations for reducing cold symptoms in adults or children…

A review on the common cold in American Family Physician stated that Echinacea is not recommended as a treatment.

I have a friend who believes in Echinacea. She says for the last several years she has taken Echinacea at the first hint of a cold, and she hasn’t developed a single cold in all that time. I told her that if that was valid evidence that it worked, I had just as valid evidence that it didn’t. For the last several years I have been careful not to take Echinacea at the first hint of a cold, and I haven’t had a single cold in all that time either. So I could claim that not taking Echinacea is an effective cold preventive! I thought my “evidence” cancelled out hers; she said we would just have to agree to disagree.

A recent study looked at the effect of belief on response to Echinacea and dummy pills. “Placebo Effects and the Common Cold: A Randomized Controlled Trial” was published by Barrett et al. in the Annals of Family Medicine.

A news report about the study said that “the placebo effect reduced the duration of common colds” and that the study “reflected the power of mind over body.” But that is not what the study showed.

Methods

719 subjects with colds were randomized into 4 groups:

  1. No pills
  2. Placebo, blinded (didn’t know whether they were getting placebo or Echinacea)
  3. Echinacea, blinded (didn’t know whether they were getting placebo or Echinacea)
  4. Echinacea open label (knew they were getting Echinacea)

How do we know the subjects had colds? We don’t, really.

  • They answered yes to either “Do you think you have a cold?” or “Do you think you are coming down with a cold?”
  • They scored at least two points on the Jackson scale (8 self-reported symptoms rated from 0 to 3 for severity).
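
The enrollment rule above is simple arithmetic and can be sketched in a few lines. This is purely illustrative: the symptom labels are the classic eight Jackson symptoms, and the function names and data layout are my own, not anything from the study’s protocol.

```python
# Illustrative sketch of the enrollment check described above (not the study's code).
# Eight self-reported symptoms, each rated 0-3; a participant qualifies if the
# total score is at least 2 AND they answered "yes" to either cold question.

JACKSON_SYMPTOMS = [
    "sneezing", "nasal discharge", "nasal obstruction", "sore throat",
    "cough", "headache", "malaise", "chilliness",
]

def jackson_score(ratings: dict) -> int:
    """Sum the 0-3 severity ratings over the eight Jackson symptoms."""
    return sum(ratings.get(symptom, 0) for symptom in JACKSON_SYMPTOMS)

def eligible(ratings: dict, thinks_has_cold: bool, thinks_coming_down: bool) -> bool:
    """Enrollment rule: a yes to either cold question plus a Jackson score of 2+."""
    return (thinks_has_cold or thinks_coming_down) and jackson_score(ratings) >= 2

# A participant with a mild cough and sore throat who thinks they have a cold qualifies:
print(eligible({"cough": 1, "sore throat": 1}, thinks_has_cold=True, thinks_coming_down=False))  # True
```

Note how permissive the rule is: a total score of 2 out of a possible 24 is enough, which is part of why we can’t be sure the subjects really had colds.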

How was improvement measured? Were they really improved? We don’t know.

  • Twice a day they answered “Do you think you still have a cold?”
  • They answered a questionnaire rating 19 cold symptoms and functional impairment.
  • Biomarkers of immune response and inflammation were measured from nasal wash on day 1 and day 3: interleukin 8 and neutrophil counts. Are these biomarkers a reliable way to measure objective improvement in colds? I don’t know. It doesn’t matter anyway, since they didn’t change significantly.

Belief in Echinacea was assessed by asking whether they had ever taken Echinacea before and how effective they thought it was on a 100-point scale.

Blinding was adequate: on an exit interview, patients in groups 2 and 3 couldn’t tell which group they were in.

Results

Compared to those receiving no pill, those receiving any pill reported modest improvement regardless of the content of the pill. They reported that their illness was 0.16 to 0.69 days shorter and 8% to 17% less severe. But these results were not statistically significant! Rather than showing that placebos reduced the duration of common colds, the study showed that they had no statistically significant effect.

Contrary to expectations, open label was not superior to blinded Echinacea. (“I know I’m getting it” was no better than “I might be getting it.”) I found that intriguing.

Changes in biomarkers were not statistically significant.

Interestingly, four of the six assessed side effects were reported most frequently in the no-pill group. Headache was reported by 62% of those without pills, compared with less than 50% in the other three groups. Statistical analysis showed that this was not due to chance. Does this mean that not taking pills causes headaches? Or that Echinacea and dummy pills are effective headache remedies? 62% seems like an unusually high incidence of headaches; does that mean that people who enroll in studies are unusually susceptible to headaches? Even if it was “greater than chance,” I suspect that it just represents noise in the data.
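
The suspicion that a “greater than chance” finding among several tallied side effects can still be noise is easy to illustrate with a simulation. The sketch below is hypothetical (it uses made-up group sizes and rates, not the study’s data): when six side effects are each compared between two equal groups drawn from the same population, a “significant” difference at p < .05 turns up in at least one of them roughly a quarter of the time.

```python
import math
import random

random.seed(1)

def two_prop_pvalue(x1, n1, x2, n2):
    """Two-sided pooled z-test p-value for equality of two proportions."""
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    if se == 0:
        return 1.0
    z = abs(x1 / n1 - x2 / n2) / se
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))  # 2 * (1 - Phi(|z|))

# Made-up numbers: 1000 simulated studies, 6 side effects, 180 subjects per group,
# and an identical 50% true rate in both groups (so any "significance" is spurious).
TRIALS, SIDE_EFFECTS, N, TRUE_RATE = 1000, 6, 180, 0.5
trials_with_false_positive = 0
for _ in range(TRIALS):
    for _ in range(SIDE_EFFECTS):
        x1 = sum(random.random() < TRUE_RATE for _ in range(N))
        x2 = sum(random.random() < TRUE_RATE for _ in range(N))
        if two_prop_pvalue(x1, N, x2, N) < 0.05:  # spurious: the groups are identical
            trials_with_false_positive += 1
            break

print(f"{trials_with_false_positive / TRIALS:.0%} of simulated studies show a "
      f"'significant' side-effect difference that is pure noise")
```

With six independent null comparisons the expected rate is 1 − 0.95⁶ ≈ 26%, which is roughly what the simulation reports: screening several endpoints at once makes an occasional “real” difference almost inevitable.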

The most interesting finding was that those who believed in Echinacea did better regardless of which pill group they were in. Among the 120 participants who had rated Echinacea’s effectiveness as greater than 50 on a 100-point scale, illness duration was 2.58 days shorter in those given Echinacea or placebo than in those who got no pill, and mean global severity score was 26% lower, although that difference was not statistically significant.

A Further Question

In his book Snake Oil Science, R. Barker Bausell analyzed research showing that those who believed they got real acupuncture reported more relief than those who believed they got sham acupuncture, regardless of which they actually got. I wondered if this would be true for the Echinacea study as well, if those who believed they got Echinacea reported more improvement than those who believed they didn’t, regardless of whether they actually did or didn’t. I wrote the lead author to ask that question, and he replied

that data hasn’t [sic] been properly analyzed or presented.  Given small subsample sizes, confidant [sic] conclusions would likely be impossible.

I asked if it would be possible for someone to go back and look at the data and he answered:

It would take many dozens of hours to adequately address the question, and I’m afraid that the sample size is too small.  And resources too limited. I am advising several post-docs and leading several papers and this one just doesn’t merit the attention.  If you want to come to Madison for a month and can write up the background and methods section for the paper beforehand, I could probably get someone to do the data analysis and join you as a co-author.

I’m still curious, but I don’t want to know that badly!

Conclusion

The significant new finding of this study was that belief in Echinacea was more important than the content of the pill, regardless of whether subjects received Echinacea or placebo. To be more precise, it showed that subjects who thought they had a cold and who thought Echinacea was effective and who got either Echinacea or a placebo and who either knew they were getting Echinacea or thought they might be getting Echinacea were more likely to think their cold was gone sooner than if they got no pills.

Otherwise, the study only confirms some things we already knew. Patients report more subjective improvement with any pill than with no pill, and with any intervention compared to no intervention. Administering a placebo elicits self-reports of improvement. Echinacea is no more effective than placebo. The placebo phenomenon in colds is subjective and of such small magnitude that it can be considered not clinically important.

The authors said

Overall, this trial could be interpreted either as an appropriately powered trial that failed to conclusively show placebo effects, or as a trial suggesting small but perhaps meaningful effects related to expectation and pill-allocation.

Then they misrepresented their findings in the abstract when they said

Participants randomized to the no-pill group tended to have longer and more severe illnesses than those who received pills.

Yes, they “tended” to have (more correctly, to “report”) longer and more severe illnesses, but the tendency was not statistically significant. Why didn’t they follow the usual convention for scientific articles and say there was no significant difference between the groups?

No matter how you look at it, the news report was wrong: the study is interesting, but it didn’t show that the placebo effect reduced the duration of common colds and it didn’t show the power of mind over body. It did show the power of mind to put a spin on study findings.

Posted in: Clinical Trials, Herbs & Supplements, Science and Medicine


30 thoughts on “Belief in Echinacea”

  1. daijiyobu says:

    When I drive to work this morning, and the median has been planted by the city with purple cone-flower and black-eyed Susan,

    I know I’ll be thinking about just how damn disappointing echinacea has been for us all these years compared to commercial claims.

    And only getting moreso.

    Though I doubt sales will be dented all that much.

    The meme is planted [ouch!].

    -r.c.

  2. nybgrus says:

    This piece is actually rather apropos. Tomorrow morning the first year students have a lecture from a PharmD on CAM. It is a credulous lecture and uses NaturalStandard (the very same source Dr. Oz pulled on Dr. Novella) as a reference. And it has a section on glucosamine, claiming efficacy (which you commented on previously, Dr. Hall), as well as feverfew for migraines (claiming it works despite the fact that even Cochrane says it doesn’t), and citing the Cochrane review’s statement that “Echinacea purpurea aerial parts seem to be effective for early treatment of colds.”

    I wrote up a debunk with evidence and sources for her entire lecture and sent it to my 1st year counterpart, but I am considering attending the lecture myself. My question to you, Dr. Hall (and anyone else who might have an opinion), do you think it would be worthwhile to be a rabble rouser and question her evidence and conclusions during the lecture (in front of a couple hundred med students)? Or would that simply cause more problems than it is worth? Essentially, I’m trying to find ways to be a thorn in the side of the IM/CAM credulous side of my medical school without committing professional suicide (though I reckon I have some leeway since I have a very solid reputation with the top faculty here, but I’m worried that may just be more rope to hang myself with).

    Any thoughts anyone?

  3. BillyJoe says:

    I suggest you calmly ask factually based questions that make it obvious to the audience that she is talking absolute nonsense (without throwing that fact in her face).
    And then try not to feel sorry for her.

  4. woo-fu says:

    My 2 cents*:

    minimize confrontation–rather, respectfully seek explanation

    pick the most relevant, illustrative points

    ask questions based directly on the evidence
    (“How would you interpret the results of ____ study?”)

    make any comments elegant (short & sweet)

    farm out other questions to other students; etc.

    All I could think of at the moment, but I’m sure you probably knew this already. *2 cents just doesn’t get very much anymore.

  5. woo-fu says:

    @nybgrus (meant to direct the comment above to you)

    Everyone has the right to question. To do so calmly and reasonably only proves your dedication to knowledge and the pursuit of science. In short, this could set a good example for other students.

  6. cervantes says:

    “Tendency” or “tended to” is the accepted language for a result that goes in an expected direction but fails to reach the — arbitrary — .05 level of “statistical significance.” “No significant association” does not mean “no association,” it means you haven’t proved it to an — arbitrary — standard. So that criticism is misdirected.

    That said, it certainly doesn’t look like echinacea is useful.

  7. @cervantes:

    I always thought “not statistically significant” meant that the change is not different enough to be confidently considered a result of the intervention, and possibly was due to chance or natural variations in the individuals or other uncontrollable factors.

    People seem to point out that the p value used in any given study is arbitrary when the results of the study conflict with their beliefs, and don’t mention that it’s arbitrary when they agree with the outcome of the study. I am not sure if that’s what you were doing because I can’t tell your opinion on echinacea, but your response made me think…

    If the p value they are using is .05, then why not check the data against other p values? Say you do that. It would still be arbitrary, because you could just pick and choose which p value you wanted to use that made your results look significant. If that was an acceptable practice, then there would be no standardization to prevent people from messing with the statistics to show a statistically significant outcome when they wanted to have one.

    My question is, what does it matter if the p value of 0.05 is arbitrary? It seems like statistics can be abused pretty heavily when given the chance, so picking some arbitrary standard number still seems like the best option.

  8. rork says:

    .05 is a commonly used cutoff for p. It is not the p-value itself.
    You are supposed to explain the statistical test, and give the sample size and p-value and estimate the effect size, at minimum.
    Hall’s complaint is dead-on.

  9. Scott says:

    I’ve never been a fan of mentioning “tendencies” in abstracts. By definition you aren’t confident that they’re real, so they should not be presented as headline results. Talk about them in the discussion, sure. Consider what sample size would be necessary to confirm them if they are real, absolutely. But they don’t belong in the abstract.

  10. Harriet Hall says:

    nybgrus,

    If you want to be less confrontational, would it be possible to hand out copies of the debunk you wrote? That would have the additional value of giving them something to review at home and think more about.

  11. tmac57 says:

    Dr. Hall, I think you missed an important point in the anecdote about your friend. You say she had no colds with the Echinacea, while you had no colds without Echinacea. That shows that the amazing power of this herb is so strong that it even works when you don’t take it. I’m surprised that you didn’t put that together. Glad I could help.
    P.S. Homeopathic Echinacea is even more potent. I have not been taking that all my life, and I have only had a handful of colds (don’t worry, I always wash my hands).

  12. Josie says:

    Nybgrus,

    I find it helpful to take the attitude of asking questions that encourage a person to delineate their assertions explicitly. If those assertions are without merit the person just might feel a little silly explaining them out loud.

    This past weekend I was visited by some evangelical Christians. I listened, and then asked them why it was bad that Adam and Eve recognized the ‘beauty that god gave’ them (something they had asserted). It was because god said so.

    I returned to them that it sounded like a totalitarian government keeping its population ignorant and happy only with what the state had given them.

    I went on to point out that it is thanks to Eve’s temerity in nomming an apple that we now have all sorts of knowledge of our world that stems from curiosity…including the ability to evangelize with a lot more efficiency than if we were running around nekkid and uneducated.

    This approach might not get you a lot of effect up front, but it might instigate a little more critical thought after the fact.

    I know from having it work on me that it can be very powerful in growing out of irrational beliefs.

  13. cervantes says:

    “You are supposed to explain the statistical test, and give the sample size and p-value and estimate the effect size, at minimum.
    Hall’s complaint is dead-on.”

    How do you know they didn’t do that? I presume they did.

    “If the p value they are using is .05, then why not check the data against other p values? Say you do that. It would still be arbitrary, because you could just pick and choose which p value you wanted to use that made your results look significant. If that was an acceptable practice, then there would be no standardization to prevent people from messing with the statistics to show a statistically significant outcome when they wanted to have one.”

    I’m not sure what you are trying to say here — you can if you wish set a different a priori standard than .05, and people sometimes do. Or you may be referring to the problem of multiple comparisons. If you make 20 comparisons, when in fact none of them represent any real relationship at all, you will on average get one p value <.05 just by chance. That's called a Type 1 error. You can adjust for that problem when making multiple comparisons, but the best thing is to restrict your conclusions to hypotheses stated in advance, and represent other "significant" findings as intriguing new hypotheses.

    In the latter case, the .05 standard is even more questionable. There's no reason not to point out a non-significant trend, as long as the effect size is large enough to be meaningful and the p value isn't too large (say .06 or even .1), and say that it merits further investigation. That's done all the time.

    Again, so-called "non-significant" findings are not the same as ruling out a hypothesis. That would be called a Type 2 error. But investigators often write that way: "No association was found . . ." when in fact an association was found, it just didn't happen to be less than the (arbitrary) p<.05 standard.

    Both errors are still errors.
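
    The “one p value <.05 just by chance” arithmetic above is easy to check. Under a true null hypothesis a p value is uniformly distributed on [0, 1), so a quick simulation (hypothetical numbers, nothing to do with the Barrett study) reproduces both the expected count of false positives and the chance of getting at least one:

```python
import random

random.seed(0)

TRIALS, COMPARISONS, ALPHA = 10_000, 20, 0.05
total_false_positives = 0
trials_with_at_least_one = 0
for _ in range(TRIALS):
    # Under the null hypothesis, each p value is uniform on [0, 1).
    pvals = [random.random() for _ in range(COMPARISONS)]
    significant = sum(p < ALPHA for p in pvals)
    total_false_positives += significant
    trials_with_at_least_one += significant > 0

print(f"mean 'significant' results per {COMPARISONS} null comparisons: "
      f"{total_false_positives / TRIALS:.2f}")  # expect about 1.0
print(f"chance of at least one: {trials_with_at_least_one / TRIALS:.0%}")  # expect about 64%
```

    The second figure follows from 1 − 0.95²⁰ ≈ 64%: even when nothing real is going on, a study making 20 unplanned comparisons will usually find something “significant,” which is why conclusions should be restricted to hypotheses stated in advance.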

  14. Tell it like it is says:

    @ Harriet Hall

    Hi Dr Hall – I trust you are well.

    “It would take many dozens of hours to adequately address the question, and I’m afraid that the sample size is too small”

    On ‘sample size’: the ‘normal distribution equation’ can be readily pressed into service. The formula can be found in any stats book worth its salt.

    From the text, it looks like a ‘chi-squared’ test was carried out to derive the ‘null hypothesis’ to determine the ‘uncertainty’ associated with a given event (pill and placebo effectiveness); where: a probability of 0 indicates that there is zero chance that the pill/placebo had the sought effect; and a probability of 1 means that it is ‘certain’ that the pill/placebo brought about the sought effect.

    In the case of the four tests, a score close to zero implies that there is uncertainty.

    A good ‘ploy’ to bring clarity is to multiply the ‘null hypothesis’ value by 100 to convert it into a percentile and then look at BOTH sides of the probability spectrum.

    Taking an arbitrary value of (say) ‘0.2’ we derive 0.2 X 100 = 20% that the ‘thing’ under evaluation is 20% ‘likely’ to ‘have the effect stated’ (i.e. cure the common cold).

    To balance the books we take 100 – 20 = 80% – which confirms how much the ‘thing’ under evaluation’ is UNLIKELY to ‘have the stated effect’.

    We then calculate the ‘odds’. In the example the ‘unlikely’ outweighs the ‘likely’ by four to one (80/20).

    The next thing that can be done, is to evenly distribute the data about a ‘central point’ so as to remove ‘skew’ or ‘bias’, by determining the ‘median’ (middle value) and use the ‘median’ value instead of the ‘mean’ (average) to calculate SD, chi-square, et al.

    A third thing that can be done is to perform ‘modal’ (sameness) analysis.

    To illustrate, let’s say that women’s shoe sizes range from size 4 to size 8 and it is determined that the ‘average’ (mean) shoe-size is size 5.5 and the median (middle value) is size six.

    If you were a purveyor of women’s shoes, knowing the ‘average’ wouldn’t tell you much; knowing the median wouldn’t help much either. Whereas, knowing the ‘modal distribution’ will reveal the ‘percentile’ of women from the entire target group who take any specific shoe-size (i.e. have a positive experience from a given pill type).

    Now we have the ‘modal distribution’, we can perform a ‘Pareto Analysis’ to isolate the ‘significant few’ from the ‘insignificant many’ by sorting the modal data from high to low, dividing each mode by the total population, and displaying the results as a graph.

    By doing this, we see the ‘modal distribution’ and also determine the ‘proportions per mode’. This will reveal ‘exactly’ what sizes (pills) you should stock in your shop to maximise sales (cures) – and in what proportions.

    To illustrate, you may determine (as an hypothetical example) that the ‘Pareto Distribution’ of women’s shoe-size equates to one size four to two size fives, to 3 size sixes, to 4 size sevens, to 2 size eights – and that, for a given ‘demand’ of shoe style (pills and potions) you should therefore stock accordingly in the ‘proportions’ stated.

    Finally (but not exhaustively) you can apply Bayes Theorem and determine the ‘probability’ (likelihood) of effectiveness on each and every state under evaluation – from ‘no pill’ through ‘placebo’ and onto ‘trial cure’, whilst eliminating ‘false positives’.

    Given access to the raw data, all evaluations take under an hour to carry out in an Excel spreadsheet, which negates the statement “And resources too limited.”

    On the statement “I am advising several post-docs and leading several papers and this one just doesn’t merit the attention.”

    Whilst this study pretty much sums up the amount of time-wasting that took place with ‘limited resource’ – which includes wasting: manpower; funding; and energy; and tying up access to vital equipment as well as consuming the time of all of those taking part, I believe this ‘does’ merit attention for a variety of reasons – which include but are not limited to:

    1 Purpose of the experiment (i.e. what are we trying to find out)
    2 Merit in conducting the experiment (was the conclusion already known? If so, why are we doing it? and did the experiment contribute to mankind’s knowledge?)
    3 Robustness of the experiment (subjective evaluation: which includes using subjective assessment and using ordinal numbers to ‘score’ the ‘benefit’)
    4 Effectiveness of the data ‘evaluation’ (was the evaluation robust – see above)
    5 Effectiveness of the trial (i.e. what was determined?)
    6 Conclusion of results (did the experiment reveal what was being asked? Or reveal something else just as valuable? or leave doubt? or drive further study?)

    “The study is interesting, but it didn’t show that the placebo effect reduced the duration of common colds and it didn’t show the power of mind over body.”

    The data provided shows that placebos are a total waste of time so I would beg to differ on “it didn’t show that the placebo effect reduced the duration of common colds” because the values given imply that the placebo effect did NOT reduce the duration of common colds – nor did the ‘cure’.

    “It did show the power of mind to put a spin on study findings” Well said.

    And finally:

    A ‘magician’ should not perform the same trick twice for the same audience. Yet how many times do we see artless ‘magicians’ (lol) perform the ‘Ambitious card’ routine over and over and over – déjà vu – déjà vu – déjà vu (sometimes it takes four).

    Each and every time, just before the ‘magician’ brings the ‘marks’ card back to the top of the deck for the umpteenth time, the ‘magician’ utters a ‘magic’ word such as ‘avrakadavra’; or performs a ‘magic gesture’ such as a wave of a ‘magic wand’ (AKA painted wooden stick) over the cards; or the ‘magician’ ‘flicks’ their finger and thumb to make a staccato ‘snapping’ sound.

    With the ‘sleight’ already performed, the cards remain motionless – and yet – the ‘magic word or gesture’ conveys to the gullible audience that ‘magic’ took place at that point – reinforced by the 12th kick in the teeth that the signed card is once again back on top of the deck; that the signed card is once again back on top of the deck; that the signed card is once again back on top of the deck.

    The ‘magic word or gesture’ is placebo – and it is practised by jujus and ‘medicine doctors’ all over the world.

    Have a good week.

  15. nybgrus says:

    Thanks for the feedback all. I did pretty much know those things, but it is nice to have some seasoned and reasoned advice prior to jumping into this to keep me centered. After all, I do get rather passionate about this sort of thing, and yelling out that she is a blithering idiot for trusting natural standard without actually thinking about the biology of what she proposes (or calling her out for being a PharmD that doesn’t know what pharmacognosy is) would probably not be the best tack to take.

    I think I’ll home in on the Echinacea and glucosamine since those are likely the easiest (and the glucosamine is a few slides long), and I will ask her how it is that herbal preps are CAM and not pharmacognosy.

    As for handing out the debunk in printed form, I don’t think I can do that simply for logistical reasons. I wrote it as a private email for my 1st year friend so it isn’t, shall we say, “polished” enough. But even then, it is a class of a few hundred, so it would be very costly and time consuming to print it and I just wouldn’t have time before the lecture. But that is something I will keep in mind for the next CAM lecture – thanks for the idea Dr. Hall!

  16. Harriet Hall says:

    @nybgrus,

    Is there any chance of getting a list of student e-mail addresses and sending them debunking information that way? If there’s no roster in place, could you offer a sign-up sheet at the lecture?

  17. nybgrus says:

    There is some potential there, Dr. Hall. The students have a facebook group and I have recently been active on it – once in regards to an elective placement sponsored by the school in China where you get to learn TCM along with real medicine, and another time when a student posted a study (via medscape) that claimed fish oil would help your average medical student with no other health problems. In the former, it became a 60-comment thread with myself and a couple of my 1st year friends offering evidence based debunks of TCM; in the latter, I read the article because some were quibbling over a potential COI and I discovered the results did not support the conclusion, so I commented on it.

    In both cases people thought I was “quick to condemn” and I even got the “medicine is more an art than a science” line. One student in particular thought I was an a$$hole, even though my tone was purely neutral and I was referencing everything I said. I asked him if he would meet me in person and he agreed. He showed up with his friends deriding me and telling him to, essentially, “stick it to me.” We spoke for 2.5 hours and at the end he was completely turned around and thanked me for opening his eyes. Apparently, at a house party over the weekend one of my friends in 1st year ran into him and I got a text saying he was “singing my praises for 10 solid minutes.”

    We (myself, my 1st year friends, and now this shruggie turned activist) have been contemplating running our own CAM symposium for the student body – one where we would simply provide the history of various CAMs, contrast CAM with pharmacognosy, and have students evaluate for themselves without us leading a CAM-bash session. Perhaps I could write something more appropriate and then use my 1st year contacts to disseminate it, using my new convert as credibility, and segue that into our (envisioned) symposium.

    It is tough to make the proper impression on people, especially on a facebook group where people thought I was harshing on their mellow (so to speak), by offering reasoned criticism of some of the claims. That is why I am a bit reticent to spearhead this myself, since I don’t want a gut reaction and people to write me off as being that a$$hole critic. But as I said, perhaps through my 1st year contacts we could get something going. Thanks for the ideas Dr. Hall!

  18. geack says:

    @nybgrus,

    Your “CAM History Symposium” sounds like a very good idea. I’ve found that a high percentage of casual CAM followers and “shruggies” have no idea how shaky the foundations are under a lot of CAM practices. People tend to think of CAM as the collected folk wisdom of humanity, and they are often astonished to learn that things like homeopathy and chiropractic are traceable to specific people and places. Knowing the truth tends to cut through a lot of the “science doesn’t know everything” static: instead of seeing “limited western dogmatic science vs. the mysteries of the universe”, they see “two hundred years of science vs. some guy who made this up in 1830”. It can be a huge push toward reason. Frankly, I think History of CAM 101 should be a required course in med school – here’s hoping your symposium can be a first step.

  19. pmoran says:

    Tiny quibble: “Statistical analysis showed that this was not due to chance.”

    I suspect you meant something less strong, e.g. “not likely to be due to chance.”

    My first thought on the shortest illness duration being among those who believed in echinacea and knew they were getting it was that these subjects would be wanting to prove a point. But I suppose there may be a contribution from placebo responses, even in this condition.

    It is so hard to pin down true placebo responses experimentally, and I can understand why some prefer to downplay them altogether.

  20. pmoran says:

    Nybgrus: “That is why I am a bit reticent to spearhead this myself, since I don’t want a gut reaction and people to write me off as being that a$$hole critic.”

    Excellent insight.

    I think the use of proper scientific language will help in your setting.

    For example even without looking I am sure Cochrane does not say “feverfew doesn’t work”, as implied in one of your earlier posts. It will say something like “we could find no convincing evidence that it works other than as placebo”.

    There is no reason why an accurate scientific statement cannot be made in almost any context. It is when the language gets too dogmatic, and loaded with subtext, that listeners may diagnose an extremist crusader and start switching off.

    If you can find good evidence of harmful effects from something “of disputed efficacy” there is also no reason not to point that out.

  21. nybgrus says:

    Geack: that is exactly what dawned on that 1st year I spoke of.

    I am sitting in on the lecture right now and I figured I would write about it in real time to give people an idea of what a CAM lecture is like in med school (since it has been asked a few times on this blog).

    The lecturer is giving a lot of simple facts about the lack of regulation, but is also slipping into a lot of the same stuff we keep harping on here. She calls vitamins and minerals “complementary medicine” with a “strong evidence base” and says that things like Saw Palmetto work because of synergistic effects of the other chemicals in it and that it is “simply impossible” to separate that out and that “we’ve tried and failed through science.”

    And she just said that “flower remedies” are not homeopathic because homeopathic remedies have a “theory” behind them and someone has “thought of reasons for its application” whereas flower remedies are just willy nilly made up. I fail to see the distinction.

    Now she is going through “high,” “medium,” and “low” levels of “scientific” evidence and says that CMs don’t have to meet this evidence hierarchy since they “have their own type of evidence” called “traditional evidence,” which has only “medium” and “low” levels. She describes them as essentially being anecdote = good enough. There is the added bit that if a CM is offered by any foreign government, that automatically means “medium” evidence on the “traditional” evidence scale.

Now she is claiming that CMs are just “listed” since they can’t be patented and therefore there is no impetus for actually registering them and checking their efficacy (cost:benefit). This is a way to make sure that CMs can still make it out to the public and people can use them based on the “evidence.” Where do we get this evidence? NaturalStandard, of course.

    My 1st year counterpart just pointed out that the “evidence” that NaturalStandard uses (which the lecturer says is the best source of info for evidence of efficacy and safety of CMs) is, well, lacking. From the website:

    “Hand searches are conducted of 20 additional journals (not indexed in common databases), and of bibliographies from 50 selected secondary references. No restrictions are placed on language or quality of publications. Researchers in the field of complementary and alternative medicine (CAM) are consulted for access to additional references or ongoing research.”

She says they “are excellent,” get updated monthly, are rigorous, are available through the library, and are “really worth a look” and a “fantastic resource.” She also says Medline Plus is a “pretty good” free website if you can’t get access to the pay sites (NaturalStandard). “Best bet is always the NaturalStandard.”

My 1st year counterpart just asked a question about glucosamine, and she said that “there is plenty of evidence out there” to support it, and that it is most likely the sulfate salts, not necessarily the glucosamine itself. She continued to say that researchers are “quite convinced” that glucosamine works. He pushed on a bit, asking why NaturalStandard is good enough, and she said “because the NPS said so.” And now she is going on to say that it isn’t recommended as the only treatment but as an adjunct, and that “it may not work for everyone” but “since your body makes heaps of it anyways, why not take it and give it a try?”

Moving on to feverfew, she says she is “quite convinced” that it works well and that it can very much help migraine. She quotes NaturalStandard, which gives it an “A,” but Cochrane disagrees. She then says that while you would recommend glucosamine, you probably wouldn’t recommend feverfew, even though they both get an “A” from NS.

Now St. John’s Wort. She says the fact that it works as well as SSRIs and TCAs for mild-to-moderate depression is “powerful” and this is something you can “really recommend” to people. Once again an “A” from NS. She also comments that you can go to your local chemist or herbal shop and grab it right off the shelf and “that’s great!”

    Now she is dispelling the naturalistic fallacy – saying outright that natural =/= safe. She says that it may be “natural” to the plant, but not to us.

    She is now commenting again on St. John’s Wort and saying that it is great if you are “otherwise fit and healthy” but that it has “lots of interactions” and you should watch out for that.

    Now onto things that have no evidence, contrasting that with the 3 prior examples which have “pretty decent evidence that there is something there that makes us think there could be use.”

Now she is saying that it is difficult if not impossible to actually get evidence for CMs, since there are no patent rights to motivate the research, and because CM is all about “the holistic” approach and not just “a single pharmaceutical” that can be tested in an RCT.

    And finally closes with an example of why you don’t need an RCT for evidence and calls it an “artificial method” and claims that “thousands of years of traditional use” is the “natural” way to find evidence. And even if “there is no RCT to prove it works, if thousands of people over thousands of years believe it works, maybe it does. You should keep an open mind.”

Sorry – one more section on placebo, saying it is a “powerful” effect and that its use in RCTs can be muddled, claiming that “the effect of antidepressants may generally be overestimated and their placebo effects may be underestimated” because the placebo used is not an active placebo, just a sugar pill.

    My 1st year counterpart just called out the placebo question citing the NEJM article we have recently dissected here. She says, “it depends on what you are measuring and that pain reduction is pain reduction” and that she doesn’t know anything more about it. And closes by saying she would rather use acupuncture than an NSAID for her pain, since the NSAID will tear up your gut. And that just because a CM is listed and not registered doesn’t mean it doesn’t work, and to use a reputable source, like NaturalStandard.

    Time to go up to the podium

  22. tmac57 says:

nybgrus – Thanks. That was like a ‘greatest hits’ of everything that this site has been trying to bring attention to about CAM. It’s no wonder that CAM has risen to the unwarranted heights that it has achieved, when it is being presented in such a credulous manner to med students.

  23. nybgrus says:

    tmac57: thanks! I was hoping someone would get something out of it.

As an addendum, my 1st year counterpart (who may make an appearance in the comments here at some point) and I chatted with the lecturer for an hour after. She is actually far less credulous in deeper discussions than she came off as during the lecture. I think she is simply not quite as rigorous as many of us here are, and does find some validity to the argumentum ad populum and the “ancient wisdom” fallacy. I think she was receptive to much of our commentary and definitely was surprised at some of the medical anthropology I discussed with her (including the history and current use of acupuncture in China). All in all I got the impression that she was following the standard nomenclature put forth by CAM apologists and didn’t hold a firm stance on it herself, so it became rather wishy-washy. I called her out on this, and after some discussion, she commented that we should get ourselves onto the professional boards that make recommendations, since she doesn’t get that sort of reasoned, firm stance on these topics.

    I have told my 1st year counterpart to feel free to comment and add his own opinions and to contradict me in any way he feels necessary, including (but not limited to) bias in my live-blog commentary.

    @pmoran: You are correct of course, and I do know what the Cochrane review actually says. I was silent during the whole discourse, leaving said 1st year to handle asking the questions. He did quite well, I think, and stayed well within the science. There is no reason to fight bias with bias, especially when the evidence is actually on our side of the argument.

  24. TsuDhoNimh says:

    ARRGH! The classic pre-antibiotic use of echinacea was not for preventing the common cold. It does jack-diddly-squat for colds! It was part of the therapy for pneumonia, along with goldenseal or barberry and other anti-bacterials.

    It stimulates phagocytosis, which makes it useful for shortening the post-fever stage where you are coughing out the debris of virus-damaged cells, and for minimizing the chances of an opportunistic infection settling in.

It shows in vivo and in vitro stimulation – I finally found the abstract, or PubMed finally got around to indexing it:

    http://www.ncbi.nlm.nih.gov/pubmed/3370076

  25. Brian83 says:

Good morning, all. I am Nybgrus’s first-year med counterpart that he references in his previous two posts. Here are my impressions of my first CAM-centered lecture at medical school (it is worth noting that the subject has come up other times in small-group sessions, but this is the first lecture we’ve been subjected to on it). I’ll try to avoid repetition with what was posted previously.

Overall, the lecture was less CAM-pandering than I expected from the slides that were made available beforehand, but it ultimately failed to meet its theoretical objective (it was supposedly titled “Evaluation of Natural Health Remedies,” but the actual presentation was titled “Complementary Medicines – do they work?” Unfortunately, my hope that the next slide would simply say “NO” went unfulfilled).

One of my biggest criticisms of the lecture stems from the use of the Natural Standard as the go-to database for information on CAM modalities. She does mention the Natural Medicines Comprehensive Database and Medline Plus as alternative options (eh), but continues to circle back to Natural Standard. I questioned her specifically about the strength of the evidence it references, and pointed out that it appears to cherry-pick trials for its “evidence,” but she justifies it by saying that the NPS (National Prescribing Service) here in Australia recommends it, so it must be a reasonable source. From my reading, it appears that Natural Standard simply requires ANY positive evidence to give a rating of “A – Strong Positive Scientific Evidence.” Specifically, they require:
    “Statistically significant evidence of benefit from >2 properly randomized trials (RCTs), OR evidence from one properly conducted RCT AND one properly conducted meta-analysis, OR evidence from multiple RCTs with a clear majority of the properly conducted trials showing statistically significant evidence of benefit AND with supporting evidence in basic science, animal studies, or theory.”

    They never define significance, and offer no balance in cases where a preponderance of the evidence is negative or equivocal. She doesn’t seem to view this as a problem – I disagree. The issue of the editor’s biases (being the editor-in-chief of the official publication of the American Association of Naturopathic Physicians) wasn’t broached during the lecture, but is also significant, as is the issue of their research methodology that I shared with nybgrus and he posted previously (I did enjoy the fact that “no restrictions are placed… on quality of publication.” Amazing, I’m so glad that is considered a good source of information by our lecturer AND by the NPS.)

After the lecture, we discussed the Natural Standard issue further. She elaborated that she was part of a group that evaluated the evidence available on 12 “commonly used CM therapies” and gave that information to the NPS for their evaluation of databases. She said that Natural Standard didn’t exactly match what they were finding in their review and was perhaps overzealous in its recommendations, but overall felt that “it is the best resource available.” I did press the issue further, since she was admitting that it’s not truly accurate, but she said that while it doesn’t mesh with their lit review, it’s sorta close, so it’s a good source. Clearly, I hold different standards of evidence than she does.

She did go on to say that CM is unique because it treats in a holistic manner, etc., so RCTs cannot be relied upon to evaluate it. Of course, if there is a theoretical endpoint to the therapy, it is testable, but readers here already know that.

She closed by talking about the power of placebos, that “30% of people get a strong positive response with just a placebo”. On the positive side, she explained active placebos, but on the negative she concluded that the placebo effect is powerful, as evidenced by the studies which suggest that antidepressants don’t work much better than placebos. Of course, that doesn’t address the general questions surrounding the use of psychoactive drugs and whether the chemical imbalance model is even accurate, as discussed by Harriet Hall here on SBM. The presenter’s claims don’t mean that placebos work; they just mean that antidepressants may not work as well as advertised. I think this is lost on most of the students.

I did call out the placebo issue, using the recent NEJM article discussed here as an example, to raise the issue of subjective vs objective change. I agree with the writers at SBM that subjective change alone is not adequate grounds for treatment – I pointed out that in patients with distinct disease, subjective improvement alone is not adequate, and that we ought to be concerned with making patients ACTUALLY better as opposed to just feeling better. Nybgrus captured her response, that she would prefer a placebo to an NSAID that might cause GI symptoms, and she ignored the issue of actual, measurable improvement in patients. Which is pretty much what I expected, because there is no answer for it.

    Hopefully this gives a little more insight into what CAM lectures are like in medical school. I appreciate any feedback the community has, and look forward to participating here in the future.

  26. Tell it like it is says:

    A week or so ago I presented ‘The General Custer puzzle’.

    Now some might have thought at the time “What has the ‘The General Custer puzzle’ got to do with the topic?” And some reading this might also think “What has ‘The General Custer puzzle’ got to do with ‘this’ topic?”

    The answer to both questions is?

    LOTS

    The challenge the puzzle offers encourages the contestant to seek out VALIDITY – what follows from what – using ‘reasoning’ – the basis of finding out ‘what is so’.

    Generally we do this by applying what we already know or believe – a word here which means ‘think we know’ – to persuade others that something is ‘so’ by providing believable ‘reasons’. We do this by offering ‘premises’ from which we draw ‘conclusions’. But what we ‘deem’ as being ‘so’ – aint necessarily so.

    When Dr Hall posted her postulations on the paper on the use of Echinacea to both prevent ‘and’ cure the common cold (a true ‘golden panacea’ – not!), the discussion that ensued was brought into sharp focus by the astute correspondent TsuDhoNimh, who wrote: “It does jack-diddly-squat for colds! It was part of the therapy for pneumonia.”

    This very powerful statement completely destroyed the validity of the entire experiment and castigated any published findings in one fell swoop. But the paper is out there in the public domain!

    The paper, which should perhaps be titled ‘Eat poo – a million flies can’t be wrong!’ has joined the ranks of a myriad of invalid articles – each supportive of the other – and cross-linking them all puts more wood on the fire to build a bigger ‘fire of deceit’.

    The doctor who wished to directly broach the contentiousness was left with both a moral and an ethical dilemma – and he sought opinion as to whether he should be ‘harshing on their mellow’ or whether he should take a ‘softly-softly’ approach?

    Both take guts – but the better approach is not to be confrontational – but rather – in my view – take the more effective approach to EDUCATE by using PROVENANCE to provide irrefutable ‘proof of origin’ (as used to establish the validity of a work of art for example) to conclusively establish – to use the words put forward by geack – that (quote) “homeopathy and chiropractic are traceable to specific people and places.” (unquote) so that the uninformed can clearly see (quote) “two hundred years of science vs. some guy who made this up in 1830″ (unquote).

    Doing this, plus the excellent suggestion to include the ‘history of alternative beliefs’ into the Med School foundation curriculum, for the most part, should dispel ignorance.

    So what should we do to highlight the errors in rogue articles? Simple: link the provenance to the articles – which in turn will drill its way deep into the heart of the bunkum – and in so doing, debunk the perpetrators of hogwash and codswallop, and leave the carrion for the crows.

It’s Friday, it’s a glorious day, and I trust you are all looking forward to a nice weekend. So, to get it off to a flying start, here is ‘The General Custer puzzle’ – complete with its solution.

    Have a great weekend.

    TILLIS

    General Custer is in a dangerous country and he arrives at the entrance point of two ‘gulches’.

    Down one of the gulches vast hordes of the enemy await – bows and arrows, fire, tomahawks, and scalp-knives at the ready.

    Down the other gulch is a clear path that, if taken, Custer’s brave boys can emerge from the other side of this gulch, and enter the other gulch from the other end so as to form a ‘pincer attack’ from two fronts to aid defeat of their foe.

    At the junction sit two Indians. One will ALWAYS LIE. The other will ALWAYS TELL THE TRUTH.

    The General is aware of this situation. He does not know who will speak true; and he does not know who will speak with forked tongue. All he knows is that he may only ask ONE Indian ONE question to establish which gulch to take.

    What is the question General Custer must ask to determine the CORRECT gulch in which to lead his men?

The wording of this question is ONE of the methods that reveal LIARS and CHEATS. Custer failed to ask it (he was misled on two counts) – and it cost him and his company dearly. Can you?

    The answer uses ‘inductive deduction’ – the technique used by Sir Arthur Conan Doyle’s fictional character Sherlock Holmes*.

    One small step for man:

    1 Tom videoed flash-flooding on her cell-phone.
    2 Tom used her cell-phone to video the flash-flooding.
    3 Tom has nicotine-stained fingers so Tom is a smoker.

From the wording of statement ‘1’, we ‘deduce’ flash-flooding occurred somewhere on Tom’s cell-phone, and so, to make the deduction ‘deductively valid’, through ‘reasoning’ to ‘convince ourselves’, we may ‘conclude’ that Tom used another medium to video the flash-flood event that took place on her cell-phone.

    From the wording of statement ‘2’, we ‘deduce’ Tom used her cell-phone to capture evidence of flash-flooding, so to make that deduction ‘deductively valid’ we may ‘reason’ that Tom was in a vicinity where flash-flooding took place – but she may have been videoing something off the TV or Internet – so the statement is not ‘deductively valid’.

    From the wording of statement ‘3’, we cannot ‘deduce’ that Tom is a smoker. She may ‘roll’ cigarettes for her smoker friend who is incapable of rolling her own; hence, the statement is not ‘deductively valid’. So, to validate the statement, we must use ‘induction’ to draw conclusions based upon ‘inductive validity’ – something that both doctors and detectives practice throughout their lives – and something that must be taught*.

    And now – show your love for the answer to the puzzle.

    TA DAAAAAH!

    “If I asked your friend to point to which gulch to avoid, which gulch would they point to?”

    The question can be asked of ‘either’ party and the answer given by either party will reveal which gulch the general can safely send half of his men.

    Or does it? Work through the logic and see.
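If you would rather grind through the logic mechanically, here is a minimal brute-force check in Python (a sketch; the “safe”/“ambush” labels and the function name are mine, not from the puzzle). It enumerates both cases, asking the truth-teller and asking the liar, and computes the answer each gives to the question exactly as worded:

```python
# Sketch of the two-Indians puzzle. Labels are assumptions of mine:
# the gulches are "safe" and "ambush", and the question asked is
# "If I asked your friend to point to which gulch to AVOID,
#  which gulch would they point to?"

OTHER = {"safe": "ambush", "ambush": "safe"}

def says(is_liar, true_answer):
    """What an Indian says when the factually true answer is true_answer."""
    return OTHER[true_answer] if is_liar else true_answer

for asked_is_liar in (False, True):
    # In truth, the gulch to avoid is the ambush gulch.
    friend_would_point = says(not asked_is_liar, "ambush")
    # The asked Indian reports (truthfully or falsely) the friend's answer.
    answer = says(asked_is_liar, friend_would_point)
    who = "liar" if asked_is_liar else "truth-teller"
    print(f"asking the {who}: points to the {answer} gulch")
```

Both cases print “safe”: the double negation in “which gulch to avoid” means the liar’s inversion and the truthful report land on the same gulch, so whichever Indian is asked, the general can take the gulch pointed to.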

    * A large step for mankind:

    A publication called ‘Sherlock Holmes and Philosophy’ by Professor Josef Steiff is to be published this coming fall – around October. This publication will assist people with making more considered value-judgments by acquiring and mastering the skill of ‘deductive logic’ and ‘inductive validity’ in a really fun way.

    Go see the new Sherlock Holmes film too.

    Professor Josef Steiff’s provenance? He is Associate Chair of the Film & Video Department at Columbia College, Chicago, and the author of ‘The Complete Idiot’s Guide to Independent Filmmaking’.

  27. woo-fu says:

    @nybgrus–Did I read something about a blog of yours? Do you focus on medical issues, or does it have a broader scope?

    @Brian83–Welcome! I’m impressed that you were able to at least get as far as you did in addressing and querying the presenter. Looks like there’s some common ground there. If she has a curious mind, she’ll have to follow the evidence.

@TILLIS – If you don’t already have one, you really should think about collecting these lessons and references together as a critical thinking blog. You already include activities and lessons in your comments here that would translate well for that function. List all the essential and simply favorite resources as a permanent guide.

Then, if a discussion here has direct relevance to a post, you could make your comment and leave a link to your blog. If you can make it mostly SFW, teachers would have a comprehensive resource.

    Sometimes, what seems to happen here is that you catch hold of an error in critical thinking and then you are prompted to use that as a “teachable moment.” For general audiences, that might be a fun diversion, but for the audience here, a tighter focus on the post’s specific concerns seems the preferred mode.

    I say this not as a criticism, but as a recommendation and something you can really work with. I also say this as someone who tends to forget the economy of words when I post online, losing interest on the part of casual readers who might otherwise have agreed with my point.

  28. nybgrus says:

    @woo-fu:

Some have been much too kind and suggested I start one. I have a blog, with my girlfriend, but it is a food blog showcasing our cooking and recipes. I am toying with the idea of starting my own blog which would focus on medicine and science, but also tell stories about my own trip towards critical thinking and being a scientist, and perhaps a few other random tidbits. I am not sure yet, and have not yet committed myself to the idea of writing a blog, especially considering that in 3 months’ time I will be studying for my medical boards in earnest and moving back to the US for good to complete my medical education, and thus will have a few months of very patchy blogging at best. If you are keen on the food blog I’ll toss you a link, but it’s all culinary.

  29. woo-fu says:

    @nybgrus

    I love cooking when I can, so please send the link. Thx!

  30. nybgrus says:

    Well, I hope you enjoy the recipes then. :-)

    Almost all the main dishes are mine, almost all desserts are the GF’s, though there is some crossover both ways.

Comments are closed.