Does Glucosamine Really Work?

Glucosamine and chondroitin, used separately or together, are among the more popular diet supplements. They are used widely for osteoarthritis, especially of the knee, and have been better studied than most other diet supplements. But do they really work?

The journal of my medical specialty, American Family Physician, recently published an article about the use of dietary supplements in osteoarthritis. They gave a “B” evidence rating to both glucosamine and chondroitin. This means there is inconsistent or limited-quality patient-oriented evidence. They recommended the use of glucosamine sulfate, saying, “Overall, the evidence supports the use of glucosamine sulfate for modestly reducing osteoarthritis symptoms and possibly slowing disease progression.” They did not exactly recommend chondroitin, although they said it “may provide modest benefit for some patients.”

I remain skeptical. And so does R. Barker Bausell, who devoted several pages of his book Snake Oil Science to an analysis of the research on glucosamine and chondroitin.

The American Family Physician recommendations were based largely on the results of meta-analyses. Meta-analyses are only as good as the studies they evaluate. Ioannidis has shown that published research findings are more likely to be false than true. It has been suggested that the positive results came from trials of the Dona brand of glucosamine sulfate, produced by the Rotta company, while the negative studies were done on other brands. But Bausell points out that trials involving the Dona brand were primarily older trials in non-English-speaking countries, where the percentage of positive studies tends to be higher. Those trials were superseded by better-quality trials in the US, Canada, and the UK that were all negative.

An intriguing 2005 study looked at patients who were already taking glucosamine for knee osteoarthritis and who had experienced at least moderate relief of pain. They left half of the patients on the glucosamine and switched half to placebo. There was no difference in outcome.

The NEJM study in 2006 was the best-designed study yet. It deserves particular attention because it was reported in the press as both positive and negative! They originally hoped to look for an objective endpoint, joint-space narrowing, but they ended up measuring subjective pain relief instead. They tested 5 groups of patients: glucosamine, chondroitin, glucosamine and chondroitin together, a nonsteroidal anti-inflammatory drug (NSAID, celecoxib), or a placebo. The NSAID worked fastest and best, but not significantly so. None of the drugs worked significantly better than placebo. The placebo response rate was very high, which has been used to question the results, but it might simply reflect a naturally high placebo response in a condition whose severity is known to fluctuate over time.

The study was clearly negative, but when they broke the data from each of the 5 groups into 2 subgroups of patients with mild-to-moderate vs moderate-to-severe pain, they found one subgroup that showed significance: the group with moderate-to-severe pain that took both glucosamine and chondroitin. With that many subgroup comparisons, you might well expect one to come up falsely positive just by chance. Besides that, the results don’t make sense. I don’t know of any other treatment that is effective for more severe pain but not for lesser pain. And in previous studies, the combination of glucosamine and chondroitin had been no more effective than either supplement by itself. The authors themselves pointed out that their study was not designed to differentiate between those subgroups, so no clinical recommendations could be made on the basis of this finding.
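
To put a rough number on that intuition, here is a back-of-the-envelope sketch (my own illustration, not part of the trial's analysis), assuming ten independent subgroup comparisons each tested at the 0.05 level:

    # Ten independent subgroup tests, each at the conventional 0.05 threshold
    # (an assumption for illustration; the trial's subgroups are not fully independent).
    alpha = 0.05
    n_subgroups = 10

    expected_false_positives = n_subgroups * alpha        # 0.5 on average
    prob_at_least_one = 1 - (1 - alpha) ** n_subgroups    # about 0.40

    print(f"Expected spurious 'significant' subgroups: {expected_false_positives:.1f}")
    print(f"Chance of at least one: {prob_at_least_one:.0%}")

In other words, even if every treatment were pure placebo, there would be roughly a 40 percent chance that at least one of ten subgroups would look "significant" at that threshold.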

Wallace Sampson, one of the other authors of this blog, has pointed out that the amount of glucosamine in the typical supplement dose is on the order of 1/1000th or 1/10,000th of the available glucosamine in the body, most of which is produced by the body itself. He says, “Glucosamine is not an essential nutrient like a vitamin or an essential amino acid, for which small amounts make a large difference. How much difference could that small additional amount make? If glucosamine or chondroitin worked, this would be a medical first and worthy of a Nobel. It probably cannot work.”

The existing evidence is compatible with the hypothesis that glucosamine and chondroitin work no better than placebo, and that the trials that seem to show otherwise are flawed for various reasons.

Critics will say that the evidence for glucosamine and chondroitin is as good as the evidence for many pharmaceuticals. That may be. Some pharmaceuticals may actually be no better than placebo. Rather than an argument for using glucosamine and chondroitin, this is an indication that we need to look as carefully at the evidence for pharmaceuticals as Bausell has looked at the evidence for glucosamine and chondroitin.  When we look at any research, we should remember all the possible sources of bias in published results, and we should remember that meta-analyses are only as good as the studies they are analyzing.

Posted in: Clinical Trials, Herbs & Supplements

52 thoughts on “Does Glucosamine Really Work?”

  1. Wicked Lad says:

    Thank you, Dr. Hall, for elucidating the current knowledge of the effectiveness of chondroitin and glucosamine. I started taking both for shoulder pain a few years ago. When I read about the NEJM article earlier this year, I stopped taking the supplements. The pain has not returned, even though I continue heavy weightlifting.

  2. PalMD says:

    I have dozens of patients taking these supplements, and I don’t think anything I say will discourage most of them.

  3. kathleen says:

    I took glucosamine for a long time. It is hard to stop taking something that ‘everyone’ is telling you will help, particularly if you think your pain might worsen. It was only when I realised how much money I was spending on this every month that I began to question whether there was any evidence for efficacy. I couldn’t find much and so I stopped taking it. Now sometimes I have pain and sometimes I don’t but glucosamine certainly doesn’t make any difference.
    PalMD – is the cost of these supplements a problem for at least some of your patients? Perhaps pointing out how much they are spending on them plus a little lecture on the ‘latest findings’ might discourage some of them?

  4. Simon says:

    Even we vets are prescribing them now. A big problem is that so many doctors/vets do believe there is clinical evidence for them, so to dissuade our clients is to undermine our colleagues. I’m usually happy to do that if it is flagrant snake oil, but the jury is still in session on this one, so I don’t want to create dissent in the profession. I’d rather just advise clients that the facts aren’t all in.

    And so begins my slippery slope…

  5. Roy Niles says:

    From where you stand, does that slope go down or up?

  6. apteryx says:

    The primary outcome measure of the NEJM study was a 20% decrease in the WOMAC pain subscale — which is small in comparison to the criteria recommended by OMERACT and OARSI (cited in this paper) of 50% improvement in pain or function or a more complicated combination of lesser improvements in multiple scales. 78% of the included patients had only mild knee pain. This undoubtedly explains the gigantic placebo response rate. A 20% improvement in pain that is already mild is very little indeed, and easily attained for many people just by waiting a few weeks.

    Dr. Hall disputes that a product could be more useful for relieving pain in people with more severe pain. Glucosamine has been shown in other human studies to significantly affect joint-space narrowing, but it is not reported to be a potent analgesic. If glucosamine’s benefits for patients with moderate or greater pain in this study were due to mechanistic effects on the progression of arthritis, then the patients with more severe joint damage might have seen more obvious benefits. Patients with mild pain may have very little joint deterioration, and so not notice improvement when taking a product that prevents or reverses that deterioration.

    Dr. Hall suggests that the significant benefit in the moderate-pain glucosamine plus chondroitin group is due to chance; the fact that there are ten subgroups (actually, eight for which p values were generated) means one might well show positive results at the .05 level by accident. I suspect she would have been less likely to point that out had the moderate-pain celecoxib group been the positive one! Alas, the p value for celecoxib [all following discussions for the moderate-to-severe pain group only] was only 0.06 for the primary outcome, not significant, whereas the G+C group’s p value was 0.002. (In other words, an outcome extreme enough to be expected by chance only 1 time in 500, not 1 time in 20, meaning that the fact that there are ten subgroups becomes somewhat less portentous.) Celecoxib did not cause significant (p<.05) improvement on any of the many secondary measures but one, the OMERACT-OARSI criteria (p=.03, versus .001 for the G+C). G+C showed significant improvement on several and was generally better than celecoxib; the exception is joint swelling or effusion (p=.91 [no benefit of G+C] versus p=.06), which is not a surprise as celecoxib is a potent anti-inflammatory. Speaking of the OMERACT-OARSI criteria, this secondary outcome was also significant for G+C among all patients in the trial (p=.02).

    Dr. Hall also does not wish to accept these results because the G+C group showed better results than glucosamine alone, whereas she says that “in previous studies, the combination of glucosamine and chondroitin had been no more effective than either supplement by itself.” However, in a PubMed search I find no RCTs that have included both combination treatment and glucosamine alone, so apparently the issue has not been adequately tested. Of course, if the putative additive benefit of chondroitin had already been disproven, this study would not have spent a lot of extra money adding chondroitin arms. On a number of measures, the NEJM study showed a nonsignificant trend towards benefit with glucosamine alone (but much less with chondroitin alone). For example, the percent of placebo patients with a 20% decrease in WOMAC pain score was 54.3%, versus 61.4% for chondroitin (p=.39), 65.7% for glucosamine (p=.17), 69.4% for celecoxib (p=.06), and 79.2% for G+C (p=.002). The proportion of patients with an OMERACT-OARSI response was 48.6% for placebo versus 58.6% for chondroitin (p=.24), 65.7% for glucosamine (p=.04 – not flagged as significant), 66.7% for celecoxib (p=.03), and 75.0% for G+C (p=0.001).

    For that specific outcome, the number needed to treat with G+C therefore would be 4, making G+C supplements a far more effective expenditure than plenty of Big Pharma’s most popular current offerings. Now, I do not purport to know how chondroitin may add to glucosamine’s previously demonstrated cartilage-protecting benefits, but it seems to me that when you get significant results, offering the possibility of providing relief that is safer and cheaper than the current treatment, the proper attitude is to try to further replicate them, up to a point, and to start looking into the mechanism, not to assert that the results can’t possibly be real if we don’t already understand the mechanism.
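
    For readers who want to check that arithmetic, the NNT is just the reciprocal of the absolute difference in response rates; a minimal sketch using the OMERACT-OARSI figures quoted above:

        # NNT from the OMERACT-OARSI response rates quoted above (illustrative check).
        placebo_response = 0.486   # 48.6% of placebo patients responded
        gc_response = 0.750        # 75.0% of glucosamine + chondroitin patients responded

        absolute_risk_reduction = gc_response - placebo_response   # about 0.26
        nnt = 1 / absolute_risk_reduction                          # about 3.8, i.e. roughly 4

        print(f"Absolute benefit: {absolute_risk_reduction:.1%}, NNT = {nnt:.1f}")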

  7. dcardani says:

    Simon said:

    Even we vets are prescribing them now

    Tell me about it! My wife recently took our aging dachshund in for a checkup and the vet said that some people use it for their pets and sometimes it seems to work, though the evidence was iffy. She purchased a bottle on the thought that we’d see an improvement or not, and then we’d know whether it was worth doing any more. It wasn’t until she got home and we looked at the bill that we realized the bottle they sold us was $75! I wouldn’t have cared if it had only been $5-$10. It just seems unethical to me to say, “Here’s something that might not work. Would you like to purchase it from us at a very high cost?” This vet has otherwise been very helpful, so it’s sad to see them pull this kind of stunt.

  8. rjstan says:

    apteryx states, “…it seems to me that when you get significant results, offering the possibility of providing relief that is safer and cheaper than the current treatment, the proper attitude is to try to further replicate them, up to a point, and to start looking into the mechanism,..”

    It seems to me that more and better studies are needed. If such studies consistently show benefits, then the mechanisms should be investigated and, once known, used as the starting point for developing a synthetic drug that is standardized for purity and potency and that maximizes the benefits of the botanical.

    G & C supplements seem to be “standard practice” in veterinary medicine at least where I live. Does anyone know if there are good studies supporting such use in animals?

  9. psamathos says:

    I agree with apteryx that there may be some specific mechanism behind the effect seen in the moderate-to-severe pain category with g+c in combination, and this may deserve further study. But if this was not a planned comparison, it is dubious to use it as evidence unless it is replicated consistently. It is tantamount to data mining unless the study is designed specifically to examine this comparison, which it was not.

    As an aside and since we’re on the topic of veterinarians, a vet in my area supplies chiropractic and homeopathic “treatment” in addition to more usual techniques. Adding dubious supplements to the mix is a natural next step. It strikes me that this is more for the benefit of the animal’s owner than it is for the animal itself.

  10. Simon says:

    psamathos: It’s not Roger Meacock is it? He’s my self-adopted nemesis. The man has a degree from Bristol Veterinary School, one of the most rigorous academic degrees in England and yet he peddles crap like this:
    http://www.naturalhealingsolutions.co.uk/main/page_treatments_summary.html
    If he got into Bristol vet school then this man has a brain – there’s no way he can’t know that he’s conning his patients.
    As for whether C+G are common practice among vets and whether there are any studies, I’ll do some homework and get back to you.

  11. pec says:

    Yes, I very much agree. We should be skeptical of all sources of information, including scientific research.

    This is slightly off topic, but related. This morning a news report said that cancer patients who have medical insurance are 60% more likely to survive at least 5 years after diagnosis than are uninsured cancer patients.

    This is a very typical American Cancer Society statement about the effectiveness of cancer treatments, and these news reports never notice the bias.

    Cancer patients with medical insurance are likely to be diagnosed earlier than those without. So their greater odds of surviving 5 years could be completely unrelated to the treatment.

    The standard cancer treatments might be highly effective, or modestly effective, or minimally effective, or even harmful in some cases. We cannot know from the ACS claims.

    I think we should all start to be more skeptical — not only of alternative treatments that have not been studied scientifically, but also of mainstream treatments that have been studied scientifically. As Dr. Hall wisely points out, scientific research varies greatly in quality and can easily be misinterpreted.

  12. BrianB says:

    First, thanks for this new blog. I am always happy to have specific ammunition in the fight against the alternative-medicine charlatans.

    Very interesting discussion. I am 50, and have been on Glucosamine for 10 years, since first being diagnosed with osteoarthritis in my right hip. I’ve always felt that it was working like a charm, as the symptom (not really pain, but a general angst in my right leg) has completely disappeared since taking the medication. The only times that the symptom has re-appeared have been when I’ve been, say, on vacation and either forgot to bring the pills or forgot to take them.

    I realize that one person’s testimony does not constitute evidence, so me “swearing by it” doesn’t mean anything in the big picture. Since my doctor prescribed this (not some homeopath), I assumed it had the usual clinical trial history to make it to his prescription pad. It was a real eyebrow-raiser to see this article and discover that the science is dubious. I will now have to do some experimentation, not to mention bringing this up with my doctor.

  13. fls says:

    “Dr. Hall suggests that the significant benefit in the moderate-pain glucosamine plus chondroitin group is due to chance; the fact that there are ten subgroups (actually, eight for which p values were generated) means one might well show positive results at the .05 level by accident. I suspect she would have been less likely to point that out had the moderate-pain celecoxib group been the positive one! Alas, the p value for celecoxib [all following discussions for the moderate-to-severe pain group only] was only 0.06 for the primary outcome, not significant, whereas the G+C group’s p value was 0.002. (In other words, an outcome extreme enough to be expected by chance only 1 time in 500, not 1 time in 20, meaning that the fact that there are ten subgroups becomes somewhat less portentous.)”

    The p-value refers to the probability of the difference on a randomly drawn sample. This sample was not randomly drawn, but rather was selected on the basis of pain severity. That drawing a biased sample can result in a group that is different to a greater degree than drawing a random sample is not a remarkable discovery. That is, the p-value does not accurately reflect the improbability of this event.

    Linda

  14. fls says:

    Dr. Hall,

    I have a bone to pick with the oft-repeated “Ioannidis has shown that published research findings are more likely to be false than true”. I don’t doubt that you understand what he really demonstrated, but maybe you could consider elaborating on that, instead of repeating the phrase under circumstances where it is untrue (or at least misleading). You make that statement in regard to meta-analyses, yet this is the area Ioannidis identifies as the one where published research findings are more likely to be true than false, or at least evenly balanced if you are talking about a meta-analysis of small, inconclusive studies.

    Ioannidis demonstrates which characteristics are likely to generate a preponderance of false-positives and the various fields operating under those characteristics. The value he provides is due to the specificity of his analysis. Applying his statement without regard for that specificity destroys the usefulness of what he has done.

    You and others here have given some tantalizing hints that you are going to discuss this in greater detail. I’m still waiting, so I’m hoping that I might spur you on by being critical. :-) I have written about this issue a fair bit on the JREF forums, and I would welcome some much needed back-up.

    Linda

  15. apteryx says:

    rjstan – These are the two arguments used together to reject all evidence in favor of dietary supplements: the evidence is never enough, and if it were, it would only mean that the product should be used as the basis for a synthetic, standardized [patented, prescription-only] single-compound drug. I would dispute this even for botanicals, as many have numerous active compounds; you can get a very well-standardized German ginkgo extract, but you cannot get a single synthetic molecule with an equal activity and safety profile. However, glucosamine already IS a single molecule, so glucosamine products should be standardized as well as aspirin tablets. (If some glucosamine products, or some aspirins, differ too greatly from the stated label content, that would be the failing of the manufacturer, not the molecule.) I don’t know whether it’s synthetic or not, but according to your philosophy, it should make no difference. Some people’s real complaint seems to be that the substance is available to consumers without an MD-gatekeeper.

    To answer your other question, there are animal studies showing benefit of both glucosamine and combination products. Search on PubMed and you will find that one of the most recent is a double-blind study (so observer bias cannot be claimed) showing improvement of stride in veteran horses; there has also been a double-blind study in arthritic dogs. BTW, I note a 2006 study by Homandberg et al finding that a G&C combination reverses fibronectin fragment-mediated cartilage damage better than either molecule alone.

    fls – The p values I cite for the moderate-pain treatment groups are for improvement relative to the improvement in the moderate-pain placebo group. How are the treatment groups then biased by selection for pain severity relative to the placebo group? I am not a statistician, so if you wish to accuse Clegg et al of faking or badly flubbing their stats and NEJM of overlooking it, I am not prepared to defend them. However, I think that unless you redo their data analysis you are on shaky ground in asserting that their numbers are meaningless.

    Admittedly, the group with significant pain from knee arthritis in this study was small enough that it can’t be taken as providing any ultimate answer, nor indeed can any single study. But the point of statistics is to give an objective, even if imperfect, measure of whether the effect was big enough to have a good chance of being real, irrespective of the observer’s biases. If the moderate-pain Celebrex group had had the best results, you would not be announcing that this study provided no shred of evidence for the use of Celebrex. Likewise, if the G&C group had shown no evidence of benefit, the size and structuring of this study would probably have been just fine with you. You may not like the precise p values, but you have to accept the fact that just in the raw numbers, G&C provided more long-term relief than Celebrex, so no statistical manipulation will erase the supplement’s apparent benefit while supporting the use of the pharmaceutical.

  16. qetzal says:

    pec,

    Being skeptical doesn’t mean reflexively doubting whatever you read. Before you dog a news report and say “we cannot know from the ACS claims,” why not actually read what’s being reported?

    Here is the ACS press release on this topic. Among other things, it states:

    - Differences in survival between privately insured and uninsured women were seen for all stages of breast cancer.

    [snip]

    - Differences in survival between privately insured and uninsured patients were seen for all stages of colorectal cancer.

    The PR also gives the original citation. Via PubMed, we find that the entire publication is available for free download here.

    Sure enough, the authors find that even when you control for stage at diagnosis, uninsured patients have lower survival (Figures 13 & 16).

    Asking pertinent questions is commendable. Constantly suggesting that scientists are incompetent or misrepresenting their results when the data show otherwise is not.

  17. fls says:

    “fls – The p values I cite for the moderate-pain treatment groups are for improvement relative to the improvement in the moderate-pain placebo group. How are the treatment groups then biased by selection for pain severity relative to the placebo group?”

    Any time you select your groups in a non-random manner (and post-hoc, sub-group analysis represents a non-random selection) you introduce an element of bias which confounds the natural variation. This makes statistics which assume a normal variability less reliable. It is a subtle point, but is one of several reasons that sub-group analysis is interpreted with a large grain of salt.

    “I am not a statistician, so if you wish to accuse Clegg et al of faking or badly flubbing their stats and NEJM of overlooking it, I am not prepared to defend them. However, I think that unless you redo their data analysis you are on shaky ground in asserting that their numbers are meaningless.”

    I accused them of no such thing. They likely recognize this issue, since despite obtaining a p-value of 0.002, they concluded “our finding that the combination of glucosamine and chondroitin sulfate may have some efficacy in patients with moderate-to-severe symptoms is interesting but must be confirmed by another trial.”

    “Admittedly, the group with significant pain from knee arthritis in this study was small enough that it can’t be taken as providing any ultimate answer, nor indeed can any single study. But the point of statistics is to give an objective, even if imperfect, measure of whether the effect was big enough to have a good chance of being real, irrespective of the observer’s biases. If the moderate-pain Celebrex group had had the best results, you would not be announcing that this study provided no shred of evidence for the use of Celebrex.”

    Your accusation is unwarranted considering that I have not given you any reason to think I would do this.

    “Likewise, if the G&C group had shown no evidence of benefit, the size and structuring of this study would probably have been just fine with you. You may not like the precise p values, but you have to accept the fact that just in the raw numbers, G&C provided more long-term relief than Celebrex, so no statistical manipulation will erase the supplement’s apparent benefit while supporting the use of the pharmaceutical.”

    The study was structured to reliably answer one question. You seem to have elected to ignore the answer to that question.

    Linda

  18. apteryx says:

    Linda – What is the one question you think this study was structured to reliably answer? It had better involve only benefits for patients with mild knee arthritis; otherwise, you will have very clearly given me reason to assume bias. However, the authors do not seem to believe that their own negative results should be taken as the last word to prevent all further research on those patients. They do say with regard to the negative results, “Our study has a number of limitations”, which they then enumerate.

    It is not clear to me just how post-hoc the subgroup analysis was. The patients were stratified by WOMAC pain score ahead of time. At the end of the methods, Clegg et al. say “We also analyzed the results according to the WOMAC pain stratum, since logistic regression analysis showed a significant (P=0.008) interaction between treatment and pain stratum….” At the beginning of the discussion, they refer to “[a]nalysis of the *prespecified* subgroup of patients with moderate-to-severe pain.” It sounds to me like they had this in mind from the beginning, and would simply have omitted it, saving pages of space in Table 2, had a preliminary logistic regression found no differences between pain groups. Yes, they say that their finding must be confirmed by another trial, but that is what they would say of ANY positive results from a trial this size. It means only that they do not claim to have provided so much data as to make the issue beyond further question. It does NOT mean that they believe their own statistics are so bad that a claimed p=.002 result would under a better analysis have been a p=.06 result or worse.

    I would like to ask, since this is a single-molecule (or two-molecule) treatment with considerable evidence for a mechanism of action, why do you seem to be emotionally invested in negative results? Why must positive results be not just unreliable but unworthy of follow-up? What makes you so certain glucosamine shouldn’t work?

  19. fls says:

    “Linda – What is the one question you think this study was structured to reliably answer? It had better involve only benefits for patients with mild knee arthritis; otherwise, you will have very clearly given me reason to assume bias.”

    It would be the one indicated in the abstract – “are glucosamine and chondroitin sulfate alone or in combination efficacious and safe for osteoarthritis knee pain?” Does that correspond to what you mean by mild knee arthritis?

    “It is not clear to me just how post-hoc the subgroup analysis was. The patients were stratified by WOMAC pain score ahead of time. At the end of the methods, Clegg et al. say “We also analyzed the results according to the WOMAC pain stratum, since logistic regression analysis showed a significant (P=0.008) interaction between treatment and pain stratum….” At the beginning of the discussion, they refer to “[a]nalysis of the *prespecified* subgroup of patients with moderate-to-severe pain.” It sounds to me like they had this in mind from the beginning, and would simply have omitted it, saving pages of space in Table 2, had a preliminary logistic regression found no differences between pain groups.”

    You might be right. I interpreted that section differently when I first read it. I tend to look at the power analysis in order to determine what the authors were really after – it’s the hardest to fudge after the fact. :) Their power analysis did not include consideration of stratification on the basis of pain. I don’t disagree that their subgroup analyses were determined beforehand (you have to figure out what to measure, after all). The question was whether the groups were formed before or after randomization. Reading it again, I think they were formed beforehand, but were inadequately powered, increasing the probability of false-positives.

    “Yes, they say that their finding must be confirmed by another trial, but that is what they would say of ANY positive results from a trial this size.”

    Really? I think that positive findings for any of the drugs (besides Celecoxib) would have reasonably led to a conclusion of efficacy.

    “I would like to ask, since this is a single-molecule (or two-molecule) treatment with considerable evidence for a mechanism of action, why do you seem to be emotionally invested in negative results? Why must positive results be not just unreliable but unworthy of follow-up? What makes you so certain glucosamine shouldn’t work?”

    Why are you assuming I’m emotionally invested in negative results just because I’m pointing out the problems with drawing conclusions from conflicting sub-group analyses?

    Linda

  20. BlazingDragon says:

    I tried glucosamine/chondroitin once… one problem I noticed with it is that it can cause gastric side effects (I tried it after my wife, who has a Ph.D. in immunology, went to the ACR meeting a few years ago, where a study of the purported “real” benefits of glucosamine/chondroitin was the talk of the meeting). It made me quite sick to my stomach… I think this is a major side effect of glucosamine/chondroitin, so it could un-blind patients getting the “real” pill vs. placebo. If the patients were even partially unblinded by side effects, the whole study would need to be thrown out.

    It’s also interesting that Celebrex didn’t meet statistical significance. I’ve thought for years that pharma companies were hyping the COX-2 inhibitors past what the evidence actually demonstrated (in their defense, it was logical to assume that a “super-aspirin” would do everything aspirin did + more).

    This study leaves me even more confused than I had been previously about the benefits (or not) of glucosamine and/or chondroitin. I’m leaning toward “no better than placebo” at the moment.

  21. fls says:

    “It’s also interesting that Celebrex didn’t meet statistical significance. I’ve thought for years that pharma companies were hyping the COX-2 inhibitors past what the evidence actually demonstrated (in their defense, it was logical to assume that a “super-aspirin” would do everything aspirin did + more).”

    Celebrex was significantly better at relieving the pain of osteoarthritis. It was the only drug which was, in this study.

    Interesting that the main outcome of this study was obscured by haggling over the rest of the results. :)

    Linda

  22. apteryx says:

    Linda, I asked you what question you thought had been “reliably answered” by this study, and you gave the following answer:

    It would be the one indicated in the abstract – “are glucosamine and chondroitin sulfate alone or in combination efficacious and safe for osteoarthritis knee pain?” Does that correspond to what you mean by mild knee arthritis?

    Well, no. Knee pain can come from mild knee arthritis, or from moderate to severe knee arthritis with significant cartilage damage. It seems to be your assertion that this study has “reliably” invalidated the use of glucosamine and chondroitin for people with arthritic knee pain, period, which would include those with moderate to severe pain — even though the actual results of this study showed that G&C had a significant effect and were better than Celebrex in those patients. Do you mean to make that claim?

    BlazingDragon – You can’t throw out every study where patients were “even partially unblinded” by side effects, or you would have to toss virtually every study comparing a pharmaceutical to a placebo. In this study, you are concerned that G&C may have had a higher rate of side effects, thus presumably telling a few patients that they were getting an active drug (or, potentially, worsening their perceived wellbeing). As it happened, the authors state that “Adverse events were generally mild and evenly distributed among the groups,” and that “The number of patients who withdrew because of adverse events was similar among the groups.” [In fig. 1, the number withdrawing in the chondroitin-only group looks higher than the others to me, but apparently it's not significantly so.] As an aside, if you look at the discussion of cardiovascular adverse events, it looks like assignation of causality may have been biased.
    This is the opposite of the frequent argument regarding supplement vs. drug studies, which is that unblinding occurs through the greater side effects of the drug! There is a plethora of European and British studies comparing St. John’s wort to antidepressants, showing similar activity. Some reject these studies on the grounds that when the antidepressant users showed up twitching and impotent, the evaluating doctors could have recognized that they were in the drug arm, and therefore underrated their improvement (since the doctors have already shown, just by being willing to participate in the study, that they are biased in favor of quackery). Ironically, some of the same commentators also complain that the antidepressant doses were too low, i.e., not enough of the drug users were twitching and impotent.

  23. fls says:

    “Well, no. Knee pain can come from mild knee arthritis, or from moderate to severe knee arthritis with significant cartilage damage. It seems to be your assertion that this study has “reliably” invalidated the use of glucosamine and chondroitin for people with arthritic knee pain, period, which would include those with moderate to severe pain — even though the actual results of this study showed that G&C had a significant effect and were better than Celebrex in those patients. Do you mean to make that claim?”

    The study had a power of about 80 percent to detect a significant difference, so it is reasonable to conclude that the drug is not effective from a negative result taking all comers. If you consider the group with moderate to severe pain as an underpowered, but well-performed RCT (which is essentially what it is), then the results are still more likely to be a false-positive, than a true-positive (about one in four will be a true-positive – from the Ioannidis paper referenced earlier). At what point do you say enough is enough (not a rhetorical question)?
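
    For concreteness, that one-in-four figure follows from Ioannidis's positive predictive value formula; here is a sketch using the parameter values he gives for an underpowered but well-performed RCT (power 0.20, pre-study odds 1:5, bias 0.20 – illustrative assumptions, not numbers taken from this trial):

        # Ioannidis (2005) PPV of a claimed finding, including his bias term u.
        alpha = 0.05   # significance threshold
        power = 0.20   # 1 - beta, low for an underpowered subgroup
        R = 0.2        # pre-study odds that the tested relationship is true (1:5)
        u = 0.20       # bias

        beta = 1 - power
        ppv = (power * R + u * beta * R) / (R + alpha - beta * R + u - u * alpha + u * beta * R)
        print(f"Probability a 'positive' finding is a true-positive: {ppv:.2f}")   # about 0.23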

    Linda

  24. pec says:

    qetzal,

    “Differences in survival between privately insured and uninsured patients were seen for all stages of colorectal cancer.”

    Yes, figure 13 does show increased survival for insured cancer patients, no matter what stage they were diagnosed at. And the differences are smaller the earlier the diagnosis. WHAT? Haven’t we been told that early diagnosis greatly improves the chance of successful treatment?

    However this figure definitely shows that being insured helps by about the same amount whether a patient is diagnosed early or late. In fact, it helps more for patients diagnosed late.

    But I thought we had been told that late stage cancer is seldom cured, while early stage cancer is often cured. What’s going on?

    Obviously, we should be skeptical of this report. Something could be causing the difference between insured and uninsured patients other than, or in addition to, the treatments.

    You are not nearly as skeptical of mainstream research as you are of CAM research. You should read a little more carefully, because the cancer industry’s deception can be pretty subtle.

  25. apteryx says:

    At what point do I say enough is enough? Well, to me, when there are ten or twelve studies favoring a supplement with little countervailing evidence, I figure the issue is pretty well settled. However, since financial realities mean all those studies are pretty small, I don’t regard it as 100% settled — just well enough to serve to advise the public. Three largely unopposed negative studies, IF the material quality and dosage are top-notch in every study, pretty well settle the issue too. One negative study could not possibly settle the issue if there were already several positive studies.

    Now, a recently fashionable issue is that trials ought to use a homogenous patient group. To give an extreme example, if you were testing an antibiotic in people with pneumonia, you would not want to find that half your subjects had viral pneumonia and half bacterial. Here we have a case where most patients had mild knee twinges and some had seriously narrowed joint spaces. These are not the same state and may not respond identically to a treatment. You would not be happy to be told that your antibiotic, which dramatically reduced suffering in your patients with bacterial infections, was “proven worthless for all” based on the fact that the folks with viral infections weren’t helped, especially if you had other trials in which it worked.

    I won’t try to decipher the argument whereby a p=.05 result is 75% likely to be due to chance, although I’ve read Ioannidis’ article and do not believe that his simulations constitute, as he claims, “proof” of what happens in real life. (Perhaps one could select a sample of RCTs from decades past and see how many of their results had since been clearly disproved?) I will just say that if you really believed that three-quarters of all positive results were false positives, then you — assuming that you are a clinical MD — would never dream of prescribing any of those antidepressants for which there were only two or three positive studies (even ignoring the fact that there were an equal number of suppressed negative studies). Right? Most readers will see your firm rejection of positive results for supplements as bias, if you do not show equal hostility to positive results for novel pharmaceuticals. You cannot honestly declare that results of the same significance are to be embraced, replicated, or tossed in the garbage without further investigation based on the legal category of the substance tested. And as a general principle, if you believe that Western science can provide *any* meaningful knowledge, it does so only through research results, so you had better treat research results as possible knowledge gained, pending confirmation.

  26. Roy Niles says:

    pec says: “You are not nearly as skeptical of mainstream research as you are of CAM research. You should read a little more carefully, because the cancer industry’s deception can be pretty subtle.”

    I’m wondering: if you see no significant deception in the CAM research, how do you spot subtle deception at all? Does it give off an aura somewhat like that of life energy? Is it sort of foggy or misty? Does it come in any sort of distinguishing color? Is there a degree of brightness we should look for?
    Do you need a degree from a diploma mill to authenticate any such observations? Or can you capture it in a photograph? Does it show up in ultraviolet light? Does it have a band in the electromagnetic spectrum? Do you wear any type of special glasses, or were you and those other thousands just born with some extra set of genes?

    Forgive me for giving off an aura of doubt, but I just have this pesky problem with curiosity.

  27. pec says:

    “you see no significant deception in the CAM research”

    WHEN DID I EVER SAY THAT?? OF COURSE I SEE DECEPTION IN SOME CAM RESEARCH.

    What a ridiculous accusation.

  28. Roy Niles says:

    I have a heightened sense of the ridiculous, I suppose. I really think I can see its aura.
    Can you tell me what an aura really looks like so I’ll know one when I see one?

  29. fls says:

    The problem is that low-quality studies produce biased results, and that bias is in the direction of producing positive results. So pretty much anything you wish to study, regardless of whether or not it has a true effect, will show an effect if your studies are of low quality. This means that the mere presence of “ten or twelve studies favoring a supplement” tells you very little about whether or not there is a real effect. If you want, you can make it so that all CAM studies are positive, like the Chinese do.

    Once you move into the realm of good-quality studies, where there is little bias and the power is adequate to discover most true effects, then the results help you to distinguish between those treatments that are effective and those that are not. When weighing the evidence, one good-quality study trumps any number of low-quality studies in its ability to lead to reliable and valid conclusions.

    “I won’t try to decipher the argument whereby a p=.05 result is 75% likely to be due to chance, although I’ve read Ioannidis’ article and do not believe that his simulations constitute, as he claims, “proof” of what happens in real life. (Perhaps one could select a sample of RCTs from decades past and see how many of their results had since been clearly disproved?)”

    He has published several papers applying his model to different fields of research and confirming its predictions. Also, the field of CAM research serves as an ideal example. The research prior to the formation of the NCCAM was mostly low-quality, and many positive claims were made based on the results. Most (if not all) of those claims have been overturned when subjected to the high-quality research stimulated by the NCCAM.

    “I will just say that if you really believed that three-quarters of all positive results were false positives,”

    You misunderstand. This applies to a particular type of study. Higher quality studies give fewer false-positives, lower quality studies give more.

    “then you — assuming that you are a clinical MD — would never dream of prescribing any of those antidepressants for which there were only two or three positive studies (even ignoring the fact that there were an equal number of suppressed negative studies). Right?”

    The quality of the studies supporting the use of anti-depressants is different – they are much larger and contain less bias. This allows you to draw reliable and valid conclusions from the results in a way that you cannot from many CAM studies (including those for glucosamine and chondroitin). Positive results are seen in the larger studies, and the effects remain significant if the studies are combined.

    The other major difference is prior probability. New drugs are not developed in a vacuum. Phase III trials are preceded by Phase I and II trials, which are preceded by animal and in vitro studies. Biological plausibility is established. All kinds of information converge to help you select out fruitful areas of exploration.

    Without that sort of supporting information, CAM studies have to stand or fall on their own, and they are usually not up to the task. It is not that I treat CAM differently, it’s that CAM is distinguished by being qualitatively different.

    Linda

  30. qetzal says:

    pec wrote:

    Yes, figure 13 does show increased survival for insured cancer patients, no matter what stage they were diagnosed at. And the differences are smaller the earlier the diagnosis. WHAT? Haven’t we been told that early diagnosis greatly improves the chance of successful treatment?

    However this figure definitely shows that being insured helps by about the same amount whether a patient is diagnosed early or late. In fact, it helps more for patients diagnosed late.

    Um, no. You need to compare the difference in number of months for a given survival rate. In other words, horizontal differences between curves, not vertical distances.

    Example: Fig 13, top panel. For insured women diagnosed at Stage 1, maybe 3% have died by 60 months. For uninsured, 3% have died by ~ 30 months, a 2.5 year difference. For women diagnosed at Stage 4, the difference looks to be no more than 15 months.

    But I thought we had been told that late stage cancer is seldom cured, while early stage cancer is often cured. What’s going on?

    What’s going on is that you are a denialist, and you’re looking for ways to misinterpret this report to fit your preconceived bias against anything the medical establishment says.

    Obviously, we should be skeptical of this report. Something could be causing the difference between insured and uninsured patients other than, or in addition to, the treatments.

    I agree we should be appropriately skeptical of any report. Obviously, factors other than insurance status will make a difference. The question is not whether insurance is the only factor – it’s whether insurance is one significant factor. The authors clearly think they’ve adequately controlled for other factors, so they conclude that it is.

    Are they right? Frankly, I don’t know. It’s certainly reasonable to think it might. Does this study prove it? Probably not by itself.

    Does that justify your rash accusations that the authors didn’t control for stage at diagnosis? No. And especially not when the research was freely available to anyone who half bothered to look.

    You are not nearly as skeptical of mainstream research as you are of CAM research.

    That’s because mainstream research is almost always orders of magnitude better than CAM research. As you yourself seem to know, even if you won’t openly admit it:

    You should read a little more carefully, because the cancer industry’s deception can be pretty subtle.

    Too subtle for you, I guess, since your claims of ‘deception’ keep proving to be unfounded. If only CAM’s flaws were this subtle!

    Let me be clear. I agree that not all medical research is trustworthy. Sometimes it’s just poorly designed or executed. Sometimes there is clear bias. Sometimes there is outright fraud and deception. It’s a mistake to automatically accept a result just because it comes out of ‘mainstream’ medicine. But it’s equally a mistake to automatically discount a result for the same reason. You would be wiser if you understood that.

  31. qetzal says:

    Sorry for the formatting error in my last post. I omitted a close quote tag.

    Once again I will plead with the blog authors for a preview function, if at all possible.

  32. apteryx says:

    Words like “quality” and “bias” have technical meanings that are not identical to their meaning in ordinary speech, so perhaps we are talking past each other. If a study includes few patients, it can be said to be “of lower quality” even if it is conducted according to the most careful standards, because such studies are for statistical reasons more likely to generate false positive results than are larger studies. However, they are also more likely to generate false negative results, because when you have only a handful of patients in each arm, the difference in results must be dramatic to attain statistical significance (this is why small studies are referred to as underpowered). This could be said to create biases (in the technical sense) in both directions. In most cases, researchers choose to do small studies because they do not have the many millions of dollars that a large study would require, and they would rather do a small study than no study. The first human trial of a new drug is also small, yet if the Phase I results look adequate, nobody would dream of suggesting that one should not move on to Phase II or III because “the results are probably wrong.” Incidentally, many traditional medicines studied in clinical trials also have animal studies, in addition to extensive human use data, to back up the results. There is no point in ignoring the many animal studies of botanicals.
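
    A minimal sketch of that point about statistical power, using hypothetical numbers (a modest standardized effect of 0.4 and 25 patients per arm – neither figure is taken from the trials discussed here) and the statsmodels power calculator:

        # Power of a small two-arm trial versus the sample size needed for 80% power.
        from statsmodels.stats.power import TTestIndPower

        analysis = TTestIndPower()

        # With 25 patients per arm, a real effect of d = 0.4 is detected less than a
        # third of the time (two-sided test, alpha = 0.05).
        small_trial_power = analysis.solve_power(effect_size=0.4, nobs1=25, alpha=0.05)

        # Patients per arm needed to detect the same effect with 80% power: roughly 100.
        n_needed = analysis.solve_power(effect_size=0.4, alpha=0.05, power=0.8)

        print(f"Power with 25 per arm: {small_trial_power:.2f}")
        print(f"Patients per arm for 80% power: {n_needed:.0f}")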

    Not all reductions of quality, in the common sense, make false positive results more likely. For example, if the test subjects do not all actually have the same disease, this will reduce the chance that an active drug will be statistically superior to placebo. As another example, if the researchers have used low-quality material, or given a low dose, there is little chance of getting a good result. Some of those negative, and therefore “high-quality” and “definitive,” American studies of botanicals have used test materials of unknown composition, refusing to specify chemical composition although bioactive marker compounds are known. If this is done by an MD who has previously warned about the problem of “nonstandardized, variable herbs,” it represents not just low quality but deliberately low quality. Or they may give very low doses; the dose in one “high-quality” echinacea study was 30% of the recommended dose – and, believe it or not, the authors admitted that their product contained no echinacoside, an active compound which is supposed to be present in that species at 1% dry weight or better. In the commonplace usage of the word, this looks like “bias,” and it is bias that virtually guarantees not a false positive but a negative result – which folks who are hostile either to all traditional medicine or indeed to all non-American science can latch on to and declare to be the final word on the subject. I think most readers will be discerning enough to see that these issues are more complex than you believe and require further investigation, rather than simply accepting simple dichotomies such as “SJW studies worthless; Prozac studies unquestionable.”

  33. pec says:

    “You need to compare the difference in number of months for a given survival rate. In other words, horizontal differences between curves, not vertical distances.”

    The percentage still alive at 5 years is greater for insured than uninsured. And this difference is smallest for stage one.

    In general, the charts and data for this article are confusing and misleading. They obviously want you to interpret them a certain way.

  34. fls says:

    I am using quality and bias in reference to their technical meanings, since we are having a technical discussion about medical research. When I talk about CAM studies being of low-quality, I am talking about those characteristics that lead to positive results (lack of blinding, lack of or inadequate randomization, high dropout rates) in addition to issues about power. Low power increases the percentage of false-positives because it reduces your ability to find true-positives (as you pointed out). If your studies are well-performed, low power has only the effect of reducing the number of positives, since well-performed studies produce few false-positives. However, if your studies are poorly performed, as the bulk of CAM research is, it means that those fewer true-positives are now hopelessly lost in the morass of false-positives.

    Animal studies can be useful if properly performed. The extensive human use data is not useful except in those rare situations where an effect is dramatic and immediate (such as vomiting or dramatic pain relief), as positive results are so easily generated through traditional means of study that the body of information consists almost entirely of false-positive and false-negative results, with no good way to distinguish the few true-positives and negatives. The story of the discovery of an effective malaria drug using Chinese herbal medicine as a source of inspiration is a good example. Out of 200 herbal medicines backed by “extensive human use data” as effective in the treatment of malaria, only one was found to be effective when all were subject to rigorous testing. And traditional use of that one had not distinguished it as somehow different from all the rest beforehand.

    Add to that information the results from several years of performing better quality studies on CAM therapies. The good to high-quality studies have all been negative. Surely if it was simply a matter of prior studies being well-performed, but underpowered, additional well-performed but adequately powered studies should be more likely to return positive results, rather than less likely.

    Yes, I realize that there are always excuses available to try to salvage the results. And some of these excuses are even reasonable enough to warrant further study taking them into account. But without unlimited resources, surely there comes a point where the expected rate of return becomes so low that it is not unreasonable to suggest the resources could be better used elsewhere.

    Linda

  35. fls says:

    I should add that technically bias is the “combination of various design, data, analysis, and presentation factors that tend to produce [significant] research findings when they should not be produced.”

    The suggestion that the negative research is a result of bias is not borne out when you look at all the negative research which has been presented as positive by the researchers. If these researchers were really determined to publish negative results (a foolish desire to begin with, since negative studies are less likely to be published, so this strategy would have a detrimental effect on their CV), surely they would have taken advantage of more of the opportunities to do so.

    Linda

  36. apteryx says:

    Linda, this will be my last post on this thread, as I don’t think further discussion between the two of us will have any value for either. You are welcome to have the last word if you like. I will just make a few points for other readers:

    1. While it is probably true that almost all studies on homeopathy are low-quality (common usage), this is not true of studies on botanicals. One study found that average methodological quality was higher in a group of CAM studies than in a group of conventional-pharma studies. You can speculate that they somehow chose journals to manipulate this result, but the facts are still that there are CAM studies that are as good as or better than many of the conventional medicine studies that get published. It is as irrational to reject all CAM studies because some are low quality, as it would be to reject all studies of Western medicine because some have phonied up their data. BTW, in my opinion, eventually we must accept or develop methodology to deal with the fact that it is really difficult to put massage or chicken soup in an opaque capsule. CAM and TM treatments that can’t be studied by the double-blind placebo-controlled method are sometimes pretended to be worthless just for that reason, whereas the same standards are not applied to Western methods (how many would have the nerve to do double-blind studies of coronary bypass?).

    2. It is simply false to say that ALL good- to high-quality studies have been negative. This amounts to a declaration that as soon as a study gets positive results, it automatically becomes low-quality. Maybe it wasn’t blinded! Maybe it wasn’t randomized! In fact, many positive studies have been, and the fact would never have been questioned if only their results had been negative. Most people aren’t in a position to go and check the papers for themselves, so it’s very easy for ideologues to make sweeping accusations and hope to create FUD (fear, uncertainty, and doubt).

    3. It’s highly unlikely that traditional medicines “almost entirely” lack real effects, when less biased (common usage) estimates are that up to 50% are active, varying depending upon use category. (The effect of a febrifuge, laxative, or analgesic is easily observed; the effect of a snakebite remedy is often psychological, as most patients would survive anyway.) Indeed, a great many single-compound drugs have come from plants known to be medicinal – morphine, aspirin, quinine, ephedrine, pilocarpine, cocaine, menthol, podophyllotoxin, artemisinin, digitalis, anthraquinone laxatives, etc., not to mention commercial derivatives of such compounds. If our poor stupid ancestors (and the stupid poor nonwhites of today, 80% of whom have little or no access to Western medicine) simply picked plants at random for every indication, it’s quite extraordinary how many of those plants turn out to contain molecules that, if packaged by corporations, are useful for a similar purpose. It’s also interesting how often multiple groups of these stupid people happened to independently randomly select the same species or related species for the same use.

    4. It is not true that 199 widely used antimalarial remedies have been subjected to “rigorous testing” and found ineffective. Many antimalarial plants have shown antiplasmodial activity in in vitro studies, which of course offer plenty of opportunity for both false positive and false negative results. Almost none of these remedies have ever had a human clinical trial; very few have had even animal studies or a formal human observational study. Perhaps Linda means that the in vitro screening has not (so far) led to a single-compound magic-bullet drug. The traditional remedies probably would not be as potent as single-compound drugs, but they probably are better than nothing for the many people for whom $3 artesunate is cost-prohibitive.

    5. Linda can’t seem to imagine any reason why an American MD would actively want to soil his CV with a negative CAM study. Well, ideological bias might be one reason, either by the researchers themselves or by their colleagues, many of whom would be far more hostile to a positive result than to a negative result, as we have seen here. Then there is the profit motive. Take, for example, a recent botanical study funded and at least partly designed by a drug company, which used uncharacterized test materials, actively tried to convince patients that the herb was worthless, and apparently structured the study to minimize efficacy – and which nevertheless found an impressive positive trend and one significant result, which had to be twisted heavily to become a media report of “proven worthless.” (Some of you may recognize to which study I refer.) Do you, the reader, not think that the company paid those researchers to do that study? Do you not think that some of them may get more funding to do larger drug trials for the company in future, or may be included among the “opinion leaders” who are sent to conferences at nice resorts to, e.g., push the company’s competing product to their fellows? Do you think they would have enjoyed equal rewards if the study had found and announced that the botanical was undeniably effective?

  37. Harriet Hall says:

    I’ve been away from the Internet for a week, so am coming late to this discussion. I’d just like to make a couple of points.

    apteryx referred to glucosamine reducing joint space narrowing in knee osteoarthritis. I don’t think that research is generally accepted. It was flawed – I believe the measurement method was questioned. It’s been a long time since I read the critiques; maybe a reader can fill us in on the details. At any rate, I think it is accurate to say that it has not been established to the satisfaction of good science that G or G and C slows joint deterioration.

    Careful readers will note that I did not say G and C “don’t” work. I said, “The existing evidence is compatible with the hypothesis that glucosamine and chondroitin work no better than placebo, and that the trials that seem to show otherwise are flawed for various reasons.”

    As for veterinary uses, I know a skeptical veterinarian who is familiar with all the veterinary literature and is not convinced that there’s any more substance in the animal studies than in the human studies.

    I find it curious that commenters have tried to explain a possible mechanism for the idea that G and C might work for more severe pain but not for milder pain. And yet no one has commented on Dr. Sampson’s assertion that the amount of G and C in the supplements is so minuscule compared to the amount already in the body that it is practically inconceivable that they “could” work.

    There’s no point in trying to explain a phenomenon before you have established that the phenomenon exists. I’m quite willing to consider possible mechanisms later, but first I would need some convincing evidence that G and C are really effective. With the existing data and the unlikely rationale, I think there are better ways to spend our limited research dollars.

  38. fls says:

    Apteryx,

    I did not suggest that CAM studies be rejected, or that higher quality CAM studies don’t exist. I agree that it would be irrational to reject all CAM studies, which is probably why I never suggested this in the first place.

    I’m simply suggesting that CAM studies be evaluated in the same way that we evaluate other medical studies when it comes to evidence- or science-based medicine. That those few CAM studies that make the cut are comparable to other medical studies is to be expected. What is of concern is that so few do, and that for many fields there are none. For example, a recent review by Guo et al. found only 3 randomized controlled trials in all of the research on individualized herbal medicine.

    http://pmj.bmj.com/cgi/content/full/83/984/633#PGM839840633F01

    The excuse that good placebos are hard to find does not explain why researchers fail to randomize, fail to analyze by intention-to-treat, neglect to mention drop-outs, or pass up the placebo controls that are easy to make.

    It’s not that good-quality CAM research doesn’t exist, it’s that good-quality CAM research consistently produces negative results. The findings from lower-quality studies are contradicted, rather than confirmed, by higher-quality studies. If you think it is false to say that high-quality studies are negative, then provide some references for high-quality positive studies. I haven’t found any, and people who have looked a lot harder than I have haven’t found them either. Barker Bausell, for example, used these criteria: randomized assignment to placebo, at least 50 participants, a dropout rate of less than 25 percent, and publication in a high-quality journal since 2000 (after the CONSORT statement); dropping that last requirement to any reputable American journal still returned negative studies. I’m not trying to be unreasonable. The original source of an effective treatment doesn’t matter to me. I just like to have some confidence in the advice that I give.
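
    Purely as an illustration of how screening criteria like Bausell’s work in practice, here is a minimal sketch in Python of filtering trial records against them. The field names and the example records below are hypothetical assumptions for illustration only, not data from Bausell or any real review.

        # Minimal sketch: apply Bausell-style quality criteria to trial records.
        # Field names and example records are hypothetical, for illustration only.

        CONSORT_YEAR = 2000  # criterion: published after the CONSORT statement

        def is_high_quality(trial):
            """Return True if a trial record meets every screening criterion."""
            return (
                trial["randomized_placebo_control"]   # randomized assignment to placebo
                and trial["participants"] >= 50       # at least 50 participants
                and trial["dropout_rate"] < 0.25      # dropout rate under 25 percent
                and trial["year"] >= CONSORT_YEAR     # published in 2000 or later
                and trial["journal_tier"] == "high"   # high-quality journal
            )

        trials = [
            {"name": "Trial A", "randomized_placebo_control": True, "participants": 212,
             "dropout_rate": 0.18, "year": 2004, "journal_tier": "high"},
            {"name": "Trial B", "randomized_placebo_control": True, "participants": 40,
             "dropout_rate": 0.10, "year": 1997, "journal_tier": "mid"},
        ]

        print([t["name"] for t in trials if is_high_quality(t)])  # -> ['Trial A']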

    Of course medical research has discovered that botanicals and other natural sources contain active ingredients. The bulk of our pharmaceuticals come from natural sources. As you mention, those plants with obvious medical uses have become part of our established pharmaceutical regimen. And medical research continues to test for activity among these sources. The question is, now that the obvious stuff has already been picked out, whether looking to traditional uses adds to the specificity of that search. And while you mention that occasionally we confirm a traditional use, other traditional uses are not confirmed, and traditional use has missed important effects. It is becoming increasingly clear that we cannot discover beforehand which of the 20 listed traditional uses of a plant represents a real effect, nor whether the absence of a specific use on that list means that a real effect is absent for that condition. Since medical research into botanicals is ongoing, what value does coming at the question from a CAM perspective really add (not a rhetorical question)?

    I simply find it hard to believe that the reason the high-quality research is producing negative results is a systematic effort to enact, if not outright fraud, then at least a strong bias. I could believe it of individuals, but many of the researchers presenting these reports seem sympathetic to CAM (based on their sources of funding, their presentation of the results, and the conclusions they draw). The NCCAM, which is a major source of funding, seems sympathetic to CAM. It seems that rewards are available for those on both sides of the debate.

    Linda

  39. David Gorski says:

    We’re not saying that all CAM studies should be rejected or ignored. For instance, studies of herbal remedies are certainly warranted because herbs may harbor new drugs. It is the incredibly, ridiculously improbable (such as homeopathy) that I, at least, have a problem with.

  40. BlazingDragon says:

    Dr. Gorski,

    The problem I’ve seen is that doctors are too willing to declare they “know” what has a plausible mechanism and what does not, based on their “intuition.” This often causes doctors to miss important early signs of a more serious disease, especially if the presentation is atypical.

    A little more respect for patients when they tell a doctor a vague symptom would help a lot.

    Doctors also seem to get “married” to a hypothesis and won’t change their minds, even when the evidence that their hypothesis was flat-out wrong piles up.

    Doctors (in general) also seem not to understand Gaussian distributions (somewhat tongue-in-cheek). It is hard to get a diagnosis these days unless your symptoms are “average.” This is especially tough for people like me who don’t seem to be close to the average for most things. Doctors cannot entertain the hypothesis that they may be looking at a 1 in 100 (or even 1 in 1000) case sitting in front of them because “the odds are against it.”

    Thanks again for the blog (and the entries). I do enjoy them.

  41. art malernee dvm says:

    Here is a study to give to owners of arthritic pets who are spending money on nutraceuticals.
    art malernee dvm

    “Clinical evaluation of a nutraceutical, carprofen and meloxicam for the treatment of dogs with osteoarthritis.

    Moreau M, Dupuis J, Bonneau NH, Desnoyers M.

    The efficacy, tolerance and ease of administration of a nutraceutical, carprofen or meloxicam were evaluated in a prospective, double-blind study on 71 dogs with osteoarthritis. The client-owned dogs were randomly assigned to one of the three treatments or to a placebo control group. The influence of osteoarthritis on the dogs’ gait was described by comparing the ground reaction forces of the arthritic dogs and 10 normal dogs. Before the treatments began, and 30 and 60 days later, measurements were made of haematological and biochemical variables and of the ground reaction forces of the arthritic limb, and subjective assessments were made by the owners and by the orthopaedic surgeons. Changes in the ground reaction forces were specific to the arthritic joint, and were significantly improved by carprofen and meloxicam but not by the nutraceutical; the values returned to normal only with meloxicam. The orthopaedic surgeons assessed that there had been an improvement with carprofen and meloxicam, but the owners considered that there had been an improvement only with meloxicam. The blood and faecal analyses did not reveal any changes. The treatments were well tolerated, except for a case of hepatopathy in a dog treated with carprofen.”

  42. apteryx says:

    And here’s another to give owners of arthritic pets:

    McCarthy et al. 2007, Vet. J. 174: 54-61.

    “Randomised double-blind, positive-controlled trial to assess the efficacy of glucosamine/chondroitin sulfate for the treatment of dogs with osteoarthritis.”

    “Thirty-five dogs were included in a randomised, double-blind, positive controlled, multi-centre trial to assess the efficacy of an orally-administered glucosamine hydrochloride and chondroitin sulfate (Glu/CS) combination for the treatment of confirmed osteoarthritis of hips or elbows. Carprofen was used as a positive control. Dogs were re-examined on days 14, 42 and 70 after initiation of treatment. Medication was then withdrawn and dogs were re-assessed on day 98. Response to treatment was based on subjective evaluation by participating veterinarians who recorded their findings at each visit. Dogs treated with Glu/CS showed statistically significant improvements in scores for pain, weight-bearing and severity of the condition by day 70 (P<0.001). Onset of significant response was slower for Glu/CS than for carprofen-treated dogs. The results show that Glu/CS has a positive clinical effect in dogs with osteoarthritis.”

    I’ve previously seen the abstract cited by Dr. Malernee and found the language kind of strange. The authors are happy to specify the two pharmaceutical treatment arms included, but the third is described only as “a nutraceutical”? What nutraceutical? Was this a branded multicomponent product? Containing what? It must have had at least a sprinkle of glucosamine or I wouldn’t find this abstract in a PubMed search for “glucosamine,” but if it was a pure glucosamine or G/C product, I don’t know why the authors wouldn’t simply have said so. Therefore, lacking access to the journal in question, I presume it was not.

  43. art malernee dvm says:

    Was this a branded multicomponent product? Containing what? >>>

    The nutraceutical in the study was a combination of chondroitin sulfate, glucosamine hydrochloride, and manganese ascorbate (CSGM).
    Not sure if it was branded.
    art malernee dvm

  44. apteryx says:

    There is a commonly held opinion that glucosamine hydrochloride is less active, and glucosamine sulfate is to be preferred. I don’t know how much solid evidence there is to support that, but would personally be inclined to pick the better-reputed variety, either for a research study or for use on my own beloved furry pet (or self).

  45. Apteryx suggests giving the owners of arthritic pets a copy of a study that says, “Response to treatment was based on subjective evaluation by participating veterinarians who recorded their findings at each visit.” If I asked a veterinarian, or a doctor practicing human medicine, to show me the evidence he based his treatment recommendation upon, and he gave me a study stating that the reported response was based on subjective evaluations, I’d never go back to see him again, much less take the product he recommended myself or give it to my dogs or cats.

    Rosemary

  46. art malernee dvm says:

    Sad to say, I sold animal-branded nutraceuticals at one time, after reading a translation of a German study showing radiographic improvement in a randomized trial. I stopped when I found out that the study had been done by people with an interest in marketing nutraceuticals and that attempts by independent researchers to repeat it had failed. I think nutraceuticals are currently sold by most DVM offices. There are animal-only brands, but competition has kept their cost down, so most sell for a lot less than 75 US dollars. Competition with animal-branded nutraceuticals may be one reason why many vets are starting to sell human rather than animal brands. In the USA, vets can buy and sell even FDA-approved human drugs and use them for pets if they want to.
    art malernee dvm

  47. I think the practice of veterinary medicine is as far away as you can get from scientific medicine and still stay in the fold, and I don’t know of any way to change that, since very few people can afford to pay for high-tech medical treatment for pets. As a result, the vast majority of DVMs have to treat different species and practice many specialties all at once, specialties as diverse as surgery, internal medicine, pediatrics, even dentistry. How can anyone keep current with all of them?

    Add to that the huge marketing efforts of the billion-dollar dietary supplement industry, intent upon “growing their business” and grabbing a big share of the money spent on animal meds and pet food, and it is amazing how many veterinarians actually work hard at keeping up with and practicing evidence-based medicine.

    My introduction to “alt. med.” came when I saw alt. med. promotional material masquerading as pet “magazines,” selling every form of snake oil imaginable to the unsuspecting public. When I found an “article” promoting a supplement that had injured me, a supplement that scientists had long known was worthless and dangerous, and sent a letter to the editor that they refused to publish, I concluded that the article hadn’t been written and published by mistake. It was fraud, pure and simple. That was over 10 years ago. I haven’t seen anything since then that has caused me to change my mind about the fraud.

    Rosemary

  48. Apteryx wrote: “rjstan – These are the two arguments together used to reject all evidence in favor of dietary supplements: the evidence is never enough, and if it were, it would only mean that the product should be used as the basis for a synthetic, standardized [patented, prescription-only] single-compound drug….To answer your other question, there are animal studies showing benefit of both glucosamine and combination products. Search on PubMed…”

    First, apteryx, I apologize that I only just saw your statement above. I find it difficult to follow the comments in the format used here. Perhaps you missed my response to you about chamomile tea. If you had seen it, you would realize that I do not claim that a whole botanical cannot be well or adequately studied scientifically.

    I assume that when you state that one of the arguments put forth to “reject all evidence in favor of dietary supplements” is that “the evidence is never enough”, you mean that there are medical scientists with closed minds who will never be convinced by any amount of solid evidence of something that they don’t want to believe.

    While there may be some people who fall into that category, you would be wrong to include me. However, I suspect based on those of your posts which I have seen that it would take a lot more evidence to convince me of the safety and efficacy of any substance be it a whole botanical or a synthetic drug than it would you.

    I for one would never believe that the probability of a drug or therapy being safe and effective was high without a large body of evidence that consistently gave the same results. With a botanical, that evidence would have to be obtained with the same standardized product. I can’t tell offhand how much evidence it would take to convince me, but I can tell you that it is far more than anyone promoting or selling a supplement has ever shown me when I requested such information. I can also tell you that for a treatment for something like osteoarthritis, which I myself have and which comes and goes for unknown reasons, I would require a lot more evidence than for a treatment for a disease like cancer, which almost never resolves without treatment.

    Rosemary

  49. When considering “dietary supplements” and all that is now labeled “alt med.”, there is a lot of evidence to consider in addition to the scientific. Those of us who have been following the industry since it took off right after the passage of DSHEA are acutely aware of the fraud the industry is based on, the fraud the industry is full of. I’m using the legal definition, the one used by state and federal prosecutors and the FTC, not the FDA. I won’t attempt to go into it here. There are books on the topic. Check Quackwatch. I’m trying to write another one right now based on my investigations.

  50. wanderingprimate says:

    rjstan
    “I think the practice of veterinary medicine is as far away as you can get from scientific medicine and still stay in the fold…”

    On the contrary… You would be surprised by the science- and evidence-based foundations of many veterinarians in and out of academia (general practitioners as well as specialists have made a little more progress since the days of James Herriot).

    To the degree that alternative modalities (and other market pressures) influence this area of medicine, it reflects a lot of what’s going on in general (science vs. non-science in all medicine). In spite of its problems, veterinary medicine remains fairly resilient to woo.

    On the other hand, it is a looming issue…

Comments are closed.