
Of SBM and EBM Redux. Part II: Is it a Good Idea to test Highly Implausible Health Claims?


This is the second post in a series* prompted by an essay by statistician Stephen Simon, who argued that Evidence-Based Medicine (EBM) is not lacking in the ways that we at Science-Based Medicine have argued. David Gorski responded here, and Prof. Simon responded to Dr. Gorski here. Between that response and the comments following Dr. Gorski’s post, it became clear to me that a new round of discussion would be worth the effort.

Part I of this series provided ample evidence for EBM’s “scientific blind spot”: the EBM Levels of Evidence scheme and EBM’s most conspicuous exponents consistently fail to consider all of the evidence relevant to efficacy claims, choosing instead to rely almost exclusively on randomized, controlled trials (RCTs). The several quoted Cochrane abstracts, regarding homeopathy and Laetrile, suggest that in the EBM lexicon, “evidence” and “RCTs” are almost synonymous. Yet basic science or preliminary clinical studies provide evidence sufficient to refute some health claims (e.g., homeopathy and Laetrile), particularly those emanating from the social movement known by the euphemism “CAM.”

It’s remarkable to consider just how unremarkable that last sentence ought to be. EBM’s founders understood the proper role of the rigorous clinical trial: to be the final arbiter of any claim that had already demonstrated promise by all other criteria—basic science, animal studies, legitimate case series, small controlled trials, “expert opinion,” whatever (but not inexpert opinion). EBM’s founders knew that such pieces of evidence, promising though they may be, are insufficient because they “routinely lead to false positive conclusions about efficacy.” They must have assumed, even if they felt no need to articulate it, that claims lacking such promise were not part of the discussion. Nevertheless, the obvious point was somehow lost in the subsequent formalization of EBM methods, and seems to have been entirely forgotten just when it ought to have resurfaced: during the conception of the Center for Evidence-Based Medicine’s Introduction to Evidence-Based Complementary Medicine.

Thus, in 2000, the American Heart Journal (AHJ) could publish an unchallenged editorial arguing that Na2EDTA chelation “therapy” could not be ruled out as efficacious for atherosclerotic cardiovascular disease because it hadn’t yet been subjected to any large RCTs—never mind that there had been several small ones, and abundant additional evidence from basic science, case studies, and legal documents, all demonstrating that the treatment is both useless and dangerous. The well-powered RCT had somehow been transformed, for practical purposes, from the final arbiter of efficacy to the only arbiter. If preliminary evidence was no longer to have practical consequences, why bother with it at all? This was surely an example of what Prof. Simon calls “Poorly Implemented Evidence Based Medicine,” but one that was also implemented by the very EBM experts who ought to have recognized the fallacy.

There will be more evidence for these assertions as we proceed, but the main thrust of Part II is to begin to respond to this statement from Prof. Simon: “There is some societal value in testing therapies that are in wide use, even though there is no scientifically valid reason to believe that those therapies work.”

Some such Testing is Useful (and Fun)…

First, let me say that I am not opposed to all trials pertaining to such methods (not “therapies,” which begs the question), assuming that the risks to subjects are minimal, the funding is not public, and the study is honest and ethical in every respect. For example, I’m happy that studies have been done looking at interexaminer reliability of practitioners who claim to detect ‘craniosacral rhythms’ (there is none), at the ability of ‘therapeutic touch’ practitioners to detect the ‘human energy field’ when denied visual cues (they can’t), or at whether ‘provers’ can distinguish between an ‘ultramolecular’ homeopathic preparation and a ‘placebo’ (they can’t).

Those sorts of trials are small, cheap, paid for with private money, often test the claimants themselves (on whom the onus of proof belongs), have minimal risk of harm or discomfort, and each hypothesis tested is a sine qua non of a larger therapeutic claim. Such tests are simpler and less bias- and error-prone than are efficacy trials of the corresponding claims, and are sufficient to reject those claims. Yet EBM typically ignores such research when reviewing efficacy claims, as exemplified by the Cochrane homeopathy abstracts quoted in Part I and by a Cochrane review of “touch therapies.” In the case of homeopathy, there are several other testable hypotheses that, when tested, have also disconfirmed the larger claim. Why aren’t they cited in EBM-style reviews?

Here I must give Prof. Simon some credit. In his most recent discussion of EBM vs. SBM he wrote the following:

Now part of me says things like, no funding of research into therapeutic touch until someone can replicate the Emily Rosa experiment and show different results than Ms. Rosa did. So I’m kind of split on this issue.

It was the Emily Rosa experiment that demonstrated that ‘therapeutic touch’ practitioners could not detect the ‘human energy field’ when denied visual cues. Thus in some ways Prof. Simon and I are not that far apart, although I’m not at all split on the issue. I’ll discuss more of this in Part III.

…But Efficacy Trials are Not

Regarding publicly funded efficacy trials of implausible claims, my responses are several, including those that Dr. Gorski has already discussed: such studies don’t convince true believers, they are frequently unethical and even dangerous, and they waste research funds. Prof. Simon counters that to have societal value, studies needn’t convince true believers, only fence-sitters (true but irrelevant—see below), and that the public money spent is such a small portion of the entire health care bill that it makes little difference—but here he stumbles a bit:

Money spent on health care is a big, big pot of money and the money spent on research is peanuts by comparison. If we spend some research money to help insure that the big pot of money is spent well, we have been good stewards of the limited research moneys.

The issue, of course, is whether or not the research money is well spent. In the case of efficacy trials of methods that lack scientific bases, the money is never well spent. The same people who would be convinced by such trials ought to be convinced by NIH scientists simply explaining to them, in a definitive way, that there is no scientifically valid reason for those methods to work, or that the methods have already been disproved by other investigations, including the types of trials just mentioned. If such statements are not convincing, why not? Remember, fence-sitters are not true believers or anti-intellectual, conspiracy-theory-laden, anti-fluoride, pro-Laetrile, pro-chelation, anti-vax paranoiacs. If they were, they wouldn’t be convinced by trials either, would they?

To explain why otherwise reasonable people might not be convinced by definitive statements based on science, we need look no further than EBM’s own scientific blind spot, as perfectly exemplified by Dr. Ernst’s words in his 2003 debate with Cees Renckens, quoted in Part I:

In the context of EBM, a priori plausibility has become less and less important. The aim of EBM is to establish whether a treatment works, not how it works or how plausible it is that it may work. The main tool for finding out is the RCT…

Ironically, it may be that those at most risk for being unconvinced by science are physicians themselves—thanks to EBM. What follows is the passage that I promised at the end of Part I. It illustrates just how elusive clear thinking can be, even for very intelligent people, after they’ve been steeped in EBM. Originally posted here, it also introduces the next reason that we should, er, look askance at calls for efficacy trials of implausible claims:

Failing to consider Prior Probability leads to Unethical Human Studies

An example…was the regrettable decision of two academic pediatricians, one of whom is a nationally recognized expert in pediatric infectious disease, to become co-investigators in an uncontrolled trial of homeopathic “remedies” for acute otitis media (AOM) in 24 children, ages 8 to 77 months. The treating investigators were homeopaths. The report provided evidence that 16 of the children had persistent or recurrent symptoms, lasting from several days to 4 weeks after diagnosis. Nevertheless, no child was prescribed analgesics or antipyretics, and only one child was eventually prescribed an antibiotic by an investigator (another was given one in an emergency room). There is no evidence that the investigators evaluated any of the subjects for complications of AOM, nor did the two academic pediatricians “interact with any research subjects.” Similar examples are not hard to find.

Funny thing about EBM’s tenacious hold on medical academics: a few years ago, when I first noticed the report just described, I ran it by a friend who is the chief of pediatrics at a Boston area hospital and a well-known academic pediatrician in his own right. After I explained the tenets of homeopathy, he agreed that it is highly implausible. At that point I expected him to agree that the trial had been unethical. Instead he queried, “but isn’t its being unproven just the reason that they should be allowed to study it?” There was no convincing him.

Such faith in clinical trials as absolute, objective arbiters of truth about claims that contradict established knowledge raises another point that will have to wait for Part III: RCTs are not objective arbiters in such cases, but rather tend to confuse more than clarify. For now, let’s continue to look at…

Human Studies Ethics

A “Clinically Competent Medical Person”

That homeopaths were accepted as the sole treating clinician-investigators in the trial just mentioned should raise any IRB member’s eyebrows. According to the Helsinki Declaration,

Medical research involving human subjects should be conducted only by scientifically qualified persons and under the supervision of a clinically competent medical person. The responsibility for the human subject must always rest with a medically qualified person and never rest on the subject of the research, even though the subject has given consent.

The physician may combine medical research with medical care, only to the extent that the research is justified by its potential prophylactic, diagnostic or therapeutic value. When medical research is combined with medical care, additional standards apply to protect the patients who are research subjects.

Dr. Gorski mentioned two other trials that I’ve written extensively about, the Gonzalez trial for cancer of the pancreas and the ongoing Trial to Assess Chelation Therapy (TACT). Each of those claims had a minuscule prior probability, but proponents justified each by “popularity” and by appeals to EBM such as the AHJ editorial quoted above. Each trial involved clinically incompetent investigators chosen by the NIH: Gonzalez himself in the former and numerous chelationists in the latter, most of whom are members of the organizations described here, and many of whom have been subjected to actions by state medical boards, federal civil settlements, or criminal convictions. Predictably, the Gonzalez trial involved unnecessary torture of human subjects, and the TACT has involved unnecessary deaths.

Below are quotations from a post that subjected the Gonzalez trial to ethical scrutiny; most of the arguments apply to implausible claims in general. For the purposes of this post I’ll provide new topic headings and a few comments.

Informed Consent and Clinical Equipoise

In 2003, using the Gonzalez regimen as an example, I argued that the information offered to prospective subjects of trials of implausible claims is likely to be misleading:

Plausibility also figures in informed consent language and subject selection. How many subjects who are not wedded to “alternative medicine” would be likely to join a study that independent reviewers rate as unlikely to yield any useful results, or in which the risks are stated to outweigh the potential benefits? Are informed consents for such studies honest? In at least one case cited in the following paragraph, the answer is “no.” Nor may subjects who prefer “alternative” methods be preferentially chosen for such research even if they seek this, because “fair subject selection requires that the scientific goals of the study, not vulnerability, privilege, or other factors unrelated to the purposes of the research, be the primary basis for determining the groups and individuals that will be recruited and enrolled” (Emanuel et al. 2000).

The Office for Human Research Protections recently cited Columbia University for failure to describe serious risks on the consent form of its “Gonzalez” protocol for cancer of the pancreas, funded by the NCCAM (OHRP 2002). The study proposes to compare the arduous “Gonzalez” method, which is devoid of biological rationale, to gemcitabine, an agent acknowledged by the investigators to effect “a slight prolongation of life and a significant improvement in . . . quality of life.” Nevertheless, a letter from Columbia to prospective subjects states, “it is not known at the present time which treatment approach is best [sic] overall” (Chabot 1999). The claim of clinical equipoise, or uncertainty in the expert medical community over which treatment is superior—necessary to render a comparison trial ethical—is not supported by the facts (Freedman 1987).

The consent forms for both the TACT and the homeopathy trial mentioned above were also uninformative or worse. For my comments on the former, look here under “Comments on the TACT Consent Form”; for the homeopathy trial’s consent form, look here and reach your own conclusions (hint: there is no mention of the risks of omitting standard treatments for acute otitis media).

Ms. Gurney’s article [about a friend who submitted himself to the Gonzalez trial] provides additional, compelling evidence that the Gonzalez protocol did not meet the standard of clinical equipoise:

…at ASCO, I learned quickly and definitively that the Gonzalez protocol was a fraud; no mainstream doctors believed it was anything else and they were surprised that anyone with education would be on it.

The “mainstream doctors” of the American Society of Clinical Oncology must be judged representatives of the pertinent “expert medical community.”

The TACT also violates the principle of clinical equipoise, even as it claims to do otherwise, as discussed here under “‘Clinical Equipoise’ and the Balance for Risks and Benefits.”

Science and Ethics

There is a consensus, among those who consider human studies ethics, that a study must be scientifically sound in order to be ethical. According to the Council for International Organizations of Medical Sciences’ International Ethical Guidelines for Biomedical Research Involving Human Subjects (CIOMS; Geneva, Switzerland, 1993; quoted here):

Scientifically unsound research on human subjects is ipso facto unethical in that it may expose subjects to risks or inconvenience to no purpose.

The Helsinki Declaration agrees:

Medical research involving human subjects must conform to generally accepted scientific principles, be based on a thorough knowledge of the scientific literature, other relevant sources of information, and on adequate laboratory and, where appropriate, animal experimentation.

There is no body of basic science or animal experimentation that supports the claims of Gonzalez.

Emanuel and colleagues, writing in JAMA in 2000, asserted:

Examples of research that would not be socially or scientifically valuable include clinical research with…a trifling hypothesis…

I assert that highly implausible claims ought to be viewed as “trifling hypotheses.”

The Fallacy of Popularity

Virtually all of the research agenda of the NCCAM has been justified by the assertion that implausible claims that are popular require research, merely because people are using them. Referring to the opinions of the late NCCAM Director Stephen Straus, Science Magazine wrote in 2000:

Scientific rigor is sorely needed in this enormously popular but largely unscrutinized field….Most of these substances and treatments have not been tested for either safety or efficacy.

As surprising as it may be to some, however, a method’s popularity may not supersede the interests of individual trial subjects. According to the Helsinki Declaration:

In medical research on human subjects, considerations related to the well-being of the human subject should take precedence over the interests of science and society.

The Belmont Report agrees:

Risks and benefits of research may affect the individual subjects, the families of the individual subjects, and society at large (or special groups of subjects in society). Previous codes and Federal regulations have required that risks to subjects be outweighed by the sum of both the anticipated benefit to the subject, if any, and the anticipated benefit to society in the form of knowledge to be gained from the research. In balancing these different elements, the risks and benefits affecting the immediate research subject will normally carry special weight.

The U.S. Code of Federal Regulations is unequivocal:

The IRB should not consider possible long-range effects of applying knowledge gained in the research (for example, the possible effects of the research on public policy) as among those research risks that fall within the purview of its responsibility. (CFR §46.111)

“Popularity” is a Ruse

In addition to the ethical fallacy just discussed, there is another fallacy having to do with popularity: the methods in question aren’t very popular. In the medical literature, the typical article about an implausible health claim begins with the irrelevant and erroneous assertion that “34%” or “40%” or even “62%” (if you count prayer!) of Americans use ‘CAM’ each year. This is irrelevant because at issue is the claim in question, not ‘CAM’ in general. It is erroneous because ‘CAM’ in general is so vaguely defined that its imputed popularity has been inflated to the point of absurdity, as exemplified by the NCCAM’s attempt, in 2002, to include prayer (which it quietly dropped from the subsequent, 2007 survey results).

It is erroneous also because it fails to distinguish between such different issues as consulting a practitioner and casually purchasing a vitamin pill at the supermarket, or between Weight Watchers and the pseudoscientific “blood type diet,” and much more. It is erroneous also because it fails to distinguish between occasional and frequent use or between rational use and flimflam (vitamins for deficiency states vs. vitamins to shrink tumors; visualization for anxiety vs. visualization to shrink tumors).

Most of the ‘CAM’ claims for which people consult practitioners are fringe methods, each involving, in the most credible survey, less than 1% of the adult population. The slightly more popular exceptions are chiropractic and massage, reported by 3.3% and 2%, respectively, but these numbers also fail to distinguish rational expectations from flimflam (a wish to alleviate back pain or muscle soreness vs. a wish to cure asthma or to remove ‘toxins’). The most recent National Health Interview Survey (NHIS), co-authored by an NCCAM functionary, reported that 8.6% of adults had used “chiropractic or osteopathic manipulation” in the previous 12 months, further confusing the question of chiropractic.

Let’s revisit an example of how the ‘popularity’ gambit has been used to entice scientific reviewers and taxpayers to pony up for regrettable ‘CAM’ research. The aforementioned NCCAM/NHLBI-sponsored Trial to Assess Chelation Therapy for coronary artery disease (TACT), which at $30 million and nearly 2400 subjects was to be the most expensive and largest NIH-sponsored ‘CAM’ trial when it began in 2003, was heralded as follows:

“The public health imperative to undertake a definitive study of chelation therapy is clear. The widespread use of chelation therapy in lieu of established therapies, the lack of adequate prior research to verify its safety and effectiveness, and the overall impact of coronary artery disease convinced NIH that the time is right to launch this rigorous study,” said Stephen E. Straus, M.D., NCCAM Director.

Over 800,000 patient visits were made for chelation therapy in the United States in 1997…

In the application that won him the TACT grant, Dr. Gervasio Lamas, who had also been the author of the American Heart Journal editorial quoted above, used similar language:

2.0 BACKGROUND AND SIGNIFICANCE

2.1 Alternative Medicine and Chelation Therapy in the United States

…A carefully performed national survey, and other more restricted local surveys all find the practice of alternative medicine to be widespread…34% reported using at least one alternative therapy in the last year…Thus alternative medical practices are common, and constitute a significant and generally hidden health care cost for patients.

NCCAM estimated that more than 800,000 visits for chelation therapy were made in the U.S. in 1997…

Sounds impressive, huh? Less so when you know the truth. The NCCAM made no such estimate. It merely accepted, without question, the number given to it by the American College for Advancement in Medicine (ACAM)—a tiny group of quacks who’d been peddling chelation for decades, especially after their original snake oil of choice, Laetrile, had been outlawed. Not mentioned by the NCCAM press release or by Dr. Lamas was that the purported 800,000 chelation visits were for all comers: the ACAM member appointed as TACT “Trial Chelation Consultant” touts chelation for about 70 indications (it’s the One True Cure), so we can only guess how many hapless chelation recipients thought they were being treated for coronary disease.

For the NIH to have chosen ‘visits’ as the units of popularity, moreover, was misleading in itself: each person submitting to chelation typically makes at least 30 biweekly visits followed by indefinite bimonthly visits, so even if the ACAM number had been accurate, about 0.01% of the U.S. adult population underwent the treatment in 1997—a far cry from “34%.”
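The arithmetic can be checked with a quick back-of-the-envelope sketch. Note that the ~196 million figure for the 1997 U.S. adult population is an assumption I’m using for illustration, not a number from the NCCAM, the ACAM, or the survey:

```python
# Back-of-the-envelope check of the "about 0.01%" figure. The 1997 U.S.
# adult-population number (~196 million) is an assumption for illustration,
# not a figure from any of the sources quoted above.
visits = 800_000              # ACAM's claimed chelation visits in 1997
visits_per_person = 30        # a typical initial course: ~30 biweekly infusions
us_adults_1997 = 196_000_000  # assumed 1997 U.S. adult population

people = visits / visits_per_person        # at most ~26,667 people
share_pct = people / us_adults_1997 * 100  # as a percentage of adults
print(f"~{people:,.0f} people, about {share_pct:.2f}% of U.S. adults")
```

Even granting the ACAM’s inflated visit count, the implied share of adults comes out more than three orders of magnitude short of the “34%” figure used to sell the trial.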

It’s no surprise, then, that the TACT has dragged on considerably longer than the originally planned 5 years. It hasn’t been able to recruit enough subjects! If you wade through the history of the trial on ClinicalTrials.gov, you’ll find that the expected subject enrollment has dwindled from 2372 to 1700, in spite of the NIH having selected more than 100 “community chelation practices” as study sites, in spite of its having added 22 (originally unplanned) Canadian sites a few years later, and in spite of the trial’s duration having been prolonged by several years.

A brief perusal of the 2002 NHIS data reveals that the NCCAM could have predicted this problem: the survey estimated that 0.0% (sic) of the U.S. adult population had used chelation for any reason in the previous 12 months, based on 10 of 31,000 adults interviewed having answered in the affirmative—a number so small that the extrapolation to the entire population “did not meet standards of reliability or precision.” Do you suppose that Director Straus was aware of the NHIS data when he asserted a “widespread use of chelation therapy”?
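To see why an extrapolation from 10 respondents “did not meet standards of reliability or precision,” a rough sketch is enough. This uses a simple normal approximation of my own; the NHIS applies its own survey weights and reliability criteria, so this is illustrative only:

```python
import math

# 10 of 31,000 NHIS respondents reported chelation use. A crude 95%
# interval for the population proportion (plain normal approximation,
# ignoring the NHIS's survey weights) shows how little the sample supports.
n, k = 31_000, 10
p = k / n                        # sample proportion: ~0.032%
se = math.sqrt(p * (1 - p) / n)  # standard error of the proportion
ci_low, ci_high = p - 1.96 * se, p + 1.96 * se

print(f"point estimate {p:.3%}; rough 95% CI {ci_low:.3%} to {ci_high:.3%}")
print(f"rounded to one decimal place: {p:.1%}")
```

The upper bound of the interval is roughly four times the lower bound, and the point estimate itself rounds to the survey’s reported 0.0%: nowhere near what is needed to support a claim of “widespread use.”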

Next: Efficacy trials of highly implausible claims don’t work very well.

*The Prior Probability, Bayesian vs. Frequentist Inference, and EBM Series:

1. Homeopathy and Evidence-Based Medicine: Back to the Future Part V

2. Prior Probability: The Dirty Little Secret of “Evidence-Based Alternative Medicine”

3. Prior Probability: the Dirty Little Secret of “Evidence-Based Alternative Medicine”—Continued

4. Prior Probability: the Dirty Little Secret of “Evidence-Based Alternative Medicine”—Continued Again

5. Yes, Jacqueline: EBM ought to be Synonymous with SBM

6. The 2nd Yale Research Symposium on Complementary and Integrative Medicine. Part II

7. H. Pylori, Plausibility, and Greek Tragedy: the Quirky Case of Dr. John Lykoudis

8. Evidence-Based Medicine, Human Studies Ethics, and the ‘Gonzalez Regimen’: a Disappointing Editorial in the Journal of Clinical Oncology Part 1

9. Evidence-Based Medicine, Human Studies Ethics, and the ‘Gonzalez Regimen’: a Disappointing Editorial in the Journal of Clinical Oncology Part 2

10. Of SBM and EBM Redux. Part I: Does EBM Undervalue Basic Science and Overvalue RCTs?

11. Of SBM and EBM Redux. Part II: Is it a Good Idea to test Highly Implausible Health Claims?

12. Of SBM and EBM Redux. Part III: Parapsychology is the Role Model for “CAM” Research

13. Of SBM and EBM Redux. Part IV: More Cochrane and a little Bayes

14. Of SBM and EBM Redux. Part IV, Continued: More Cochrane and a little Bayes

15. Cochrane is Starting to ‘Get’ SBM!

16. What is Science? 

Posted in: Chiropractic, Clinical Trials, Energy Medicine, Health Fraud, History, Homeopathy, Medical Academia, Medical Ethics, Naturopathy, Politics and Regulation, Science and Medicine


49 thoughts on “Of SBM and EBM Redux. Part II: Is it a Good Idea to test Highly Implausible Health Claims?”

  1. Jann Bellamy says:

    Excellent post.

    Let’s see:

    (1.) Government (Congress and state legislatures) licenses quacks, allows them to regulate their education and practices as well as giving them student loans to attend their faux “universities.” Government allows quack products to be sold with false and misleading claims. Government does away with standard of care for medicine.

    (2.) The public visits these legitimized quacks, uses the legitimized quack products, and patronizes MDs offering quack care. Therefore, these practices and products become “popular.”

    (3.) Government decides that because of their “popularity” these practices and products must be tested to see if they actually work as claimed and therefore funds (with taxpayer money) studies.

    (4.) Studies show that, in fact, they do not work.

    (5.) Nothing happens, and government proceeds as described in (1.)

    A fair summary?

  2. JMB says:

@Jann Bellamy

    That’s a fair summary, but don’t stop there.

(6) Healthcare costs have reached 17.3% of GDP in the USA, so politicians decide they should have more control of healthcare. Politicians recommend reconsidering whether science-based treatments that have been shown to prolong life are worth the high cost, instead of looking at stopping ineffective treatments first.

    @Kimball Atwood

    I’ve been waiting for these posts (I need to catch up on past posts, too). It was worth the wait. Thank you. I look forward to part III.

  3. windriven says:

    @JMB

“Politicians recommend reconsidering whether science-based treatments that have been shown to prolong life are worth the high cost…”

    I suspect that the percentage of healthcare spending on sCAM is relatively small – though seemingly growing at a brisk pace. A society spending nearly a fifth of GDP on healthcare cannot afford practices that don’t work and that may potentiate the cost of treatments that do work by postponing them.

    By the same token our society cannot sustain ever greater strides in medical care that come at ever increasing cost. Why is it that a hysterectomy costs nearly $10k but the cost of spaying my Lab was less than $400? The dog was appropriately monitored, had an IV line and fluids, and received state-of-the-art anesthesia (diprivan + O2/N2O). My vet clinic performs several hundred surgical procedures a year with excellent results.

    I am not suggesting that humans and canines should be treated identically. But the outcomes of human hysterectomies are not 25 times better than canine spays.

  4. Always Curious says:

    That’s a quality study they’ve got there: no controls, 24 patients for 24 treatment groups, no hard outcomes, no data (insignificant as it may be) about the difference in progression & outcomes between different treatment groups. You’d get better results surveying parents of kindergartners about what they do for their kids in that situation. We have low standards indeed if such studies can even be considered in the justification of a full-scale trial.

In rereading it, I’d say the paper failed even to meet its own stated goal:

    “To determine the appropriate study design for clinical randomized trials …”

    And yet their recommendations are:

    “…require strict enrollment criteria, the ability to randomize patients and blind treating physicians to treatments selected, careful definitions of endpoints and criteria for success or failure of therapy and appropriate sample size. Development of protocols will require careful planning, insight into the foundations of homeopathic practice and consideration of the differences between allopathic and homeopathic practice…”

    Doesn’t sound like the recommendations have anything to do with their study, but are instead the general guidelines any trial should have.

  5. pmoran says:

    I still think that all this is a bit Ivory Towerish. Surely there will be instances where on cost/risk/public benefit/what-is-there-to-gain-or-lose? grounds relatively implausible methods may have to be looked at in clinical studies. Look at Laetrile, and the Di Bella treatment in Italy.

    I also see some arguable matters.

    Studies don’t affect usage? Shark cartilage has almost completely dropped off the alt scene presumably partly as the result of a number of negative studies.

“Argument from popularity” is always bunk? What about the Gonzalez study stalling because nearly all subjects preferred to try his treatment rather than accept the somewhat ordinary benefits of chemotherapy for their inoperable pancreatic cancer? What about the at least 40% of cancer sufferers and asthmatics who use alternatives?

    It is not that you are wholly wrong about anything, it is just that there seems to be a slightly different world out here, one where “you should be trusting us –we know what is best” evokes an exactly opposite reaction to the one that you clearly perceive to be our due.

    And as I have said many times, dealing with the ills of CAM is possible only to the extent that we have public trust. The public can’t follow the science and they instantly sniff hypocrisy in arguments from ethical concerns and from the risks of CAM use.

I can agree that studying most individual alt methods is a waste of time, partly because alt.methods are rarely used alone and results can be challenged on those grounds. I would prefer studies that look at the generic decision to use alternatives, i.e. whether alternative use of any type improves outcomes.

I tried to get Edzard Ernst interested in a study on patients who decide to refuse conventional treatments of cancer and who also have active measurable cancer. An uncontrolled observational study on even a few dozen of those using “alternative” programs of whatever kind, alone and in their native format, would reveal just about all that anyone needs to know about trying to treat active cancer this way.

  6. ConspicuousCarl says:

    pmoran on 10 Dec 2010 at 5:24 pm
I tried to get Edzard Ernst interested in a study on patients who decide to refuse conventional treatments of cancer and who also have active measurable cancer. An uncontrolled observational study on even a few dozen of those using “alternative” programs of whatever kind, alone and in their native format, would reveal just about all that anyone needs to know about trying to treat active cancer this way.

    That would make sense, but I wonder how many normal people are unconvinced by actual clinical evidence because they believe the detailed “explanations” offered by the hucksters.

    Even though things like chiropractic and homeopathy have no “evidence”, there is a lot of fake science offered to support them. John Benneth sounds like a nutball to anyone who knows better, but maybe it is convincing enough to his victims that they are “skeptical” of the actual tests.

  7. Werdna says:

It occurs to me that if, in general, our null hypothesis (H0) is “nothing’s happening” and our beta is the usual 20%, then doing an RCT under those conditions is clearly a mistake when our a priori P(H0) < 20%. Granted, a priori probability can be difficult and sometimes impossible (or infeasible) to bound. Taking homeopathy as an example – you don’t have to make too many assumptions to conclude that the probability it has a favorable outcome over tap water is zero.

Ergo, these are the cases where Prof. Simon is categorically wrong: we would not even be working in the public interest, as there is a far better chance we would provide false information about homeopathy than true information.

  8. Werdna says:

    Sorry – that last line is incorrect.
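The false-positive arithmetic being debated in these comments can be sketched in a few lines. This is an illustrative calculation, not from the thread itself, assuming the conventional alpha = 0.05 and power = 0.80 (beta = 20%) mentioned above:

```python
# Illustrative sketch: positive predictive value (PPV) of a single trial,
# assuming the conventional alpha = 0.05 and power = 0.80.
def ppv(prior, alpha=0.05, power=0.80):
    """P(the treatment really works | the trial came out positive), by Bayes' rule."""
    true_pos = power * prior          # real effect, correctly detected
    false_pos = alpha * (1 - prior)   # no real effect, but the trial is "positive"
    return true_pos / (true_pos + false_pos)

for prior in (0.5, 0.1, 0.01, 1e-6):
    print(f"prior = {prior:g}: PPV = {ppv(prior):.4f}")
```

At a 50/50 prior, a positive trial is strong evidence; at a prior of one in a million, virtually every positive result is a false positive.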

  9. pmoran says:

That would make sense, but I wonder how many normal people are unconvinced by actual clinical evidence because they believe the detailed “explanations” offered by the hucksters.

    The “explanations” are a needed gloss, but cancer quackery is sustained by the personal testimonial.

The study I suggest would help counter the testimonial by demonstrating what usually happens whenever anyone tries to cure active cancer with “alternatives”. This is what the wavering public needs to know.

I don’t underestimate the difficulties in having such a study done (at least on a prospective basis), but it should be done. There is a vast amount of information of this type being lost because no one is looking at it. Alt.med has no incentive to.

  10. BillyJoe says:

    windriven,

    Why is it that a hysterectomy costs nearly $10k but the cost of spaying my Lab was less than $400?

    That’s not a fair comparison.

If your Lab is a male, the proper comparison is a vasectomy.
A vasectomy for a human costs about $500 – $1000.

If your Lab is a female, the proper comparison is a tubal ligation.
A tubal ligation for a human costs about $1000 – $3000.

  11. From Wikipedia:
    Females (spaying)
    In female animals, spaying involves abdominal surgery to remove the ovaries and uterus (ovario-hysterectomy). Alternatively, it is also possible to remove only the ovaries (ovariectomy), which is mainly done in cats and young female dogs.

    So first, if someone’s lab has been spayed we know she isn’t male.

    Second, neither of the spay surgeries described is a tubal ligation.

  12. Windriven – I have often thought similar things on the comparable cost of veterinary care to human care. Here’s what I came up with.

    Firstly, while we may think the safety measures used on puppy surgery and anesthesia are adequate, do we consider them equally adequate for people? What are the comparable rates of death or life altering consequences from similar surgery between people and dogs?*

Secondly, what part of the cost of a procedure is administrative? Payment methods are substantially different, and the U.S. human medical payment system is particularly expensive in terms of administrative hours.

Thirdly, the cost of malpractice insurance, lawsuits, and possibly defensive care prescribed to prevent lawsuits in human medical care may have some part in the discrepancy of costs. Also, does the cost of educating a vet differ from that of educating a human doctor?

Fourthly, costs of overhead, building rental, maintenance, and cleaning differ. Humans require a larger crate space. The risk of infections spreading between canine patients and human staff is lower, but still requires attention.

Fifthly, the actual difference between species and how care is used. When my dog was spayed she was a puppy with no other health conditions. She recovered quite well from anesthesia and appeared to be uncomfortable for about 24 hours; after that it was difficult to keep her from jumping around. When my mother had a hysterectomy due to uterine cancer she was 62, required IV pain medication, and was in the hospital for, I believe, four or more days (this was almost 20 years ago). She required help to move around for days after. In some ways we humans are a more delicate species than dogs. :)

So, I might assume that much of the discrepancy in cost is due to the safety factor as well as the market value of human health care, but I cannot really make such an argument until I have done something to account for the other factors. Who knows, maybe someone has done such a study. :)

*One of my cats was ill and had a temperature of 107. I asked the vet if such a high fever can cause brain damage in a cat, as it does in a human. The vet rather carefully explained that a cat could conceivably experience cognitive damage due to a fever, but since we generally don’t require a high level of cognitive skills from our pets, that risk is not as problematic as it is in humans.

  13. S.C. former shruggie says:

    Following those links can induce acute headache. The CAM-friendly or overly disbelief-suspending EBM links are prepared to split hairs, overlooking that the hairs are on an invisible unicorn, and that’s entirely the wrong animal.

Meanwhile, I read in my local newspaper that we’re certifying “Angel Therapists” in this city. The story’s got the bulk of the front page, the photo, and an entire page 4 to itself. Complete with the conversion stories of a former “skeptic” and of the journalist. Insert broad swipe against journalists (and apologies to Brian Deer) here.

Junk science and bogus treatments continue to metastasize. Perhaps an RCT could assess the ability of EBM to prevent woo metastases?

  14. BillyJoe says:

    Excuse my ignorance.
(And at least part of this should have been obvious, shouldn’t it?)
    (And my brother is a vet!)

    De-sexing comes in two forms: spaying and castration.
    (I was thinking of sterilisation as performed on humans but, in animals, sterilisation is achieved by means of de-sexing)

    Spaying is the removal of both ovaries and part of the uterus.
    (the cervix is left to act as a natural barrier)
    Castration is the removal of both testicles.

    But to add a bit of confusion:

    In cats and young dogs, spaying may involve the removal of only the ovaries, leaving the uterus in situ.
There is also “neutering”, which often refers to male animals only but can also refer to both male and female animals.
And of course, then there’s “gelding”, which is de-sexing/castration/neutering/sterilising specifically of male horses.

    Carry on…

  15. pmoran says:

Kimball, perhaps some further insight into what bothers me?

    We probably have differing main interests within a confused mixture of shared concerns.

We both wish to protect the integrity of medical science and scientific processes. We would like to stop people coming to harm from dangerous “alternative” methods. We would like to have more influence upon those liable to develop unrealistic expectations of “alternatives” as compared to what conventional medicine offers. Oh, and we would like to stop the exploitation of the sick by out-and-out fraud.

While science is a common thread and a guiding light through these four, for each the battlefield is different, commanding differing strategies.

    I suspect your main interest is the first of the above, the integrity of science. So your arena is academia. Academics should be able to “get” the prior plausibility thing.

    My main concern is the third one, mainly cancer quackery, and I fear that nothing you have said about investigating CAM is very helpful for that. The argument is too circular. If you know in advance that CAM methods are useless then the wastage and ethical problems and the harm from investigating them are obvious.

But my arena is full of people who don’t know this with sufficient certainty, and who are desperate enough to try very low prior plausibility methods even if they do have risks and have the potential to make them more miserable. This is why I am not prepared to close the book on certain kinds of clinical study, just as you are happy with some kinds of basic research into foundational CAM principles.

  16. BillyJoe says:

    pmoran,

    I think you and Dr. Atwood actually have a difference of opinion about the same thing.

    You think certain types of clinical studies of implausible alternative treatments can persuade the public that they are useless. Dr. Atwood believes that they do not persuade the public at all. I suppose what we need to know is what the evidence shows.

But people who use alternatives tend not to care about science and clinical trials. They believe there is another reality out there. For them, it’s all touchy/feely/spiritual rather than cold/hard/methodical science.

  17. BillyJoe,

    My vet prescribed chondroitin and glucosamine for my dog. I looked online, noted that it was considered worth paying for by European formularies but rejected as useless in the US. Whatever. I got some and gave it to my dog. Later a study was reviewed on SBM that confirmed that no, it really did nothing. So I kept giving the stuff to my dog until the bottle was empty and didn’t buy any more.

Studies may not do anything for people who believe in Therapeutic Touch. But they do something for me.

  18. windriven says:

    @BillyJoe

Perhaps in Australia. My (female) Lab got a full hysterectomy as her spay at a cost of a little under USD 400. I am about to have a laparoscopic right hernia repair on an outpatient basis in Portland, OR. I will pay my entire $5000 deductible. Not sure what the total bill will be. Blue Shield will pay the overage.

  19. windriven says:

    @michele

    “Firstly, while we may think the safety measures used on puppy surgery and anesthesia are adequate, do we consider them equally adequate for people? What are the comparable rates of death or life altering consequences from similar surgery between people and dogs?*”

When my middle daughter was 1, she hung near death with what proved to be a rotavirus. As part of the effort to diagnose her they did a gastroscopy. I actually brought a pulse oximeter for them to use, as one was not available. This was less than 20 years ago.

    My dog had basic hemodynamic monitoring during her procedure. I’m not suggesting that human and veterinary surgical protocols should be the same. But I do question the differential cost of care that often runs to orders of magnitude.

  20. desta says:

“Humans require a larger crate space.”

    @ micheleinmichigan:

    Some do, some just need to learn how to fold up better.
    ;)

  21. pmoran says:

    Billyjoe: You think certain types of clinical studies of implausible alternative treatments can persuade the public that they are useless. Dr. Atwood believes that they do not persuade the public at all. I suppose what we need to know is what the evidence shows.

    Yes, ideally. Many alt.cancer methods have declined markedly in popularity following negative clinical studies, but of course they tend to come in and out of fashion naturally anyway.

    I thought shark cartilage dropped out of sight remarkably abruptly after a rapid succession of negative studies, and it is difficult not to attribute the fall from grace of the Di Bella and Holt treatments to much-awaited, highly publicised negative evaluations.

    Also, for Pete’s sake, if negative clinical studies don’t have any effect on the public mind, the debunkings of scientists definitely won’t, so what are we here for? Let’s all close up shop and go home.

    We certainly need to be selective in what we investigate and smart in how we investigate it. Kimball goes part of the way towards what I have in mind with his approval of testing of what he refers to as the “sine qua non” elements of CAM theory. I have suggested a simple study which goes some way towards answering the question “how reliable is the cancer testimonial?”.

  22. anoopbal says:

I am guessing that if EBM is “working”, homeopathy should have been debunked by now. Why are they still funding studies when reviews and meta-analyses show not much benefit?

Haven’t the meta-analyses and reviews which looked at homeopathy come out with negative results? If yes, they shouldn’t be funding them anymore.

  23. BillyJoe says:

    Alison,

    Perhaps I should have said “seek out and use” :)

  24. BillyJoe says:

    Windriven,

Did the vet categorically state that your Lab had a “total hysterectomy”, or are you assuming that is the case? Did he say “total hysterectomy” or just “hysterectomy” – which, in this case, probably means “subtotal hysterectomy” (which leaves the cervix behind)? Scroll down to figure 24 in the following link:

    http://www.pet-informed-veterinary-advice-online.com/spaying-procedure.html

    “One or more hemostats are clamped across the uterine body, below the level of the uterine horns and just above the level of the cervix (the cervix is a sphincter-like muscle band located further down the uterine body, which forms a physical barrier between the abdominally-located uterus and the pelvically-located vagina)”

  25. BillyJoe says:

    A hint as to why a hysterectomy in a cat is so cheap.

This vet does it in 3 minutes, or 7 minutes from the start of the anesthetic.
(I guess a dog might take a little longer.)

    http://www.youtube.com/watch?v=WfumFyEz0WY

desta, did I mention my dog weighs 9 pounds? :) But I will say that the kennels our vet has for large dogs being kept for observation (the ones that allow them to roam about some) are only slightly smaller than the space we had in my son’s hospital room after his last surgery. :)

Windriven – I hope your daughter recovered well. Also, I did not mean to suggest you shouldn’t question the difference, only that I have wondered the same thing and was ruminating on some of the reasonable differences so that I might isolate what part of the price difference is “what the market will bear” or profit motivated.

  27. daedalus2u says:

    Michelle, I think there is a large component of “what the market will bear” in the cost of health care. I think that component of the cost is mostly extracted by the insurance companies by virtue of their (mostly) monopoly power in dealing with patients and the providers of health care.

    Health care delivery is a value-added chain, where each link in the chain needs to be able to sustain itself or the chain breaks.

    Why do insurance companies consume 1/3 of health care spending as administrative costs and profit? Because they can.

I think some portion of the prices charged to people who pay is used to cover the costs of people who don’t pay. Vets may have some of this, but I think not to the degree that hospitals do.

  29. Peter,

    I was thinking along the same lines as your “differing main interests.” I’d argue, however, that my main interest is not so much to protect the integrity of medical science for its own sake, but to blow the whistle when medical institutions and scientists contribute to people developing unrealistic expectations or being harmed or defrauded by ‘alternative’ methods. I think that there is plenty of evidence that this has happened in this era of ‘tolerance’ for quackery, and that although EBM wasn’t the major cause (that’d be politics), it has certainly been one of the major ‘enablers’. The editorial from the AHJ quoted in the post is an example. So is the experience of Susan Gurney’s friend:

    He was an artist—a painter and a sculptor—and he had little scientific knowledge. When Dr. Chabot was neutral about the Gonzalez protocol, and when Dr. Antman said nothing adverse about it, my friend assumed that they must genuinely believe that the treatment could work.

    And:

    We had [had] many conversations about treatment options…but the Gonzalez protocol quickly overwhelmed him; first by being impossibly time-consuming and then by being so physically debilitating. Had he realized this in early April, he would have had a real chance to examine his options. But once the decision was made to begin the Gonzalez protocol, with the apparent support of those involved in his care at Columbia Presbyterian, he became committed to it.

    By remaining neutral about the Gonzalez regimen, physicians at Columbia Presbyterian who place patients in this trial effectively preclude them from starting other options, because of the demands it places on patients and their families. If physicians believe they are truly being neutral by not fully explaining the Gonzalez protocol’s nature to cancer patients, it is they who are in denial.

    And:

    Ms. Gurney later wrote me about something she had not included in her article: that James Gordon, the chairman of the White House Commission on Complementary and Alternative Medicine Policy at the time, had “paid a personal visit to my friend’s house – and praised the Gonzalez protocol to him.”

    Here was a guy who, it seems, could have been dissuaded from being tortured by Gonzalez if only real biomedical scientists had had the integrity to tell him the truth. But no: remember the roles of Straus and Karen Antman (chief of Columbia’s division of medical oncology and a past president of the American Society of Clinical Oncology), discussed here.

    Regarding the circular argument, ie, “If you know in advance that CAM methods are useless then the wastage and ethical problems and the harm from investigating them are obvious”: the harm may be obvious to you and me and many readers here, but not so obvious to Antman and Straus and the pediatricians that I mentioned in the post above, or even to Ernst until recently. That’s the whole point of this series: to demonstrate how a recurrent misuse of EBM leads to wastage and ethical problems and harm (and unrealistic expectations).

    A point that I need to clarify for several commenters, by the way, is that this post is about claims for which, in Steve Simon’s words, “there is no scientifically valid reason to believe that (they could) work.” Thus I’m not talking about glucosamine/chondroitin sulfate, for example. I do believe that investigating some relatively plausible claims, however small their prior probabilities, can be useful for the fence-sitting public. I even suggested this with regard to G/CS several years before the results were published. But those investigations still must be done safely and ethically.

Chelation, incidentally, still satisfied the “relatively plausible” criterion in 1960, but shortly after that it was found to be useless, and it was already known to be dangerous. By the time the TACT began there was a fairly voluminous additional literature confirming that chelation is both useless and dangerous, so the TACT was entirely unjustified by reasonable, ethical criteria.

  30. BillyJoe says:

    pmoran,

You get the impression that alternative treatments tend to fade when clinical trials show they are ineffective. On the other hand, I get the impression that many alternative treatments fall out of favour when patients realise they don’t work. I also get the impression that they then try the next alternative treatment that comes along. Psoriasis is a good example. There’s always another miracle cure around the corner. People seek out and use alternatives. They seem blind to the fact that they don’t work and are seemingly never discouraged from seeking out and using alternatives in the future.

    Who is correct? Well, I have the impression that I am. :D

Then there’s the enduring alternatives like chiropractic, acupuncture, and homoeopathy. They are like unsinkable rubber ducks. No matter how often they are debunked, they endure. And every few months there’s yet another trial that shows they work. Of course, when you look, the trial is either methodologically flawed or doesn’t say what the authors say it says. “Acupuncture is effective in preventing migraine”, but you find that it is no more effective than placebo (sham acupuncture). Some reports even stated “Acupuncture and sham acupuncture equally effective in preventing migraine” instead of “Acupuncture no more effective than placebo”.

Frankly, I despair.

  31. BillyJoe says:

    Kimball Atwood,

    my main interest is … to blow the whistle when medical institutions and scientists contribute to people developing unrealistic expectations or being harmed or defrauded by ‘alternative’ methods.

A group of oncologists in Australia recently did a survey that found that 65% of cancer patients use alternative therapies and that 80% believe they work “even when there was little evidence to back up their use”. They then suggested that “it may be reasonable to offer CAM within the hospital environment so its use can be monitored and patients can receive more evidence-based care”.

  32. pmoran says:

    Billyjoe, don’t confuse the resilience of the promoters of “rubber ducks”, such as Ullman with homeopathy, and the ongoing belief of some users, with “what the public thinks”.

Methods such as homeopathy and acupuncture are rarely ever used by themselves for serious diseases (in our culture). This suggests that the general public is not as stupid as we may be led to think it is from our observation of certain extremes. Nor will they be as immune to reason, including evidence from clinical studies, as we may think.

We sceptics often seem to think that we have “won” only when everyone sings our tune. Realistically, we are successful if we can keep a few people from harm.

WRT your other point, I have sometimes wondered whether the constant informal testing of alt methods within alt.med somehow throws up an impression of their overall ineffectiveness.

    This would help explain why methods can go out of fashion within a decade or so, why anything new is so grasped at, and why serious users can find themselves using so many different methods at once. I came across one fellow who was taking 27 different supplements and supposed therapeutic agents, many of which were once claimed to be able to cure cancer on their own.

  33. pmoran says:

Kimball: “I’d argue, however, that my main interest is not so much to protect the integrity of medical science for its own sake, but to blow the whistle when medical institutions and scientists contribute to people developing unrealistic expectations or being harmed or defrauded by ‘alternative’ methods. I think that there is plenty of evidence that this has happened in this era of ‘tolerance’ for quackery, and that although EBM wasn’t the major cause (that’d be politics), it has certainly been one of the major ‘enablers’”

    I guess I am hoping to find a middle way, one that recognises that CAM is also, in part, an inevitable and healthy societal reaction to unmet medical needs, and that “we know best” is a proven failure in eliminating it.

    In other words, can the risks of CAM be minimised by focussing on salient, immediate risks?

    Who really cares if a lot of people want to use homeopathic pillules when they have a cold? I don’t, and I know that it doesn’t follow that they will therefore rely upon equivalent nonsense if they get a serious disease. Considering the barrage of misinformation that they are exposed to, the public shows, overall, considerable discrimination.

    Some of the problem areas are understandable, as when a tiny, painless breast lump triggers a formidable succession of interventions, including multiple surgical procedures, radiotherapy, chemotherapy, and probably hormonal deprivation, with it being obvious to all that many go through all this and STILL die of their cancers.

I know my task is not easy, because the target public itself has conflicting attitudes. On the whole they respect science, and do value its opinion upon dubious methods. But at the same time they don’t want to be bound by it. They quite reasonably want the freedom to try unlikely methods even if they have a low chance of working. Deep human survival needs and unrelieved distress dictate that they should do so.

  34. windriven says:

    @BillyJoe

    I’ll ask the vet and report her response. But whether or not the cervix remains does not justify the differential. Again, I am not suggesting that spaying and hysterectomy should be valued the same. They shouldn’t.

    But science based medicine does not exist in a vacuum. The differences between EBM and SBM that Dr. Atwood highlights in this series are very important. But whatever the modality, there will be a cost and that cost will and should be measured against results – and against the cost of other services and goods.

  35. Joe says:

@BillyJoe on 12 Dec 2010 at 3:08 pm wrote “You get the impression that alternative treatments tend to fade when clinical trials show they are ineffective. On the other hand, I get the impression that many alternative treatments fall out of favour when patients realise they don’t work.”

    Actually, although some people abandon a particular AM, the schlock does not disappear. Somebody (I think Dr. Lipson) polled (a few years ago) asking which particular sCAMs have disappeared, and I don’t think anybody (including me) ever came up with one. Some (e.g., laetrile and the Gerson diet) have been moved out of the USA; but they have not vanished.

In response to the poll (by Lipson?), I suggested that having goat testes implanted in a man to treat “male problems” must be gone. But somebody replied with a reference to a place where a man can go to have that procedure today.

    The bottom line is that I don’t think any (many?) forms of quackery go out of business. Some customers may turn away; but many continue to believe even when they know they are inevitably dying.

  36. daedalus2u says:

    EBM can only look at the difference between two treatments.

    The only way to kill a CAM treatment via EBM is to test that CAM treatment against something better.

    That something better needs to be sufficiently better that the statistics clearly show that the CAM treatment is inferior. That is not going to happen by comparing CAM to doing nothing because a placebo treatment is better than nothing.

For most of the conditions that CAM is used for, somatoform disorders, I am pretty sure my bacteria will work as that better treatment.

  37. Who really cares if a lot of people want to use homeopathic pillules when they have a cold? I don’t, and I know that it doesn’t follow that they will therefore rely upon equivalent nonsense if they get a serious disease. Considering the barrage of misinformation that they are exposed to, the public shows, overall, considerable discrimination.

    I mostly agree with this, and to the extent that I don’t it doesn’t have much to do with the topic of this post. It’s more in the context of “it’s too bad we don’t teach science better in grade school” and “it’s too bad that hucksters will always make a living merely because ‘there’s a sucker born every minute’.”

    My point in this post and in much of what I’ve written on SBM is that I don’t think that the bastions of academic medicine should be even the slightest part of the reason that some people want to use homeopathic pillules when they have a cold. And yet, without question, academic medicine is now an important part of the reason, because it has held crackpot homeopaths out to the public as legitimate physician/scientists (Jonas, Jacobs, Fisher, Reilly, etc.), has portrayed homeopathy rags as legitimate scientific journals, has openly promoted buffoons such as Ullman and various NDs, has staged numerous, pointless, sometimes dangerous (see above) homeopathy trials, and has refused (with occasional exceptions—three cheers to the British Doctors and Edzard Ernst, finally) to summarily and publicly reject homeopathy in spite of the overwhelming evidence against it (witness Cochrane, my pediatrician friend, the NIH, shruggies, etc.).

    We know vastly more than O.W. Holmes Sr. knew when he publicly and eloquently debunked homeopathy, and our added knowledge makes it all the more abundantly clear that he was correct. Yet, for some reason, we’ve recently become embarrassed or ashamed to tell the truth.

    I’m embarrassed for my profession.

  38. BillyJoe says:

    pmoran,

    “Who really cares if a lot of people want to use homeopathic pillules when they have a cold? I don’t…”

    I do. People should know what it is they are taking. They should know what homoeopathy is and they should be able to recognise it as the nonsense that it is. And they should refuse to spend time and money on these magic potions.

There are those who profit from their ignorance. And there are governments who do not care that the people whom they were elected to protect are being defrauded. And the world’s a little less rational and a little less scientific every time someone buys a homoeopathic product. And the peddlers of this magic and nonsense win with each purchase of their potions.

  39. BillyJoe says:

    Joe,

I said “fall out of favour”, not “disappear”.

  40. vicki says:

    CAM is also sometimes used by people who don’t have a lot of faith in it, but are desperate.

    I have a relative with chronic pain. Her doctor is doing her best with a variety of medications (both NSAIDs and opiates), but she’s still in a lot of pain. So, she is considering a new diet her doctor has suggested on the theory that a lot of her problems are due to inflammation. She’s being cautious, because she’s worried about possible disordered eating, but she’s prepared to try eating more of a bunch of things like sunflower seeds and certain vegetables. Not because she thinks it’s likely, but because it might help, and pharmaceuticals, stretching, and icing aren’t doing enough.

Werdna writes: “It occurs to me that if, in general, our null hypothesis (H0) is ‘nothing’s happening’ and our beta is the usual 20%, then doing an RCT under those conditions is clearly a mistake when our a priori P(H0) < 20%.”

I'd like to see the math on this. As I understand it, deciding whether an experiment is worthwhile would have to consider both the probabilities and the costs of Type I and Type II errors and the costs of conducting the experiment. I suspect that your calculations involve showing something along the lines of "a positive finding is more likely to be a false positive than a true positive", which is not an outrageous statement, but to me it just says that you needed to adjust your alpha and/or beta levels.

    You also ignore the possibility that half the world might assign a prior probability of one in a trillion and the other half might assign a prior probability of one in two, but that's a separate issue.

    Steve Simon, http://www.pmean.com
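Prof. Simon's point, that whether an experiment is worthwhile depends on the probabilities and the costs of both error types plus the cost of the trial itself, can be made concrete with a toy decision calculation. Every benefit and cost figure below is invented purely for illustration:

```python
# Hypothetical decision sketch: all benefit/cost figures are invented.
def trial_value(prior, alpha=0.05, beta=0.20,
                benefit_true_pos=100.0,  # payoff of correctly adopting a real treatment
                cost_false_pos=50.0,     # harm of adopting a useless one
                cost_trial=10.0):        # cost of running the study itself
    """Expected net value of running the trial, given a prior P(effect is real)."""
    p_true_pos = (1 - beta) * prior      # real effect, detected
    p_false_pos = alpha * (1 - prior)    # no real effect, false alarm
    return p_true_pos * benefit_true_pos - p_false_pos * cost_false_pos - cost_trial

for prior in (0.5, 0.2, 0.01):
    print(f"prior = {prior}: expected value = {trial_value(prior):+.2f}")
```

With these made-up numbers the expected value turns negative somewhere between a prior of 0.2 and 0.01; shifting any cost assumption moves that break-even point, which is part of why the choice of priors and costs matters.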

  42. tommyhj says:

Reminds me… When I was a kid I thought I could fly if only I flailed something above my head sufficiently vigorously. I wanted to try it with a stuffed mouse I had, holding its tail. My sister, a few years older than me, insisted that it wouldn’t work and that I would hurt myself.

    But I didn’t trust that her knowledge of the world was good enough to dismiss my brilliant theory – indeed, how would she know if she hadn’t tried it? Nobody was that smart, and I REALLY wanted to fly! So I tried it, fell down, and hurt myself.

    True story, that.

  43. daedalus2u says:

Stephen, two competent clinicians can’t look at the same data, apply the same logic to it, and have one of them come up with a one-in-a-trillion prior probability and the other one in two.

    Either one or both of them are grossly incompetent, or they are making stuff up or ignoring data. There is simply no way in SBM for that to happen.

  44. JMB says:

    You also ignore the possibility that half the world might assign a prior probability of one in a trillion and the other half might assign a prior probability of one in two, but that’s a separate issue.

    Arriving at an a priori probability is not a democratic process. There has to be some selection of people whose opinions will be considered in arriving at a subjective prior probability. In the application described by Dr Atwood, the plausibility of homeopathy could be calculated as a simple average of the estimates of anyone who holds a Bachelor of Science in physics or chemistry. Since the resulting plausibility estimate would be so low (.00001), even if the consensus were off by one or even two orders of magnitude, it would still fall far below any reasonable threshold for justifying the allocation of tax dollars. If, on the other hand, the decision required enough accuracy to differentiate between .2 and .25, that would be very difficult: no selection/qualification process for the people contributing to the estimate would be adequate. Because of this limited accuracy of expert opinion on prior probability, the threshold for justifying further investigation is set very low. Even that threshold, however, is not low enough to justify further investigation of homeopathy.

  45. Werdna says:

    “you needed to adjust your alpha and/or beta levels.”

    However, as far as the medical research I read goes, these are pretty much standardized: rarely do I see a medical study with an alpha other than 5%. So, even assuming perfection in all other respects, a lab that tests only something impossible – say, homeopathic interventions – will still put out a positive result for homeopathy 1 time in 20. Compound that with journals and/or labs that settle for small-n studies – which are considerably cheaper and easier to run – and top it off with publication bias, and it’s not hard to see how you can end up pumping out more bad information than good.

    I suspect this will boil down to the is/ought confusion that you had with Gorski over EBM. But go on… surprise me.

    “You also ignore the possibility that half the world might assign a prior probability of one in a trillion and the other half might assign a prior probability of one in two, but that’s a separate issue.”

    True, it is a separate issue, and one that you are probably oversimplifying, but you would have to be more than a little thick to assume that I ignored its problematic nature, especially when I specifically mentioned that “Granted a priori probability can be difficult and sometimes impossible (or infeasible) to bound.”
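(Werdna’s compounding argument lends itself to a back-of-the-envelope sketch. Every number below, the study count and the publication rates, is a hypothetical assumption; only alpha = 0.05 comes from the comment.)

```python
# Hypothetical sketch: a field that runs studies only on null effects
# (homeopathy-style interventions), with publication bias layered on top.
# All counts and publication rates below are illustrative assumptions.

alpha = 0.05        # conventional significance threshold
n_studies = 1000    # studies run, all on effects that do not exist
pub_positive = 0.9  # chance a "significant" result gets published
pub_negative = 0.1  # chance a null result gets published

published_false = n_studies * alpha * pub_positive        # 45 spurious "effects"
published_null = n_studies * (1 - alpha) * pub_negative   # 95 honest nulls

share = published_false / (published_false + published_null)
print(f"{share:.0%} of the published record reports a nonexistent effect")
```

Even with every individual study run honestly at alpha = 0.05, roughly a third of this hypothetical published literature would claim an effect that is not there, which is Werdna’s “more bad information than good” mechanism in miniature.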

  46. JMB says:

    If you want to predict whether an experimental result is reproducible, p-values are useful.

    If you want to predict whether a patient will have better health as a result of an intervention, you must analyze both the published scientific data and the patient to arrive at your best prediction. Analysis of the experimental design and patient-selection criteria matters more than the p-value when predicting how the patient will respond.

    Implausibility implies that even with a low p-value and low chances of Type I and Type II error, the experimental result will not be implemented in medical practice; it could only be used to justify further research. Furthermore, implausibility implies that no further research is justified when any data (even a p-value of .20) fail to reject the null hypothesis.
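(JMB’s point can be restated in Bayesian terms. The Bayes factor of 3, a generous value for a marginally significant result, and the priors below are illustrative assumptions of mine, not numbers from the comment.)

```python
# Sketch: how much a single "significant" study moves the prior.
# A Bayes factor of 3 in favor of the effect is assumed as a generous
# stand-in for a marginal p ~ 0.05 result; the priors are illustrative.

def posterior(prior, bayes_factor):
    """Posterior P(effect) after evidence with the given Bayes factor."""
    odds = (prior / (1 - prior)) * bayes_factor
    return odds / (1 + odds)

print(posterior(0.5, 3))    # plausible drug: posterior 0.75
print(posterior(1e-5, 3))   # homeopathy-grade prior: still ~0.00003
```

On these assumptions, the posterior for a highly implausible claim stays vanishingly small even after a positive trial, which is why such a result could at most justify further research rather than a change in medical practice.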

  47. anoopbal says:

    For me the biggest concern is: even after doing all these meta-analyses and studies, why can’t EBM reject treatments like acupuncture?

    So is EBM working? What should prompt researchers to give up on acupuncture? A meta-analysis in JAMA?

  48. Dr Benway says:

    Dr. Atwood and friends, please look at this: How to spot and handle suppression in medicine. Reminds one of Mr. Bolen, eh?

    Someone named Jake recently posted in favor of Dr. Gonzales on Orac’s blog: “I’ve spoken with 2 people that have been on his regimen and both have nothing but good to say about him. I’ve spoken to Ralph Moss about him and he admitted that his records show exceptional success with cancer treatment.”

    Ralph Moss is co-author of this bit of alt med history: An alternative approach to allergies.

  49. Moss is one of the original “Harkinites,” discussed here and here. Also an old pal of Gonzo.

Comments are closed.