Posts Tagged Gonzalez Regimen

Of SBM and EBM Redux. Part II: Is It a Good Idea to Test Highly Implausible Health Claims?

This is the second post in a series* prompted by an essay by statistician Stephen Simon, who argued that Evidence-Based Medicine (EBM) is not lacking in the ways that we at Science-Based Medicine have argued. David Gorski responded here, and Prof. Simon responded to Dr. Gorski here. Between that response and the comments following Dr. Gorski’s post it became clear to me that a new round of discussion would be worth the effort.

Part I of this series provided ample evidence for EBM’s “scientific blind spot”: the EBM Levels of Evidence scheme and EBM’s most conspicuous exponents consistently fail to consider all of the evidence relevant to efficacy claims, choosing instead to rely almost exclusively on randomized, controlled trials (RCTs). The several quoted Cochrane abstracts, regarding homeopathy and Laetrile, suggest that in the EBM lexicon, “evidence” and “RCTs” are almost synonymous. Yet basic science or preliminary clinical studies provide evidence sufficient to refute some health claims (e.g., homeopathy and Laetrile), particularly those emanating from the social movement known by the euphemism “CAM.”

It’s remarkable to consider just how unremarkable that last sentence ought to be. EBM’s founders understood the proper role of the rigorous clinical trial: to be the final arbiter of any claim that had already demonstrated promise by all other criteria—basic science, animal studies, legitimate case series, small controlled trials, “expert opinion,” whatever (but not inexpert opinion). EBM’s founders knew that such pieces of evidence, promising though they may be, are insufficient because they “routinely lead to false positive conclusions about efficacy.” They must have assumed, even if they felt no need to articulate it, that claims lacking such promise were not part of the discussion. Nevertheless, the obvious point was somehow lost in the subsequent formalization of EBM methods, and seems to have been entirely forgotten just when it ought to have resurfaced: during the conception of the Center for Evidence-Based Medicine’s Introduction to Evidence-Based Complementary Medicine.

Thus, in 2000, the American Heart Journal (AHJ) could publish an unchallenged editorial arguing that Na2EDTA chelation “therapy” could not be ruled out as efficacious for atherosclerotic cardiovascular disease because it hadn’t yet been subjected to any large RCTs—never mind that there had been several small ones, and abundant additional evidence from basic science, case studies, and legal documents, all demonstrating that the treatment is both useless and dangerous. The well-powered RCT had somehow been transformed, for practical purposes, from the final arbiter of efficacy to the only arbiter. If preliminary evidence was no longer to have practical consequences, why bother with it at all? This was surely an example of what Prof. Simon calls “Poorly Implemented Evidence Based Medicine,” but one that was also implemented by the very EBM experts who ought to have recognized the fallacy.

There will be more evidence for these assertions as we proceed, but the main thrust of Part II is to begin to respond to this statement from Prof. Simon: “There is some societal value in testing therapies that are in wide use, even though there is no scientifically valid reason to believe that those therapies work.”

Posted in: Chiropractic, Clinical Trials, Energy Medicine, Health Fraud, History, Homeopathy, Medical Academia, Medical Ethics, Naturopathy, Politics and Regulation, Science and Medicine

Evidence-Based Medicine, Human Studies Ethics, and the ‘Gonzalez Regimen’: a Disappointing Editorial in the Journal of Clinical Oncology Part 1

Background: the distinction between EBM and SBM

An important theme on the Science-Based Medicine blog, and the very reason for its name, has been its emphasis on examining all the evidence—not merely the results of clinical trials—for various claims, particularly for those that are implausible. We’ve discussed the distinction between Science-Based Medicine (SBM) and the more limited Evidence-Based Medicine (EBM) several times, for example here (I began my own discussion here and added a bit of formality here, here, and here). Let me summarize by quoting John Ioannidis:

…the probability that a research finding is indeed true depends on the prior probability of it being true (before doing the study), the statistical power of the study, and the level of statistical significance.

EBM, in a nutshell, ignores prior probability† (unless there is no other available evidence) and falls for the “p-value fallacy”; SBM does not. Please don’t bicker about this if you haven’t read the links above and some of their own references, particularly the EBM Levels of Evidence scheme and two articles by Steven Goodman (here and here). Also, note that it is not necessary to agree with Ioannidis that “most published research findings are false” to agree with his assertion, quoted above, about what determines the probability that a research finding is true.
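Ioannidis's point can be made concrete with simple arithmetic. The probability that a "statistically significant" result reflects a true effect depends on the prior probability, the study's power, and the significance threshold. The sketch below (illustrative numbers only, not drawn from any particular trial; the function name and the priors are my own assumptions) shows why a p < 0.05 finding for a highly implausible claim such as homeopathy tells us almost nothing:

```python
def post_study_probability(prior, power=0.8, alpha=0.05):
    """Probability that a statistically significant finding is true,
    given the prior probability that the hypothesis is true.

    Among many hypotheses tested, true effects yield significance at
    rate `power`; false ones at rate `alpha` (the false-positive rate).
    """
    true_positives = power * prior
    false_positives = alpha * (1 - prior)
    return true_positives / (true_positives + false_positives)

# A reasonably plausible hypothesis (prior ~ 0.5):
# a significant result is fairly convincing.
print(round(post_study_probability(0.5), 2))    # 0.94

# A wildly implausible claim (prior ~ 0.001):
# even a "positive" RCT leaves the claim almost certainly false.
print(round(post_study_probability(0.001), 2))  # 0.02
```

This is why ignoring prior probability, as the EBM Levels of Evidence scheme effectively does, invites false-positive conclusions about implausible treatments.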

The distinction between SBM and EBM has important implications for medical practice ethics, research ethics, human subject protections, allocation of scarce resources, epistemology in health care, public perceptions of medical knowledge and of the health professions, and more. EBM, as practiced in the 20 years of its formal existence, is poorly equipped to evaluate implausible claims because it fails to acknowledge that even if scientific plausibility is not sufficient to establish the validity of a new treatment, it is necessary for doing so.

Thus, in their recent foray into applying the tools of EBM to implausible health claims, government and academic investigators have made at least two serious mistakes: first, they have subjected unwary subjects to dangerous but unnecessary trials in a quest for “evidence,” failing to realize that definitive evidence already exists; second, they have been largely incapable of pronouncing ineffective methods ineffective. At best, even after conducting predictably disconfirming trials of vanishingly unlikely claims, they have declared such methods merely “unproven,” almost always urging “further research.” That may be the proper EBM response, but it is a far cry from reality. As I opined a couple of years ago, the founders of the EBM movement apparently “never saw ‘CAM’ coming.”

Posted in: Cancer, Clinical Trials, Medical Academia, Medical Ethics, Politics and Regulation, Science and Medicine
