One of my favorite television shows right now is The Knick, which I’ve described before in a post about medical history. To give you an idea of how much I’m into The Knick, I’ll tell you that I signed up for Cinemax for three months just for that one show. (After its second season finale airs next Friday, I’ll drop Cinemax until next fall.) The reason I’m bringing up The Knick (besides the fact that I love the show and need to bring it up at least once a year) is that an article by Malcolm Gladwell in The New Yorker entitled “Tough Medicine,” a commentary based on a new book on cancer by a veritable god of cancer research, Vincent T. DeVita, Jr., immediately resonated with a storyline in this season of The Knick. I haven’t yet read The Death of Cancer: After Fifty Years on the Front Lines of Medicine, a Pioneering Oncologist Reveals Why the War on Cancer Is Winnable–and How We Can Get There by Vincent T. DeVita and Elizabeth DeVita-Raeburn, but I want to. I can tell just from Gladwell’s take on it, though, that there will be parts of the book I find annoying; Gladwell approvingly describes DeVita as railing against the cautiousness and incremental nature of today’s cancer research. To give you an idea of where Gladwell is coming from, I note that his article shows up in the title bar of my web browser not as “Tough Medicine” but rather “How To Cure Cancer,” even as the title on the web page itself remains “Tough Medicine.” On the other hand, the article concludes with Gladwell demonstrating a better understanding of the disadvantages of what DeVita is proposing than it seems he will at the beginning. In fact, it is Gladwell who is more reasonable than his subject, although he does appear to share DeVita’s apparent assumption that potentially all cancer patients are savable if only we try hard enough.
I am formally requesting that Cancer retract an article claiming that psychotherapy delays recurrence and extends survival time for breast cancer patients. Regardless of whether I succeed in getting a retraction, I hope I will prompt other efforts to retract such articles. My letter appears later in this post.
In seeking retraction, I cite the retraction standards of the Committee on Publication Ethics (COPE). The claims in the article are not borne out by simple analyses that should have been provided in the article but were not. The authors instead took refuge in inappropriate multivariate analyses that have a high likelihood of being spurious and of capitalizing on chance.
The article exemplifies a much larger problem. Claims about innovative cancer treatments are often unsubstantiated, hyped, lacking a plausible mechanism, or simply voodoo science. We don’t have to go to dubious websites to find evidence of this. All we have to do is search the peer-reviewed literature with Google Scholar or PubMed. Try looking up therapeutic touch (TT).
In another blog post, I uncovered unsubstantiated claims and implausible mechanisms that persisted after peer review in the respected, high-impact-factor (JIF = 18.03) Journal of Clinical Oncology. We obviously cannot depend on the peer-review process to filter out this misinformation. The Science-Based Medicine blog provides tools and cultivates skepticism not only in laypersons but in professionals, including, hopefully, reviewers, who seem to have deficiencies in both. However, we need to be alert to opportunities not just to educate, but to directly challenge and remove bad science from the literature.
Evidence-Based Medicine, Human Studies Ethics, and the ‘Gonzalez Regimen’: a Disappointing Editorial in the Journal of Clinical Oncology Part 1
Background: the distinction between EBM and SBM
An important theme on the Science-Based Medicine blog, and the very reason for its name, has been its emphasis on examining all the evidence—not merely the results of clinical trials—for various claims, particularly for those that are implausible. We’ve discussed the distinction between Science-Based Medicine (SBM) and the more limited Evidence-Based Medicine (EBM) several times, for example here (I began my own discussion here and added a bit of formality here, here, and here). Let me summarize by quoting John Ioannidis:
…the probability that a research finding is indeed true depends on the prior probability of it being true (before doing the study), the statistical power of the study, and the level of statistical significance.
EBM, in a nutshell, ignores prior probability† (unless there is no other available evidence) and falls for the “p-value fallacy”; SBM does not. Please don’t bicker about this if you haven’t read the links above and some of their own references, particularly the EBM Levels of Evidence scheme and two articles by Steven Goodman (here and here). Also, note that it is not necessary to agree with Ioannidis that “most published research findings are false” to agree with his assertion, quoted above, about what determines the probability that a research finding is true.
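Ioannidis’s point can be made concrete. In the essay quoted above he formalizes it as a positive predictive value (PPV): writing R for the pre-study odds that the tested relationship is true, 1 − β for the statistical power, and α for the significance level, the probability that a claimed positive finding is actually true is (this is a sketch using his notation; the worked number below is my own illustrative example, not one from his paper):

```latex
% Ioannidis's positive predictive value (PPV) of a claimed finding:
%   R       = pre-study odds that the tested relationship is true
%   1-\beta = statistical power;  \alpha = significance level
\[
  \mathrm{PPV} \;=\; \frac{(1-\beta)\,R}{(1-\beta)\,R + \alpha}
\]
% Illustrative example for a highly implausible claim:
% R = 0.01, power 0.8, alpha = 0.05 gives
% PPV = 0.008 / (0.008 + 0.05) \approx 0.14,
% so even a "statistically significant" positive trial leaves the
% claim far more likely false than true.
```

This is exactly why prior probability matters: for an implausible treatment, a p < 0.05 result moves the needle far less than EBM’s evidence hierarchies suggest.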
The distinction between SBM and EBM has important implications for medical practice ethics, research ethics, human subject protections, allocation of scarce resources, epistemology in health care, public perceptions of medical knowledge and of the health professions, and more. EBM, as practiced in the 20 years of its formal existence, is poorly equipped to evaluate implausible claims because it fails to acknowledge that even if scientific plausibility is not sufficient to establish the validity of a new treatment, it is necessary for doing so.
Thus, in their recent foray into applying the tools of EBM to implausible health claims, government and academic investigators have made at least two serious mistakes: first, they have subjected unwary subjects to dangerous but unnecessary trials in a quest for “evidence,” failing to realize that definitive evidence already exists; second, they have been largely incapable of pronouncing ineffective methods ineffective. At best, even after conducting predictably disconfirming trials of vanishingly unlikely claims, they have declared such methods merely “unproven,” almost always urging “further research.” That may be the proper EBM response, but it is a far cry from reality. As I opined a couple of years ago, the founders of the EBM movement apparently “never saw ‘CAM’ coming.”