Articles

Answering a criticism of science-based medicine

Attacks on science-based medicine (SBM) come in many forms. There are the loony forms that we see daily from the anti-vaccine movement, quackery promoters like Mike Adams and Joe Mercola, those who engage in “quackademic medicine,” and postmodernists who view science as “just another narrative,” as valid as any other, or who even view science- and evidence-based medicine as “microfascism.” Sometimes these complaints come from self-proclaimed champions of evidence-based medicine (EBM) who, their self-characterization notwithstanding, show signs of having a bit of a soft spot for the ol’ woo. Then sometimes there are thoughtful, serious criticisms of some of the assumptions that underlie SBM.

The criticism I am about to address tries to be one of these but ultimately fails because it attacks a straw man version of SBM.

True, the criticism of SBM I’m about to address does come from someone named Steve Simon, who vocally supports EBM but doesn’t like the criticism of EBM implicit in the very creation of the concept of SBM. Simon has even written a very good deconstruction of postmodern attacks on EBM himself, as well as quite a few other good discussions of medicine and statistics. Unfortunately, in his criticism, Simon appears to have completely missed the point about the difference between SBM and EBM. As a result, his criticisms of SBM wind up being mostly the application of a flamethrower to a Burning Man-sized straw man representing what he thinks SBM to be. It makes for a fun fireworks show but is ultimately misdirected: a lot of heat but little light. For a bit of background, Simon’s post first piqued my curiosity because of its title, Is there something better than Evidence Based Medicine out there? The other reason it caught my attention was the extreme naiveté revealed in its arguments. In fact, Simon’s naiveté reminds me very much of my own naiveté about three years ago.

Here’s the point where I tell you a secret about the very creation of this blog. Shortly after Steve Novella invited me to join, the founding members of SBM engaged in several frank and free-wheeling e-mail exchanges about what the blog should be like, what topics we wanted to cover, and what our philosophy should be. One of these exchanges was about the very nature of SBM and how it is distinguished from EBM, the latter of which I then viewed as the best way to practice medicine. During that exchange, I made arguments that, in retrospect, were eerily similar to the ones by Simon that I’m about to address right now. Oh, how epic these arguments were! In retrospect, I can but shake my head at my own extreme naiveté, which I now see mirrored in Simon’s criticism of SBM. Yes, I was converted, so to speak (if you’ll forgive the religious terminology), which is why I see in Simon’s article a lot of my former self, at least in terms of how I used to view evidence in medicine.

The main gist of Simon’s complaint comes right at the beginning of his article:

Someone asked me about a claim made on an interesting blog, Science Based Medicine. The blog claims that Science Based Medicine (SBM), that tries to draw a distinction between that practice and Evidence Based Medicine (EBM). SBM is better because “EBM, in a nutshell, ignores prior probability (unless there is no other available evidence) and falls for the p-value fallacy; SBM does not.” Here’s what I wrote.

No. The gist of the science based medicine blog appears to be that we should not encourage research into medical therapies that have no plausible scientific mechanism. That’s quite a different message, in my opinion, than the message promoted by the p-value fallacy article by Goodman.

First off, Simon’s complaint makes me wonder if he actually read Dr. Atwood’s entire post. To show you what I mean, I present here the whole quote from Dr. Atwood in context:

EBM, in a nutshell, ignores prior probability† (unless there is no other available evidence) and falls for the “p-value fallacy”; SBM does not. Please don’t bicker about this if you haven’t read the links above and some of their own references, particularly the EBM Levels of Evidence scheme and two articles by Steven Goodman (here and here). Also, note that it is not necessary to agree with Ioannidis that “most published research findings are false” to agree with his assertion, quoted above, about what determines the probability that a research finding is true.

Simon, unfortunately, decides to bicker. In doing so, he builds a massive straw man. I’m going to jump ahead to the passage that most reveals Simon’s extreme naiveté:

No thoughtful practitioner of EBM, to my knowledge, has suggested that EBM ignore scientific mechanisms.

Talk about a “no true Scotsman” fallacy!

You know, about three years ago I can recall writing almost exactly the same thing in the aforementioned epic e-mail exchange arguing the very nature of EBM versus SBM. The problem, of course, is not that EBM completely ignores scientific mechanisms. That would be every bit as much of a straw man characterization of EBM as the characterization that Simon skewered of EBM being only about randomized clinical trials (RCTs). The problem with EBM is, rather, that it ranks basic science principles on either the very lowest rung or the second-lowest rung of the various hierarchies of evidence that EBM promulgates as the way to evaluate the reliability of scientific evidence in deciding which therapies work. The most well-known of these is that published by the Centre for Evidence-Based Medicine, but there are others. Eddie Lang, for instance, places basic research second from the bottom, just above the anecdotal clinical experience of the sort favored by Dr. Jay Gordon (see Figure 2). Duke University doesn’t even really mention basic science; rather, it appears to lump it together at the very bottom of the evidence pyramid under “background information.” When I first started to appreciate the difference between EBM and SBM, I basically had to be dragged, kicking and screaming, by Steve and Kimball to look at these charts and realize that, yes, in the formal hierarchies of evidence used by the major centers for EBM, basic science and plausible scientific mechanisms do rank at or near the bottom. I didn’t want to accept that it was true. I really didn’t. I didn’t want to believe that SBM is not synonymous with EBM, which it would be in an ideal world. Simon apparently doesn’t either:

Everybody seems to criticize EBM for an exclusive reliance on randomized clinical trials (RCTs). The blog uses the term “methodolatry” in this context. A group of nurses who advocate a post-modern philosophical approach to medical care also criticized EBM and used an even stronger term, micro-fascism, to describe the tendency of EBM to rely exclusively on RCTs.

But I have not seen any serious evidence of EBM relying exclusively on RCTs. That’s certainly not what David Sackett was proposing in the 1996 BMJ editorial “Evidence based medicine: what it is and what it isn’t”. Trish Greenhalgh elaborates quite clearly in her book “How to Read a Paper: The Basics of Evidence Based Medicine” that EBM is much more than relying on the best clinical trial. There is, perhaps, too great a tendency for EBM proponents to rely on checklists, but that is an understandable and forgivable excess.

I must admit to considerable puzzlement here. EBM lists randomized clinical trials (RCTs) and meta-analyses or systematic reviews of RCTs as the highest form of evidence, yet Simon says he sees no serious evidence of EBM relying exclusively on RCTs. I suppose that’s true in a trivial sort of way, given that there are conditions and questions for which there are few or no good RCTs. When that is the case, one has no option but to rely on “lower” forms of evidence. However, the impetus behind EBM is to use RCTs wherever possible in order to decide which therapies are best. If that weren’t true, why elevate RCTs to the very top of the evidence hierarchy? Simon is basically misstating the complaint anyway. We do not criticize EBM for an “exclusive” reliance on RCTs but rather for an overreliance on RCTs devoid of scientific context.

Simon then decides to try to turn the charge of “methodolatry,” or, as revere once famously called it, the profane worship of the randomized clinical trial as the only valid method of investigation, against us. This misinterpretation of what SBM is leads Simon, after having accused SBM of leveling straw man attacks against EBM, to build up that aforementioned Burning Man-sized straw man himself, which he then lights on fire with gusto:

I would argue further that it is a form of methodolatry to insist on a plausible scientific mechanism as a pre-requisite for ANY research for a medical intervention. It should be a strong consideration, but we need to remember that many medical discoveries preceded the identification of a plausible scientific mechanism.

While this is mostly true, one might point out that, once the mechanisms behind such discoveries were identified, all of them had a degree of plausibility in that they did not require the overthrow of huge swaths of well-settled science in order to be accepted as valid. Let’s take the example of homeopathy. I use homeopathy a lot because it is, quite literally, water and because its proposed mechanism of action goes against huge swaths of science that has been well-characterized for centuries. I’m not just talking one scientific discipline, either. For homeopathy to be true, much of what we currently understand about physics, chemistry, and biology would have to be, as I am wont to say, not just wrong, but spectacularly wrong. That is more than just lacking prior plausibility. It’s about as close to being impossible as one can imagine in science. Now, I suppose there is a possibility that scientists could be spectacularly wrong about so much settled science at once. If they are, however, it would take compelling evidence on the order of the mass of evidence that supports the impossibility of homeopathy to make that possibility worth taking seriously. Extraordinary claims require extraordinary evidence. RCTs showing barely statistically significant effects do not constitute extraordinary evidence, given that chance alone guarantees that some RCTs will be positive even in the absence of any real effect, and given the biases and deficiencies present even in RCTs. Kimball explains this concept quite well:

When this sort of evidence [the abundant basic science evidence demonstrating homeopathy to be incredibly implausible] is weighed against the equivocal clinical trial literature, it is abundantly clear that homeopathic “remedies” have no specific, biological effects. Yet EBM relegates such evidence to “Level 5”: the lowest in the scheme. How persuasive is the evidence that EBM dismisses? The “infinitesimals” claim alone is the equivalent of a proposal for a perpetual motion machine. The same medical academics who call for more studies of homeopathy would be embarrassed, one hopes, to be found insisting upon “studies” of perpetual motion machines. Basic chemistry is still a prerequisite for medical school, as far as I’m aware.

Yes, Simon is indeed tearing down a straw man. As Kimball himself would no doubt agree, even the most hardcore SBM aficionado does not insist on a plausible scientific mechanism as a “pre-requisite” for “ANY” research, as Simon claims. Rather, what we insist on is that the range of potential mechanisms proposed not require breaking the laws of physics, or, if it does, that there be highly compelling evidence that the therapy under study actually has some sort of effect sufficient to make us doubt our understanding of the biology involved.
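The earlier point that chance alone will guarantee some positive RCTs even in the absence of an effect is easy to make concrete. Here is a minimal simulation (my own illustrative sketch; the 5% threshold is the conventional one, and the trial count is an arbitrary choice):

```python
import random

random.seed(0)  # reproducible illustration

def run_null_trials(n_trials, alpha=0.05):
    """Simulate trials of a therapy with no real effect.

    Under the null hypothesis, p-values are uniformly distributed on
    (0, 1), so a trial counts as "positive" whenever its p-value
    happens to fall below the significance threshold alpha.
    """
    positives = 0
    for _ in range(n_trials):
        p_value = random.random()  # uniform(0, 1) under the null
        if p_value < alpha:
            positives += 1
    return positives

positives = run_null_trials(10_000)
print(positives)  # roughly 500: about 1 in 20 null trials is "significant"
```

In other words, run enough trials of pure water and a handful of “statistically significant” results is not merely possible but guaranteed, which is why a few marginally positive homeopathy RCTs count for so little against the basic science.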

Simon then appeals to there being some sort of “societal value” to test interventions that are widely used in society even when those interventions have no plausible mechanism. I might agree with him, except for two considerations. First, no amount of studies will convince, for example, homeopaths that homeopathy doesn’t work. Witness Dana Ullman if you don’t believe me. Second, research funds are scarce and likely to become even more so over the next few years. From a societal perspective, it’s very hard to justify allocating scarce research dollars to the study of incredibly implausible therapies like homeopathy, reiki, or therapeutic touch. (After all, reiki is nothing more than faith healing based on Eastern mystic religious beliefs rather than Christianity.) Given that, for the foreseeable future, research funding will be a zero sum game, it would be incredibly irresponsible to allocate funds to studies of magic and fairy dust like homeopathy, knowing that those are funds that won’t be going to treatment modalities that might actually work.

When it all comes down to it, I think that Simon is, as I was, in denial. When confronted with the whole concept of SBM compared to EBM, I denied what I didn’t want to believe. To me, it seemed so utterly obvious that the scientific plausibility of the hypothesis under study has to be taken into account in evaluating the evidence. I just couldn’t imagine that any system of evaluating evidence could be otherwise; it made no sense to me. So I imposed this common-sense view on EBM, and I rather suspect that many other advocates of EBM like Simon labor under the same delusion I did. The problem is, though, that critics of EBM are basically correct on this score. Still, realizing it and admitting it did not come easy. For me to accept that EBM had a blind spot when it came to basic science, it took having my face rubbed in unethical and scientifically dubious trials like that of the Gonzalez therapy for pancreatic cancer or chelation therapy for cardiovascular disease. Let’s put it this way. A willingness to waste money studying something that is nothing but water, and whose “scientific basis” is a hypothesis equivalent to claiming that a perpetual motion machine can be constructed, tells me that under EBM basic science counts for close to nothing. Ditto wasting money on studying a therapy whose major component is coffee enemas used to treat a deadly cancer. Simon cheekily suggests at the end of his post that “maybe we should distinguish between EBM and PIEBM (poorly Implemented Evidence Based Medicine).” The problem is, trials of therapies like the Gonzalez regimen, homeopathy, and reiki are a feature of, not a bug in, EBM. In fact, I challenge Simon to provide a rationale, under EBM as it is currently constituted, to justify not having to do a clinical trial of these therapies. There is none.

I realize that others have said it before here (and probably said it better than I), but we at SBM are not hostile to EBM at all. Rather, we view EBM as incomplete, a subset of SBM. It’s also too easily corrupted to provide an air of scientific legitimacy to fairy dust like homeopathy and reiki. These problems, we argue, can be ameliorated by expanding EBM into SBM. Personally, I suspect that the originators of EBM never thought of the possibility of EBM being applied to hypotheses as awe-inspiringly implausible as those of CAM, just as I once didn’t (and, I suspect, Simon still doesn’t). It simply never occurred to them; they probably assumed that any hypothesis that reaches a clinical trial stage must have good preclinical (i.e., basic science) evidence to support its efficacy. But we know now that this isn’t the case. I can’t speak for everyone else here, but, after agreeing with Kimball that EBM ought to be synonymous with SBM, I also express the hope that one day there will be no distinction between SBM and EBM. Unfortunately, we aren’t there yet.

NOTE: There will be one more post later today; so don’t go away just yet.

Posted in: Clinical Trials, Medical Academia, Science and Medicine


56 thoughts on “Answering a criticism of science-based medicine”

  1. moderation says:

    As a physician who had previosly seen EBM as the penultimate way to approach the practice of medicine before finding my way to this blog, I believe this quote from your last paragraph is the clearest and most succinct statement of why I now find EBM incomplete that I have seen:

    “I suspect that the originators of EBM … never thought of the possibility of EBM being applied to hypotheses as awe-inspiringly implausible as those of CAM. It simply never occurred to them; they probably assumed that any hypothesis that reaches a clinical trial stage must have good preclinical (i.e., basic science) evidence to support its efficacy.”

  2. twaza says:

    David

    I think that your rant devalues the SBM brand.

    It would be helpful if you could focus more on issues than individuals, and if you could check your facts before you criticise.

    For example, you say “EBM lists randomized clinical trials (RCTs) and meta-analyses or systematic reviews of RCTs as being the highest form of evidence”. This may be true of some individuals, but in the world I work in, the problems with simplistic hierarchies of evidence are widely understood.

  3. @ twaza:

    Surely you clicked on Dr. Gorski’s links to formal EBM evidence hierarchies (that way you can check his facts). To give you the benefit of the doubt, you seem to be doing what Simon and many others have done: assume that since you and your friends have enough common sense to place EBM in the appropriate scientific context, everybody does. Which simply isn’t true, as exemplified by Cochrane itself, by the Center for EBM’s “Introduction to CAM,” and much more. Please read these links in the post above for examples:

    Yes, Jacqueline: EBM ought to be Synonymous with SBM

    Homeopathy and Evidence-Based Medicine: Back to the Future Part V

  4. David Gorski says:

    Thanks, Kim. I do find it rather frustrating to be accused of “not checking my facts” by someone who appears not to have clicked on the links I used to back up my assertions. Ditto being accused of “focusing on the individual.” It would appear that to twaza, “focusing on the individual” = “answering fallacious arguments an individual makes.” In any case, there were no ad hominems here. As for this post being a “rant,” I find that criticism puzzling too. Oh, don’t get me wrong. I’ve written rants before for SBM. This just isn’t one of them.

    I think you’re right, though. twaza seems to have fallen into exactly the same thinking that Steve Simon has and that I used to, assuming that the misuse of EBM by CAMsters is simply a bug in the system, the result of people misusing EBM. As you’ve shown, such trials are a feature, not a bug, in EBM. Let’s put it this way. Perhaps twaza could educate me by, using the EBM hierarchies of evidence, providing me a rationale how we can reject homeopathy as not working without actually doing clinical trials. If EBM is as twaza says, it should be child’s play, don’t you think?

  5. moderation: “a physician who had previosly seen EBM as the penultimate way to approach the practice of medicine before finding my way to this blog”

    “Penultimate” means “second-to-last.” As in, “When pronouncing ‘Tanzania,’ emphasis is on the penultimate syllable.”

  6. “Burning Man-sized straw man.” Love it.

  7. Scott says:

    I actually disagree with the analogy of homeopathy being like a perpetual motion machine. It goes further than that. I’d say that a perpetual motion machine would be like saying that dilution has no effect on the remedy. In both cases, a fundamental law of nature (the first law of thermodynamics/the law of mass action) is being disregarded.

    Homeopathy is more like taking a perpetual motion machine and stipulating that energy may be extracted from it indefinitely. Not just disregarding the fundamental law of nature, but expressly reversing it.

  8. windriven says:

    I must say that prior to this post the difference between EBM and SBM was not starkly clear in my mind. I tended to see EBM as holding an unseemly embrace with anecdote. Dr. Gorski’s explication gave me a better appreciation of EBM and a clear understanding of the line that separates it from SBM.

    I thought the last paragraph summed up the post perfectly, saying, in effect, that SBM is what EBM should have been. The difference is that SBM erects guard rails of scientific plausibility while EBM, without the strictures of scientific rigor, can be pushed effortlessly into the weeds by passing fashions and passions.

  9. rork says:

    If SBM is so great, it will be able to offer specific non-fuzzy alterations into how EBM operates, and be able to get those implemented, since surely all will agree with a change for the better. So what’s the problem folks? I think it’s somewhere around “specific non-fuzzy”, but you’d know better.
    Group priors – have you thought about it much?

  10. If SBM is so great, it will be able to offer specific non-fuzzy alterations into how EBM operates, and be able to get those implemented, since surely all will agree with a change for the better. So what’s the problem folks? I think it’s somewhere around “specific non-fuzzy”, but you’d know better.
    Group priors – have you thought about it much?

    Yes:

    Prior Probability: The Dirty Little Secret of “Evidence-Based Alternative Medicine”

    Prior Probability: the Dirty Little Secret of “Evidence-Based Alternative Medicine”—Continued

    Prior Probability: the Dirty Little Secret of “Evidence-Based Alternative Medicine”—Continued Again

    More to the point is that heavy hitters in biostatistics have also thought about it and have been pulling for it for well over a decade (see the Goodman papers linked from those posts), but have had little success other than lip service from a couple of editors. Our only contribution (for the most part ignored) has been to point to highly implausible claims as the most blatant illustrations that “alterations” in EBM are needed.

    What’s the problem, ie, why hasn’t a “change for the better” been accepted? Several things, some mundane, some less so. One of the mundane: most physicians, reviewers, editors, NIH scientists, and the rest tend to be conservative (with a small “c”). It ain’t easy overthrowing such an entrenched habit as frequentist inference, especially if you believe that it works (see below).

    Less mundane: very few of the relevant people understand that they are fooling themselves with p values, confidence intervals, and the rest. They think, because they have been taught this, that these parameters can provide wholly objective evidence for the question being studied. They simply won’t accept, even when it is explained to them in a way that they are perfectly capable of following, that this cannot be true: it is a logical fallacy resulting from the confusion of deductive and inductive reasoning.

    Intimately related to this is an understandable distaste for injecting a “subjective” element into conclusions about experiments, which is why Bayesian inference is so widely frowned upon. Ironically, most physicians believe that the subjective aspect of Bayes is itself a bias of the worst kind. What they don’t understand is that such a subjective element exists whether they like it or not, even in the frequentist world, but it would be better to pull it out of the closet where it can be examined by everyone. They also can’t imagine that the sky won’t fall when that happens, one reason being that it isn’t necessary to specify “non-fuzzy” priors; it’s only necessary to find out how any prior—skeptical or generous—will be altered by the data of this particular experiment. (I agree with you about “specific non-fuzzy” being a barrier to this change: it’s a common misunderstanding about using Bayes).

    Goodman discusses these misunderstandings, and also makes the point that Bayes’ Factor—an entirely objective term, derived just from the current data—is a better measure of the strength of those data than is the p value, which almost always overestimates deviation from the null (which, it should be reiterated, is illogical in the first place, because the p value assumes the null to be true).
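    To make Goodman’s comparison concrete, here is a small numerical sketch (my own illustration; the priors chosen are arbitrary stand-ins, not Goodman’s numbers). His minimum Bayes Factor for a normal test statistic z, exp(−z²/2), is the strongest support the data could possibly lend against the null; even so, it cannot rescue a hypothesis that starts out sufficiently implausible:

```python
import math

def min_bayes_factor(z):
    # Goodman's minimum Bayes factor, P(data|null)/P(data|alternative),
    # evaluated at its most favorable to the alternative hypothesis.
    return math.exp(-z * z / 2.0)

def posterior_prob(prior_prob, z):
    # Convert the prior probability to odds, update by the (inverted)
    # Bayes factor, and convert back to a probability.
    prior_odds = prior_prob / (1.0 - prior_prob)
    posterior_odds = prior_odds / min_bayes_factor(z)
    return posterior_odds / (1.0 + posterior_odds)

z = 1.96  # the z-score corresponding to a two-sided p-value of 0.05

# A plausible therapy (prior 50%) vs. a homeopathy-like one (prior 0.1%):
print(round(posterior_prob(0.5, z), 3))    # -> 0.872
print(round(posterior_prob(0.001, z), 4))  # -> 0.0068
```

    Even reading the data as charitably as possible, a p-value of 0.05 lifts a 1-in-1,000 prior to well under 1%: the “significant” trial of the implausible therapy remains almost certainly a false positive, while the very same result makes the plausible therapy a reasonably good bet.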

    Another good discussion of why we shouldn’t be afraid of priors is by Sander Greenland, here.

  11. David Gorski says:

    Damn. Kimball beat me to responding, and did so in far more detail than I probably could have.

    As an aside, I must point out that I find the implication that we haven’t thought through the issue of prior probability to be mildly annoying, particularly since we have been thinking and writing about these issues from before the very beginning. Personally, though, I tend to take a somewhat different view than Kimball in that I tend to emphasize prior improbability. In other words, if a therapy, for it to work, requires the breaking of multiple well-established scientific laws in multiple disciplines (as homeopathy does), then I think how to deal with it is rather obvious. When it comes to therapies not so implausible (herbs, for instance), I tend to give more of the benefit of the doubt. Yes, I know that’s fuzzy, but consider it a philosophical underpinning to how to deal with the hard analytic techniques described by Kimball.

    Hmmm. Maybe I haven’t quite shaken off my prior allegiance to EBM after all. :-)

  12. Makes me think of a completely different discussion.

    Ancient Greek citizens (that is, men) recognized different things they could get from their relationships with women. It went something like this:
    1) a prostitute: great sex.
    2) a concubine: companionship and stimulating conversation.
    3) a wife: legitimate children.

    The two basic ways we read this today are what I will metaphorically call the EBM and SBM ways.

    EBMers recognize a hierarchy. It looks like being a wife must have really sucked back then because so little was expected of you: you were really just a breeding cow.

    SBMers recognize a hierarchy. It looks like the expectations an ancient Greek married couple had of their relationship was not that different from modern expectations. They would look to their relationship for great sex, companionship, stimulating conversation, and the raising of their children. Prostitutes, then as now, did their job and were off the hook for entertaining you or raising your children.

    That is, there is a way to interpret the hierarchy where the “lower” function is discarded in favour of the “higher” function. A wife is better than a prostitute because legitimate children — which only a wife can provide — are more important than getting your rocks off — which is what a prostitute specializes in. Likewise, RTCs are better than anecdote or hypothesizing because only RTCs can tell you whether something actually works.

    There is also a way to interpret the hierarchy where the “higher” function is added on to the “lower” function. A wife is better than a prostitute because she can give you everything you could want from a relationship with a woman whereas a prostitute can only give you one thing. Likewise, large-scale RTCs are better than theoretically-based hypotheses because they give you everything: by the time a large-scale RTC is conducted, the question being tested has come up through the ranks with anecdote, theoretical support and small trials.

    An RCT with no other support would be like a loveless marriage — completely pointless. If you don’t have sex, you can be married three ways from Sunday and there will still be no legitimate children. If there is no theoretical grounding, you can have the biggest RCT you like, there will still be no useful results.

  13. Ahem. RTC above should read RCT.

  14. nybgrus says:

    As usual, an excellent post. I often find myself falling into exactly the trap EBM has slipped into – an assumption that people will apply (I am tempted to say ‘common sense’ here, but really, it just isn’t so common anymore) intellectual honesty to their questions and endeavors – or at least to the ones that really matter. But this is sadly not the case and often the exact opposite of the truth.

    And as a plug, I began laughing while reading Orac’s post (http://scienceblogs.com/insolence/2010/10/a_fallacy-laden_attack_on_science-based_medicine.php) and when my girlfriend asked why I told her it was the reference to a “flamethrower of sarcasm and jackboots of science.” So she doodled a depiction of that which makes me laugh and smile. Perhaps we could fit Dr. Gorski with a set of boots and a flamethrower as well:

    http://1337percent.tumblr.com/post/1508292102/let-me-introduce-my-boyfriend-dr-drey-purveyor

  15. qetzal says:

    rork writes:

    If SBM is so great, it will be able to offer specific non-fuzzy alterations into how EBM operates….

    At the most basic level, I think the authors here have done that repeatedly. E.g., in this post Dr. Gorski wrote:

    We do not criticize EBM for an “exclusive” reliance on RCTs but rather for an overreliance on RCTs devoid of scientific context.

    In other words, the specific alternative is to evaluate RCTs within a proper scientific context. Perhaps you’ll consider that too fuzzy, but it would be simple to implement in some very basic yet highly beneficial ways. At minimum, EBM rubrics like the ones linked in the post should be modified to always evaluate clinical evidence in light of scientific plausibility, and to dramatically downgrade any clinical evidence that contradicts well-established, fundamental science. Authors of EBM publications should be required to explicitly discuss the scientific plausibility of the intervention in question.

    More specific alterations will doubtless be needed. E.g., what should happen if intermediate-strength clinical evidence is apparently contradicted by intermediate-strength non-clinical evidence? Things like that will need further consideration. But for now, simply requiring some explicit consideration of the basic science would go a long way. All IMO, of course.

  16. So I’m naive and in denial. I love it! At least you characterized my writing as TRYING to be a “thoughtful, serious criticism.”

    I also liked the characterization of being a “self-appointed champion” of EBM. So far I’ve been wildly unsuccessful in getting anyone else to appoint me to any role within the EBM community, so self-appointment is my only option. I think of self-appointment as a referral from the only authority who truly understands what is going on.

    I do appreciate your comments (seriously!) and will try better to make my writing more thoughtful and serious. It is not easy to write well.

    In particular, I am as harshly critical of the hierarchy of evidence as anyone. I see this as something that will self-correct over time, and I see people within EBM working both formally and informally to replace the rigid hierarchy with something that places each research study in context. I’m staying with EBM because I believe that people who practice EBM thoughtfully do consider mechanisms carefully. That includes the Cochrane Collaboration.

    Is that naivete and denial? We can each accumulate dueling anecdotes of when EBM proponents get it right or when they get it wrong, but I doubt that there will ever be any solid empirical evidence to adjudicate the controversy. Without such evidence, we’ll be forever stuck accusing the other side of being too naive or too cynical.

    You see EBM as being wrong often enough that you see value in creating a new label, SBM. I see SBM as being that portion of EBM that is being done thoughtfully and carefully, and don’t see the need for a new label.

    There’s a group trying to replace the term “evidence based medicine” with “value based medicine” and I see the same problems here. In my experience, people who practice EBM thoughtfully do incorporate patient values into the equation, but others want to create a new label that emphasizes something they see lacking overall in the term “evidence based medicine.”

    But I’m still confused about the Bayesian argument you are making on this site. I can imagine one Bayesian placing randomized trials at the top of the hierarchy of evidence and I can imagine another Bayesian rejecting any research that requires going “against huge swaths of science that has been well-characterized for centuries.” I can even imagine a Bayesian having “a bit of a soft spot for the ol’ woo.” In each case, the Bayesians would incorporate their (possibly wrong-headed) beliefs into their prior distribution.

    I see the argument about Bayesian versus p-values as orthogonal to the arguments about SBM versus EBM. Am I missing something?

    Steve Simon, http://www.pmean.com

  17. David Gorski says:

    We actually agree on a lot of things–more things than perhaps we disagree on. My harping on the naivete issue was intended to point out that I used to think very similarly to the way you do–until my face was mashed repeatedly into the problems of EBM when it comes to CAM. You remind me of me three or four years ago. It was meant to point out that I know where you’re coming from. Ditto the part about being in denial. Obviously it failed. Oh, well.

    That being said, I see no evidence, at least not if Cochrane Reviews are any indication, that the Cochrane Collaboration is taking that “thoughtful consideration of mechanisms” you tout and moving it from mere consideration into actual incorporation into their systematic reviews. Cochrane still values RCTs above all other forms of evidence and, when doing systematic reviews of homeopathy, concludes that “more study is needed.” Be that as it may, I’d love to be made aware of specifics. Perhaps you could provide a couple of examples that illustrate your point, in particular of physicians within the EBM thought leadership (e.g., in the Cochrane Collaboration, at Duke University, or at the CEBM) working to “replace the rigid [EBM] hierarchy with something that places each research study in context.” My tendency towards snarkiness and bombast aside, I never claimed to have the be-all and end-all answer. Perhaps I’m hopelessly out of touch, not having encountered the Cochrane Collaboration making a serious attempt to incorporate the thoughtful consideration of mechanisms into its reviews. I would be happy to be shown to be mistaken on that score. Perhaps I’ve also never come across EBM leaders working to replace the rigid EBM hierarchy with something that places each study in context. I would be happy to be shown to be in error on that score, too.

    P.S. By the way, that “self-proclaimed champions of evidence-based medicine” bit was not referring to you, as should be clear in context and even clearer if you click on the link in that passage.

  18. rork says:

    Forgive further skepticism about how to obtain priors for public decision making, since the answers weren’t easy to spot.

    I am a card-carrying Bayesian of the fist-fighting kind (since the opposition notoriously won’t take up calls for gambling to keep score about whose theory is better), which requires my prior to obtain “personal probabilities” and make any kind of decision. It is how people (should!) make decisions, but that is individual people. They can still reach smart or stupid decisions depending on the stupidity of their prior; the theory does not save them there. They are merely saved from incoherence in the technical sense, but never saved from other forms of “crazy”. To restate, I am not arguing in favor of frequentist methods, nor do I think that priors are bad, so beating those views up does not prove a thing for me, except that “we perceive a problem”, which I agree with.

    “it’s only necessary to find out how any prior—skeptical or generous—will be altered by the data of this particular experiment”. That’s just determining the Bayes factor, and that doesn’t solve where the priors come from at all; it instead avoids the question. Perhaps that observation is annoying. It annoys me. Group decision problems have been hard questions since I was young, and that’s been a long time.
    I think it may have to go something like accepting the priors from a group of “experts” and doing something with that, but I think that may bring howls from certain quarters, and I am not sure I’ve ever seen it proposed in any formal terms, and it might be terribly messy to implement.

    I very much agree there is a problem, and that we can talk and write about what the problem is and point to failures, but I don’t agree anyone has solved the problem, though they may write like they have, or think they have. It can be argued that in certain examples most people’s priors should be so low that the evidence hardly overrides them, but those are just examples, and they do not tell me how to make general operational methods, which is what I imagined I wanted. Perhaps I just failed to review sufficiently.
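The odds-form bookkeeping behind this exchange is worth making concrete. The following Python sketch is my own illustration, not anything from the thread: the Bayes factor and both priors are invented numbers, chosen only to show that the same trial data leaves a skeptical and a generous prior in very different places.

```python
def posterior_prob(prior_prob, bayes_factor):
    """Posterior probability via the odds form: posterior odds = prior odds * BF."""
    prior_odds = prior_prob / (1.0 - prior_prob)
    post_odds = prior_odds * bayes_factor
    return post_odds / (1.0 + post_odds)

bf = 3.0  # hypothetical: a modestly "positive" trial, summarized as one Bayes factor

skeptic = posterior_prob(1e-6, bf)   # prior near zero, e.g. homeopathy
generous = posterior_prob(0.5, bf)   # agnostic prior

print(skeptic)   # still essentially zero
print(generous)  # 0.75: the same data now favors the treatment
```

The Bayes factor is identical in both calls; only the prior differs. That is rork's point: computing the Bayes factor "avoids the question" of where the priors come from in the first place.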

  19. mark says:

    I find it very hard to understand the difference between SBM and EBM, despite being a long-time reader and having read many of the posts on the topic. Could you at some point critique the 1996 BMJ editorial by Sackett to show what the difference is between his views on EBM and SBM? I can’t find such a critique on the website.

    Can I also ask what your interpretation would be if several large RCTs that were well conducted and adequately powered showed clear evidence of a clinically relevant effect that was a) scientifically implausible or b) for which there was no scientific evidence?

  20. splicer says:

    “Personally, I suspect that the originators of EBM…”

    Excellent explanation clarifying SBM and EBM. Here is a link to a 2006 article in BusinessWeek about Dr. David Eddy, who apparently coined the EBM moniker. He seems to have a commercial venture that is doing mathematical health care modeling using RCTs.

    http://www.businessweek.com/magazine/content/06_22/b3986001.htm

  21. ConspicuousCarl says:

    > Scott on 08 Nov 2010 at 9:38 am
    >
    > I actually disagree with the analogy of homeopathy being
    > like a perpetual motion machine. [....] Homeopathy is
    > more like taking a perpetual motion machine and stipulating
    > that energy may be extracted from it indefinitely.

    Indeed, homeopathy is more like a scam “energy machine” (More recently, these scams have picked up the word-theft strategy and misused “zero point energy” as their new name).

    Just a few years ago, some Irish (I think) gang made the same old claim, and the most shocking thing happened: a bunch of reporters actually showed up for it.

  22. pmoran says:

    I consult Cochrane a lot and see no indication that it takes the plausibility of treatment methods into account.

    I have copied below the final summaries of several reviews of homeopathic methods demonstrating this.

    OTOH, I am not sure what we are arguing about if Simon also sees the need for the replacement of a “rigid hierarchy with something that places each research study in context.”

    Is that not what we want? Isn’t “prior plausibility” merely a technical way of describing the application of “all relevant evidence” to any medical question?

    It is not even relevant to most mainstream clinical research. Then again, many clinical questions do not require any formal research at all e.g. “should complete bowel obstruction be relieved”? How can any general rules apply?

    The Cochrane summaries –

    There is currently little evidence for the efficacy of homeopathy for the treatment of ADHD. Development of optimal treatment protocols is recommended prior to further randomised controlled trials being undertaken.

    There is not enough evidence to reliably assess the possible role of homeopathy in asthma. As well as randomised trials, there is a need for observational data to document the different methods of homeopathic prescribing and how patients respond. This will help to establish to what extent people respond to a ‘package of care’ rather than the homeopathic intervention alone.

    In view of the absence of evidence it is not possible to comment on the use of homeopathy in treating dementia. The extent of homeopathic prescribing for people with dementia is not clear and so it is difficult to comment on the importance of conducting trials in this area.

  23. Oh boy. Too much material to keep going on a comments thread, for me, anyway. I’ll post something on Friday that responds to S. Simon, mark, and maybe others. In the meantime, please know that every point brought up here has been discussed on SBM. For example, mark may look at this post, in particular the 3-4 paragraphs beginning with “It wasn’t supposed to be like this,” for a quotation from Sackett’s 1996 BMJ editorial that is pertinent to this issue. The “one” in the phrase “at least one insisted…,” by the way, was Dr. Gorski—as he has reported here.

  24. pmoran says:

    Damn! The gap after “absence of evidence — ” should contain, in parentheses, “i.e. no studies meeting the search criteria — PJM”.

  25. BillyJoe says:

    ““Penultimate” means “second-to-last.” As in, “When pronouncing ‘Tanzania,’ emphasis is on the penultimate syllable.””

    It was just a slip of the…um…”pen”.

  26. Hey there BillyJoe! We’ve been missing you.

  27. Scott says:

    It sounds to me like the key question is what happens when RCTs and basic science disagree. EBM apparently says that the RCTs trump the analysis; one RCT of homeopathy showing a statistically significant effect is enough to render the basic science arguing against it irrelevant. SBM tries to consider the relative strength of the two lines of evidence; pile up enough basic science against equivocal RCTs and the basic science wins.

    Would that be a fair description?

  28. mark says:

    Thanks for your reply Kimball.

    The problem I have distinguishing SBM from EBM is that Sackett’s description of EBM does seem to give basic science an important role.

    “Evidence based medicine is the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients. The practice of evidence based medicine means integrating individual clinical expertise with the best available external clinical evidence from systematic research. By individual clinical expertise we mean the proficiency and judgment that individual clinicians acquire through clinical experience and clinical practice…
    By best available external clinical evidence we mean clinically relevant research, often from the basic sciences of medicine, but especially from patient centred clinical research into the accuracy and precision of diagnostic tests (including the clinical examination), the power of prognostic markers, and the efficacy and safety of therapeutic, rehabilitative, and preventive regimens….”

  29. Chris says:

    From the first link on post modernism: “A tulip bulb is a rhizhome.”

    AAArgh!!! I’ll get around to reading the rest. But, please, please, Mr. Simon fix that grievous mistake. I know it is a little thing, but a tulip bulb does have a center, a form, and direction and is completely different from a rhizome. Change it to an iris rhizome, or ginger rhizome (well, there is a type of bulb iris, completely different flower). Please. More people know what ginger root looks like. At least more than those who know what a tulip bulb looks like (kind of like a tapered onion, it even has layers).

  30. ConspicuousCarl says:

    > pmoran on 08 Nov 2010 at 3:21 pm
    > Isn’t “prior plausibility” merely a technical way of
    > describing the application of “all relevant evidence”
    > to any medical question?

    I was worried about sounding like a blunt reductionist by asking if today’s science isn’t just yesterday’s evidence, but I guess I am not the only one thinking that.

    However, this is an argument of wording. I share the concern that blind evidence-seeking needs to be tempered with plausibility. If someone has a ridiculous claim, it is OK to dismiss it as nuts unless they are willing to do the work of proving it on their own. And yet it seems like any idiot can propose nonsense, and federal money goes into researching it.

    Richard Dawkins asks “Why are unicorns hollow?” to show, as he says, that not every question deserves an answer (or research grants). The unicorn question makes no less sense than asking “why does magic water cure cancer?”, and yet the second question has sucked up real money.

    But, while it is easy to look at the farthest reaches of stupidity and reject homeopathy on basic principles, it is not so obvious how we make such decisions when a hypothesis is somewhat closer to reality. So I am not sure that I like the phrase “science-based medicine”, but I agree with the quest to improve our description of how we make these decisions in the face of limited time and money.

  31. mark on the role of basic sciences in EBM:
    “By best available external clinical evidence we mean clinically relevant research, often from the basic sciences of medicine, but especially from patient centred clinical research into the accuracy and precision of diagnostic tests (including the clinical examination), the power of prognostic markers, and the efficacy and safety of therapeutic, rehabilitative, and preventive regimens….”

    That’s the only explicit reference to basic medical science in your citation. The “basic sciences of medicine” are clearly secondary to RCTs. That is, if RCTs exist, use them; otherwise you’ll have to make do with basic medical science. Seems pretty straightforward to me.

    Based on math, physics, chemistry and biology, we know that homeopathy can’t work. We don’t need any clinical evidence to know that it doesn’t work. However, the passage you quote says that only CLINICAL expertise combined with CLINICAL medical research counts. Patient-centred research is what matters.

    This is fine if patient-centred RCTs are never conducted in the absence of a medical-science-based reason to do so. In practice, politics drives a lot of RCTs. In this situation an RCT is as ungenerative as a loveless marriage.

    In the case of clinical trials of homeopathy: if the hypothesis is not supported, that doesn’t tell us anything new. We knew it was unsupportable. If the hypothesis is supported with a certain probability, that doesn’t tell us anything either because a certain percentage of trials will always be positive and some bias is extremely difficult to avoid. However, your citation tells us that the patient-centred clinical trial is more important than any other source of information.

  32. qetzal says:

    Almost all of us agree that EBM should take all the relevant science into account. However, it seems obvious that in practice, it often doesn’t. pmoran’s examples for homeopathy make that quite clear, IMO. That’s the perfect case where very weak positive data from the clinic is contradicted by very, very strong negative data from basic science. Yet none of the Cochrane summaries acknowledge that. In light of that, I don’t see how supporters of EBM can reasonably argue that basic science is getting the weighting that it should.

    However, I do agree with Stephen Simon that making appropriate improvements under the EBM label may be much easier than gaining support for the SBM label, even if the practical guidelines are identical.

  33. David Gorski says:

    OTOH, I am not sure what we are arguing about if Simon also sees the need for the replacement of a “rigid hierarchy with something that places each research study in context.”

    Is that not what we want? Isn’t “prior plausibility” merely a technical way of describing the application of “all relevant evidence” to any medical question?

    Thank you, Peter. That’s exactly what I was thinking after I read Simon’s response. What are we arguing about? Why I didn’t articulate it as clearly as you, I don’t know. In any case, you’re right. The bottom line appears to be that Stephen Simon clearly agrees that there needs to be a rejiggering or replacement of the rigid hierarchy that currently rules EBM to provide, as he puts it, “context” to clinical trials data. That’s exactly what we have said time and time again; only we argue that basic scientific plausibility needs to be taken into account. So I have a hard time figuring out what, exactly, his complaint about the concept of SBM is, other than that he thinks EBM doesn’t need to be renamed: he believes EBM already is SBM, because he can’t accept that EBM proponents fail to consider basic science adequately when constructing the systematic literature reviews and meta-analyses viewed as the highest form of evidence in the EBM hierarchies. Unfortunately, systematic reviews on homeopathy, particularly the Cochrane reviews, blow that assumption out of the water.

    I also appreciate your producing examples from the Cochrane database itself. I was going to do that next, but you saved me the trouble.

  34. However, I do agree with Stephen Simon that making appropriate improvements under the EBM label may be much easier than gaining support for the SBM label, even if the practical guidelines are identical.

    Oh, I do, too. Please don’t think that our project is about changing the name; we chose “science-based medicine” because “EBM” had already been taken and, in our view, was inaccurate in that it didn’t consider all the evidence. I favored “knowledge-based medicine,” but it was Steve’s bat, so…

    If and when EBM does consider all the evidence in a sensible way, “EBM” will be the best, accurate term for what we are now calling “SBM.” See the, er, penultimate sentence of the post here.

  35. David Gorski says:

    I hate to say “Me, too,” but “Me, too.” I’m not dogmatic about the name. The problem is that the name “evidence-based medicine” has already been taken and already become a “brand,” so to speak. I agree with Steve that it was therefore necessary to come up with an alternate moniker to distinguish what we propose from EBM in its current form, and SBM was the best of the alternatives we discussed. If we could somehow co-opt EBM to become much more like SBM while keeping the name EBM, I, for one, would be perfectly happy.

    Go back and read the last paragraph of my post if you don’t believe me. I stole some of its wording directly from Kimball, but added my own inimitable take on the question. :-)

  36. mark says:

    @ Alison Cummins

    He says basic science is a component of relevant research in EBM; what more do you want? He wasn’t writing about applying EBM to alternative medicine.

    I agree with him that high quality, well conducted, adequately powered RCTs with clinically relevant endpoints, and systematic reviews of such trials should guide clinical practice.

    I agree with you that homeopathy trials are pointless but I don’t see that applying EBM and considering relevant research precludes that conclusion. What I would struggle with is if high quality, well conducted, adequately powered RCTs were independently carried out and showed consistent evidence of a clinically relevant effect.

  37. daedalus2u says:

    The prohibition of research on treatments of negligible prior plausibility only applies to human trials and is due to ethics of human trials. Performing experiments on humans with treatments of negligible prior plausibility doesn’t just make you guilty of not doing SBM, it makes you guilty of Crimes Against Humanity for violating the Declaration of Helsinki.

    Scott, there are two types of perpetual motion machines. Those of the first type violate the first law of thermodynamics (conservation of energy) and generate energy out of nothing. That is analogous to what homeopathy does. Perpetual motion machines of the second type only violate the second law of thermodynamics, for example by spontaneously generating a temperature difference with no work input. Work can then be generated from that temperature difference. The net effect is the conversion of heat into work, but energy is conserved, so the heat source gets colder. Homeopaths would probably claim that homeopathy is a perpetual motion machine of the second type because it converts the shaking, and ultimately thermal energy, into homeopathic treatment goodness.

    Stephen Simon, I disagree with you that patient preference is anywhere close to a leg of a stool in any kind of SBM. Patient preference is absolute in how patients allow their SBM practitioner to treat them and for what, but patient preference should never allow an SBM practitioner to administer a treatment that is not safe, effective, and indicated for the patient’s condition as the SBM practitioner understands it. There are degrees of this, depending on whatever condition the patient has. Violating this is (in my mind) to commit a Crime Against Humanity. It is deliberately exposing someone to something that is potentially harmful for no therapeutic benefit.

    To the question of what does SBM do if RCTs and scientific plausibility conflict? So far, there is no example where that has happened. It probably can’t happen. If it did happen, SBM wouldn’t throw one of them under the bus and go with the one that suits a personal preference (the way EBM does). If RCTs and scientific plausibility didn’t produce congruent results, SBM would keep looking until it found the flaws in either the basic science or the RCTs. SBM adds together all the data that bear on the issue, which includes RCTs, chemistry, physics, thermodynamics, quantum mechanics, anecdotes (aka clinical experience), etc. SBM doesn’t apply a filter to arbitrarily take out some types of data the way that EBM does. EBM filters out everything but RCTs. That is how you get RCTs on homeopathy.

    Mark, but they won’t. Large, well-run RCTs and good basic research will never be in conflict. If anyone thinks they are in conflict, then they need to do more research to find out which one is wrong, so we can correct either the basic science understanding or the RCTs that were not well done if they gave the wrong answer. If they really do conflict, then we live in a world where magic works. If we really are living in a world where magic works, then we really need to know that. We aren’t going to find that out by artificially prioritizing one type of evidence over another.

    I think that the original purpose of EBM was to try to cope with the deluge of data that modern research can provide by only looking at what it “thinks” is the most reliable data, the RCT. SBM is willing to, and wants to, look at every piece of data that has the slightest relevance, and apply the appropriate Bayesian weighting to that piece of data.

    Deliberately and thoughtfully applying prior scientific plausibility in SBM is essential in looking at “out of the box” treatment ideas like my ammonia oxidizing bacteria. Most researchers and clinicians confuse “prior plausibility” with “I think it might be plausible” and because they don’t know enough about NO and ammonia oxidizing bacteria reject it as quackery. SBM is constrained to not do that.
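The “weight every piece of data” idea described above can be sketched in the same odds form: each independent line of evidence contributes its own likelihood ratio, and none is filtered out. The Python sketch below is purely illustrative; the numbers are invented, not real evidential weights.

```python
from functools import reduce

def combine_evidence(prior_prob, likelihood_ratios):
    """Sequentially update prior odds with one likelihood ratio per line of evidence."""
    odds = prior_prob / (1.0 - prior_prob)
    odds = reduce(lambda o, lr: o * lr, likelihood_ratios, odds)
    return odds / (1.0 + odds)

# Hypothetical homeopathy-like case: basic science (chemistry, physics)
# weighs overwhelmingly against the mechanism, while one equivocal RCT
# weighs mildly for it. Nothing is discarded; everything enters the product.
posterior = combine_evidence(0.5, [1e-9,  # basic science: contradicts known physics
                                   2.0])  # one marginally positive RCT
print(posterior)  # negligible: the equivocal RCT cannot rescue the hypothesis
```

Under this bookkeeping, “more research is needed” is never the automatic answer, because the posterior already reflects both the RCT and the basic science.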

  38. mark says:

    @ daedalus2u

    “Large, well-run RCTs and good basic research will never be in conflict.”

    …er what about HRT and vascular disease? Results from RCTs and basic research differ all the time.

  39. qetzal says:

    mark,

    I agree with you that homeopathy trials are pointless but I don’t see that applying EBM and considering relevant research precludes that conclusion.

    I agree. But then why didn’t any of the Cochrane reviews quoted by pmoran reach that conclusion? Instead, they explicitly state that more research is needed!

    That’s the point. EBM should be able to consider scientific plausibility, but in practice it appears not to.

  40. mark says:

    @ qetzal

    I agree those conclusions are absurd, and saying that more weight needs to be given to basic science in the systematic reviews may have helped in this situation, and others when the treatment is implausible.

    But what about the more usual situation when basic science has established a sound basis for an effect. What should happen if the subsequent RCTs are negative? Should the basic science be taken into account or even given equal weighting to the RCTs in forming a conclusion?

    I’m specifically thinking about vitamin D and cancer. There is strong evidence from laboratory studies that vitamin D metabolites have inhibitory effects on various cancer cell lines and promising data from some animal models. But to date, there is no robust evidence from RCTs of an effect of vitamin D supplements on cancer in clinical trials. The EBM conclusion is that vitamin D supplements have no effect on cancer. What would the SBM conclusion be? The internet is full of people already promoting vitamin D for its supposed cancer prevention properties.

  41. JMB says:

    Given the attention this site has received, I think the editors of this site are in control of what is defined as SBM. In my dinosaur viewpoint (attending medical school in the pre-Sackett era), I had the impression that major charlatan debacles in US medicine gained public notice between 1900 and 1910. Consequently, medical education was reformed with an approach that would emphasize the scientific methods that had proved so useful in advancing knowledge, and the dogma that scientific method would result in the best form of medicine (as opposed to chiropractic, osteopathic, homeopathic, naturopathic, etc.). So the MD degree signified an education at an institution devoted to the use of scientific method in determining the best medicine (not derived from first principles or ancient beliefs). To me, that was the original meaning of science based medicine. I think it could be traced to Koch’s postulates, or Pasteur’s experiments. British empiricism (from Francis Bacon) became the dominant scientific approach.

    When I was in training in the Pre-Sackett era, we were taught the hierarchy of evidence in journal clubs that mostly paralleled Sackett’s. Sackett’s EBM really didn’t sound new to me, it just sounded like a formalization of the approach that already existed. What did seem to be lacking in the formalization was the skepticism brought to bear on any assessment, even challenging the traditional hierarchy (although back in the day, only the attending physicians could challenge the results of RCTs). In the Cochrane collaboration (which does not solely represent EBM, but is a major organization advancing it), they seemed to be dropping the skepticism that had to be applied to even the best of studies, and seemed to be saying… you follow this formula, and if you do, we will accept the study into our archive of knowledge that will be the most pristine archive of medical science. Instead, their archive appears to be polluted with statements as noted by pmoran.

    In my own view of SBM (not the editor’s view), the basis for our knowledge needs to evolve with the rest of science. There have been developments in computer/mathematical modeling, probability theory and statistics, information theory/technology, and genetics that we need to study to incorporate them into the scientific basis of medicine. There is not an elegant formalism of SBM because it should evolve with the rest of science. Personally, I think the formalism of EBM limits the evolution of methods. We need to look at the issues that should be addressed in our scientific basis, and have an up to date selection of methods that we can apply to problems in our knowledgebase.

    In regards to the statement that,

    I see the argument about Bayesian versus p-values as orthogonal to the arguments about SBM versus EBM. Am I missing something?

    The argument about p-values and Bayes factors often diverts into theoretical arguments that are orthogonal, but the wider application of the Bayes approach to the research process, not just the results, helps us avoid getting bogged down with unnecessary testing of woo, and maintain ethical standards for human subject experimentation. I would suggest that the Bayesian approach can be applied to questions of how to organize medical research and allocate resources as well as control risk. The Bayesian approach isn’t limited to the assessment of experimental data, but can be used to manage the conduct of research. The argument isn’t simply whether an experiment with an implausible hypothesis can yield a valid result, but rather how much resource we should devote to testing an implausible hypothesis, and whether there is sufficient equipoise to justify an experiment with human subjects. In a way, the Bayes approach provides a formalism for what a reasonable EBM advocate does.

    @rork

    If SBM is so great, it will be able to offer specific non-fuzzy alterations into how EBM operates

    The real conceptual problem to me of the formalism of EBM is the idea that to make a decision for a patient, we are limited to the process of finding the highest level of experimental evidence in which the experimental population was selected for the dominant characteristic of the patient, and deciding based on the group averages what is best for the patient. Based on my past research in computer-aided diagnosis, I would really like to see more sophisticated models of disease integrated with measured estimates of Bayes likelihood ratios so that we can individualize the risk versus benefit estimate for the patient, rather than rely on the group averages from large scale RCTs. RCTs are needed for proof of concept (or proof that the Bayes likelihood ratio reflects a causal relationship), but we need to plug back in all of those factors deliberately randomized in an RCT to give the most accurate risk versus benefit information. We also need to recognize that the experimental design has implications for a model of the process we are studying. What model is implied by the experimental design and results? Should meta-analysis consider the difference in implied models before pooling the data? That is my own brand of SBM, which withered on the vine before Sackett’s editorial in BMJ. My specific alteration of the formalism of EBM: don’t stop at measures of central tendency. Not everybody is average (in fact, nobody is).

  42. mark,

    My understanding of SBM is that for greatest credibility, each stage of research should build on the one before.

    If everything we know about basic science says that Vitamin D supplementation should not prevent cancer, then we would treat any clinical trial that did show a preventive effect with scepticism. The outcome is meaningless without support.

    If much of what we know about basic science suggests that Vitamin D supplementation could prevent cancer but this hypothesis was not supported by a well-designed, high-power clinical trial – then we know the hypothesis was wrong. There was support for it at one level that disappeared at the clinical level which is where it counts.

    In both these scenarios where the basic science and the clinical trial suggest different things, the conclusion is the same: we cannot conclude that Vitamin D supplementation prevents cancer.

    It’s not that one trumps the other; it’s that without all the pieces together, there is no valid conclusion.

  43. daedalus2u says:

    I don’t know that much about the HRT controversy. In looking, I found this.

    http://www.ncbi.nlm.nih.gov/pubmed/20060403

    Which explains it in terms that I understand, nitric oxide. ;)

    Physiology is really, really complicated. Trying to manipulate physiology with external non-physiologic mechanisms (such as HRT) is always going to have side effects. There isn’t a magic formula or methodology that will always predict what those side effects are in individual patients with idiosyncratic physiology.

    If basic science and RCTs are in conflict, there is either a problem with the basic science, the RCTs, or both. I think in the HRT example there were reasons people wanted to believe HRT was always beneficial. Big Pharma made a lot of money on it and women stayed young-looking and sexy.

    If RCTs and basic science are in conflict, SBM says “we don’t know” and continues to do more RCTs and more basic science. If RCTs and basic science are in conflict (as in homeopathy), EBM throws out the basic science and looks only at RCTs. If the RCTs are equivocal (as in homeopathy), then EBM says “we need more research”.

    Regarding what SBM says about vitamin D and cancer, not having looked into it carefully, I think SBM would conclude that vitamin D is not a “magic bullet” for cancer (which is really a great many different diseases). Vitamin D likely has both pro-growth and anti-growth effects. Which dominates will be an idiosyncratic physiological response of the specific tumor type to vitamin D. Some tumors might do both at different doses, different times, or under different external conditions. I don’t think the vitamin D basic research on cancer is robust enough, or shows a robust enough effect, to say that it is in conflict with RCTs on vitamin D.

    When I use the term “basic research”, what I mean is everything except RCTs, not the specific research cited to provide a basis for the RCT.

  44. JMB says:

    @mark

    If you use my idea of modeling the disease process and physiologic processes, I think SBM would lead us to favor the view that some but not all cancer types may respond to vitamin D prevention, but that it remains to be determined what levels of vitamin D supplementation are needed, whether calcium supplements must be combined with vitamin D supplementation, and whether selecting patients based on measured vitamin D levels will yield more consistent results in RCTs. In this case, a model of the physiologic effects of vitamin D, measured vitamin D levels in the general population, and the variability in the disease process we call cancer can account for some of the variation noted to date in RCTs. Because of the low toxicity and low cost of the intervention, further study would be warranted in spite of initially variable results and overall negative results in meta-analysis. So the posterior probability that we can just tell the general public to take vitamin D supplements and observe a decrease in the incidence of cancer is low. But the a priori probability is high enough that, for select cancers such as colorectal cancer, prescribing certain higher levels of supplementation to patients with measured vitamin D deficits, combined with calcium supplements, could produce an observable reduction in incidence. So further study is warranted for specific experimental designs.

  45. mark says:

    Thanks for your comments. I understand better where you are coming from. I’ve probably sounded like an EBM zealot. I’m not. Much of what is churned out under the banner of EBM seems not to meet the goals of EBM (which I do like, at least as expressed by Sackett in his BMJ editorial). To me, many so-called EBM reviews appear to be obsessed with methodology and follow a cookbook approach without any deep understanding of the basic science, the clinical issues, or the statistical issues. They often produce wacky conclusions. As a consequence, saying “EBM” is often enough to make people roll their eyes.

    I usually just stick to reading this site, not commenting. Normal service will now resume.

    P.S.

    @daedalus2u: I was recently at a conference where a paper was presented showing that GTN produced very large and surprising (to me) increases in bone density. If it could prevent fractures as well, it would be wonderful. When they said that GTN was an NO donor, I figured you wouldn’t be surprised in the slightest ….

  46. daedalus2u says:

    GTN is not an NO donor. Many NO researchers do not appreciate that. GTN actually decreases NO levels by inducing ischemic preconditioning. In long term use it causes oxidative stress and endothelial dysfunction.

    I am aware of the research showing increased bone density with organic nitrates. Bone density is regulated by NO release. During bone strain there is movement of fluid in the porosity of bone, the shear from that fluid movement activates nitric oxide synthase and the NO it generates regulates bone density, depositing more bone mineral where there is more NO due to more bone strain (i.e. where the bone is most deformable).

    I don’t think that is a good approach to increasing bone density. GTN has other effects; it causes migraines. Migraines are pretty well established as episodes of ischemic preconditioning in the brain. Not everyone who experiences migraines experiences pain. If GTN does induce ischemic preconditioning, it could easily accelerate neurodegenerative diseases like Alzheimer’s. Certainly a state of oxidative stress and endothelial dysfunction is likely to accelerate neurodegenerative diseases.

  47. I don’t want to fill your comment page with a bunch of stuff, but I did write up a more refined commentary that acknowledged some of the poor writing in my earlier post.
    * http://www.pmean.com/10/ScienceBasedMedicinePt2.html
    It’s hard to write something that is even handed without coming across as wishy-washy. It’s also hard to make a point firmly without going overboard.

    I should note, even though there is no requirement for these types of disclosures on the web, that I have some financial conflicts of interest that some of your readers might consider relevant. I think a lot of conflict-of-interest requirements are overblown. Witness how much trouble Keith Olbermann got into for giving (not receiving) money. I also got in trouble at one talk because I put in a plug for my book. Good grief!

    Still, it is better to disclose too much than too little, so if you read through the webpage, you’ll see that I have gotten some of my consulting income from Cleveland Chiropractic College and I’m getting some support from an NCCAM grant. I should have mentioned this earlier, but I honestly didn’t think about it until I was writing the second webpage about this.

    I wish I had more financial conflicts like this to write up. It’s better to be rich and conflicted than poor and pure. So if you know someone who wants to pitch some woo, and needs statistical help to do this, send them my way. Seriously, I think that a lot of CAM research would be better if there was more input from professional statisticians. Anyone who is serious about doing good research deserves help.

    Steve Simon, http://www.pmean.com

    P.S. So the tulip bulb is not a rhizome? It’s fixed now. Please let me know about any other boneheaded errors in my writing.

  48. Chris says:

    Thank you, Mr. Simon. Some of us gardeners are just a bit crazy, which is something you would notice if you go to a Flower and Garden Show, or just a specialty gardening meeting (I belong to an edible gardening group).

    I do appreciate you making statistics clearer for us (I thought I knew some, but after taking a probability class from the statistics department, I realized that the one I took in the College of Engineering could have been called “Prob and Stats for Dummies”!).

  49. windriven says:

    Mr. Simon, I would argue that it is imperative to disclose COIs whether on television, in print or on the internet. Readers should have the information necessary to weigh the writer’s impartiality on their own.

    You cite the instance of Keith Olbermann. Mr. Olbermann has a contract that demands the appearance of impartiality by banning partisan political contributions. I personally think that Mr. Olbermann should be able to contribute to anyone he chooses. But I also believe that his contributions should be publicized as they are indicative of biases that may color his reportage.

    It is a matter of recognizing a bias rather than obscuring it.

  50. Always Curious says:

    Just as RCTs have their weaknesses, so too does basic research. Basic research tends to center around very low-level individual relationships that can be tediously boring for anyone outside the field. In order to control experiments properly, sacrifices are made that later turn out to be problematic.

    Cell culture is an excellent example. Cultured cells are removed from their natural surroundings & grown in artificial conditions. However, replicating “normal” or “close-enough” physiologic conditions for studies to be meaningful is challenging (a major understatement). So to me, it is unsurprising that basic science can demonstrate relationships in cell culture (Vitamin D prevents cancer) that don’t work out later.

    So this is where the basic researcher needs RCTs: to verify that the details and mechanisms they have faithfully gathered under more ideal conditions actually contribute to the health of the entire person (not just to a population of cells living in an incubator someplace).

    Likewise, sometimes observations in RCTs that start out as unexplained results are later validated by basic researchers suddenly keen to figure out the whys & hows of those observations.

    I tend to imagine this process as a series of lenses for a camera, where basic research is zoomed in to near-maximum magnification and large RCTs are wide-angle panoramas. Both views will be missing certain elements. To decide whether those elements are relevant, one needs to scan through the intermediate lenses carefully before reaching a decision.

  51. Dr Benway says:

    I should read all these comments before I say anything, but I don’t want to lose my thought.

    You see EBM as being wrong often enough that you see value in creating a new label, SBM. I see SBM as being that portion of EBM that is being done thoughtfully and carefully, and don’t see the need for a new label.

    I use EBM and SBM interchangeably, except when I’m confronted with naive empiricism. Then I point out that frequentist approaches are just a special case of a Bayesian analysis, analogous perhaps to Newtonian mechanics as a special case of quantum mechanics.

    If we agree on Bayes, we are on the same team, whatever we call it. We’re both going to insist our government stop funding tooth-fairy science, e.g., controlled trials of homeopathy or energy healing.

    Arguing ruffles feathers, but nobody cares about that, eh? Ego is cancer. Ideas are bulletproof.
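    The frequentist-as-special-case point can be illustrated with a toy example (my own sketch, not from the thread): for a binomial proportion, a Bayesian analysis under a flat Beta(1,1) prior has a posterior mode that coincides with the frequentist maximum-likelihood estimate k/n.

```python
# With a flat Beta(1,1) prior, the posterior for k successes in n trials
# is Beta(1+k, 1+n-k), whose mode equals the frequentist MLE k/n.

def posterior_mode(k, n, a=1.0, b=1.0):
    """Mode of the Beta(a+k, b+n-k) posterior (valid for a+k > 1, b+n-k > 1
    or the flat-prior case shown here)."""
    return (a + k - 1.0) / (a + b + n - 2.0)

k, n = 7, 20
mle = k / n                   # frequentist point estimate
bayes = posterior_mode(k, n)  # flat-prior Bayesian estimate
print(mle, bayes)             # → 0.35 0.35
```

    An informative prior shifts the mode away from k/n, which is exactly the extra structure the Bayesian framework adds.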

  52. Dr Benway says:

    mark:

    I agree with you that homeopathy trials are pointless, but I don’t see that applying EBM and considering relevant research precludes that conclusion. What I would struggle with is if high-quality, well-conducted, adequately powered RCTs were independently carried out and showed consistent evidence of a clinically relevant effect.

    But you shouldn’t ever have to struggle with any good RCTs showing homeopathy has some effect, because such trials ought never to happen.

    Consider the mountain of sequential RCTs we would need (each one showing a positive effect for homeopathy) to outweigh all the evidence in the published, peer-reviewed physics and chemistry literature to date. That literature is relevant because none of it makes any sense if homeopathy is true.

    Remember, the basic scientists are macho tough guys who laugh at anything with a p value above 1/10,000. Doctors use a standard of p<1/20, which makes them crazy moonbats by comparison. So we will need 3-4 positive medical studies just to equal the evidential power of *one* published study in physics or chemistry.

    Given that there are 19.7 bajillion papers published in the peer reviewed physics and chemistry literature, we will need 19.7 * 4 bajillion studies of homeopathy all showing a positive result before we would be justified in saying, "homeopathy works."
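    As a back-of-envelope check of the “3-4 studies” arithmetic above (naively multiplying independent p-values, purely for illustration; a proper combination would use something like Fisher’s method):

```python
# How many independent results at p < 1/20 does it take before the
# product of the p-values drops below the basic scientists' 1/10,000
# threshold? (Naive multiplication, for illustration only.)

threshold = 1.0 / 10_000
p_single = 1.0 / 20

n = 1
while p_single ** n >= threshold:
    n += 1

print(n)  # → 4  (0.05**3 = 1.25e-4, but 0.05**4 = 6.25e-6)
```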

    Before you grab your calculator, you might wonder why the NIH isn't demanding that the NCCAM set itself on fire yesterday.

  53. Dr Benway says:

    In my case, the answer to the NCCAM issue would be around a hundred million. For a hundred million I could be Dr. Josephine Briggs, I will confess.

  54. daedalus2u says:

    I couldn’t be, even for a billion.

    I value living in the reality based community too much, and I can’t fake it.

    This is actually relevant because I really do have the answer to CAM, a treatment that will invoke the placebo effect pharmacologically via the ammonia oxidizing bacteria I am working with. My US patent on it just issued.

    My bacteria will beat any and every placebo. They will beat many drugs too. Especially for conditions that are characterized by low NO/NOx.

  55. Dr Benway says:

    Sounds good, daedalus2u.

    And now you know of *two* ways to get me on board as a promoter of your new invention.

    :)

  56. Dr Benway says:

    It’s also hard to make a point firmly without going overboard.

    Why, overboard is my favorite way to make a point. Bring on the emotional drama!

    But I have a problem with falling asleep too much plus ADD. So I need to crank my brain up a bit, else I can’t follow or remember what people are saying.

Comments are closed.