Articles

Lies, damned lies, and…science-based medicine?

I realize that in the question-and-answer session after my talk at the Lorne Trottier Public Science Symposium a week ago, I suggested that I would soon be doing a post about the Rife Machine. (The suggestion came in response to a man named Leon Maliniak, who had monopolized the first part of what was already a too-brief Q&A session by expounding on the supposed genius of Royal Rife.) And so I probably will; such a post is long overdue at this blog, and I’m surprised that no one’s done one after nearly three years. However, as I arrived back home in the Detroit area Tuesday evening, I was greeted by an article that, I believe, requires a timely response. (No, it wasn’t this article, although responding to it might be amusing even though it’s a rant against me based on a post that is two and a half years old.) Rather, this time around, the article is in the most recent issue of The Atlantic, and on the surface it appears to be yet another indictment of science-based medicine, this time in the form of a hagiography of Greek researcher John Ioannidis. The article, trumpeted by Tara Parker-Pope, comes under the heading of “Brave Thinkers” and is entitled Lies, Damned Lies, and Medical Science. It is being promoted in news stories like this one, where the story is spun as indicating that medical science is so flawed that even the cell-phone cancer data can’t be trusted.


Let me mention two things before I delve into the meat of the article. First, these days I’m not nearly as enamored of The Atlantic as I used to be. I was a long-time subscriber (at least 20 years) until last fall, when The Atlantic published an article so egregiously bad on the H1N1 vaccine that our very own Mark Crislip decided to annotate it in his own inimitable fashion. That article was so awful that I decided not to renew my subscription; it is to my shame that I didn’t find the time to write a letter to The Atlantic explaining why. Fortunately, this article isn’t as bad (it’s a mixed bag, actually, making some good points and then undermining some of them by overreaching), although it does lay on the praise for Ioannidis and the attacks on SBM a bit thick. Be that as it may, clearly The Atlantic has developed a penchant for “brave maverick doctors” and using them to cast doubt on science-based medicine. Second, I actually happen to love John Ioannidis’ work, so much so that I’ve written about it at least twice over the last three years, including The life cycle of translational research and Does popularity lead to unreliability in scientific research?, where I introduced the topic using Ioannidis’ work. Indeed, I find nothing at all threatening to me as an advocate of science-based medicine in Ioannidis’ two most famous papers, Contradicted and Initially Stronger Effects in Highly Cited Clinical Research and Why Most Published Research Findings Are False. The conclusions of these papers to me are akin to concluding that water is wet and everybody dies. It is, however, quite good that Ioannidis is there to spell out these difficulties with SBM, because he tries to keep us honest.

Unfortunately, both papers are frequently wielded like a cudgel by advocates of alternative medicine against science-based medicine (SBM) as “evidence” that it is corrupt and defective to the very core and that therefore their woo is at least on equal footing with SBM. Ioannidis has formalized the study of problems with the application of science to medicine that most physicians intuitively sense but have never really thought about in a rigorous, systematic fashion. Contrast this to so-called “complementary and alternative medicine” (i.e., CAM), where you will never see such questioning of the methodology and evidence base behind it (mainly because its methodology is primarily anecdotal and its evidence base nonexistent or fatally flawed) and where most practitioners never change their practice as a result of any research, and you’ll see my point.

Right from the beginning, the perspective of the author, David H. Freedman, is clear. I first note the title of the article (Lies, Damned Lies, and Medical Science) is intentionally and unnecessarily inflammatory. On the other hand, I suppose that entitling it something like “Why science-based medicine is really complicated and most medical studies ultimately turn out to be wrong” wouldn’t have been as eye-catching. Even Ioannidis showed only slightly more restraint when he gave his PLoS review the almost as exaggerated title Why Most Published Research Findings Are False, which has made it laughably easy for cranks to misuse and abuse his article. My annoyance at the title and general tone of Freedman’s article, and at the sorts of news coverage it’s getting, notwithstanding, there are still important messages in Freedman’s article worth considering, if you get past the spin, which begins very early in his description of Ioannidis and his team:

Last spring, I sat in on one of the team’s weekly meetings on the medical school’s campus, which is plunked crazily across a series of sharp hills. The building in which we met, like most at the school, had the look of a barracks and was festooned with political graffiti. But the group convened in a spacious conference room that would have been at home at a Silicon Valley start-up. Sprawled around a large table were Tatsioni and eight other youngish Greek researchers and physicians who, in contrast to the pasty younger staff frequently seen in U.S. hospitals, looked like the casually glamorous cast of a television medical drama. The professor, a dapper and soft-spoken man named John Ioannidis, loosely presided.

I’m guessing the only reason Freedman didn’t liken this team to Dr. Greg House and his minions is that, unlike Dr. House, Ioannidis is dapper and soft-spoken, although, like Dr. House’s team, Ioannidis’ team is apparently full of good-looking young doctors. After describing how Ioannidis delved into the medical literature and was shocked by the number of seemingly important and significant published findings that were later reversed in subsequent studies, Freedman boils down what I consider to be the two most important messages that derive from Ioannidis’ work:

This array suggested a bigger, underlying dysfunction, and Ioannidis thought he knew what it was. “The studies were biased,” he says. “Sometimes they were overtly biased. Sometimes it was difficult to see the bias, but it was there.” Researchers headed into their studies wanting certain results—and, lo and behold, they were getting them. We think of the scientific process as being objective, rigorous, and even ruthless in separating out what is true from what we merely wish to be true, but in fact it’s easy to manipulate results, even unintentionally or unconsciously. “At every step in the process, there is room to distort results, a way to make a stronger claim or to select what is going to be concluded,” says Ioannidis. “There is an intellectual conflict of interest that pressures researchers to find whatever it is that is most likely to get them funded.”

Perhaps only a minority of researchers were succumbing to this bias, but their distorted findings were having an outsize effect on published research. To get funding and tenured positions, and often merely to stay afloat, researchers have to get their work published in well-regarded journals, where rejection rates can climb above 90 percent. Not surprisingly, the studies that tend to make the grade are those with eye-catching findings. But while coming up with eye-catching theories is relatively easy, getting reality to bear them out is another matter. The great majority collapse under the weight of contradictory data when studied rigorously. Imagine, though, that five different research teams test an interesting theory that’s making the rounds, and four of the groups correctly prove the idea false, while the one less cautious group incorrectly “proves” it true through some combination of error, fluke, and clever selection of data. Guess whose findings your doctor ends up reading about in the journal, and you end up hearing about on the evening news? Researchers can sometimes win attention by refuting a prominent finding, which can help to at least raise doubts about results, but in general it is far more rewarding to add a new insight or exciting-sounding twist to existing research than to retest its basic premises—after all, simply re-proving someone else’s results is unlikely to get you published, and attempting to undermine the work of respected colleagues can have ugly professional repercussions.
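
The five-team scenario in the passage just quoted is easy to put numbers on. Here is a minimal sketch of my own (the figures are purely illustrative, assuming each team runs an independent test of a false hypothesis at the conventional α = 0.05):

```python
# Probability that at least one of several teams testing a FALSE hypothesis
# nonetheless gets a statistically significant ("positive") result by chance.
# Illustrative only; assumes independent tests at the conventional alpha = 0.05.

alpha = 0.05   # significance threshold (per-study false-positive rate)
teams = 5      # independent groups testing the same false idea

p_at_least_one_positive = 1 - (1 - alpha) ** teams
print(f"P(at least one 'positive' study out of {teams}): {p_at_least_one_positive:.1%}")
# -> roughly 22.6%
```

If only the “positive” result is deemed interesting enough to publish, the literature (and the evening news) sees the one fluke and not the four sound negative studies, which is exactly the filtering mechanism Freedman and Ioannidis are describing.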

Of course, we’ve discussed the problems of publication bias multiple times before right here on SBM. Contrary to the pharma conspiracy-mongering of many CAM advocates, more commonly the reason for bias in the medical literature is what is described above: simply confirming previously published results is not nearly as interesting as publishing something new and provocative. Scientists know it; journal editors know it. In fact, this is far more likely a problem than the fear of undermining the work of respected colleagues, although I have little doubt that that fear is sometimes operative. The reason, again, is that novel and controversial findings are more interesting and therefore more attractive to publish. A young investigator doesn’t make a name for himself by simply agreeing with respected colleagues. He makes a name for himself by carving out a niche, and even more so by showing that commonly accepted science has been wrong. Indeed, I would argue that this is the very reason that comparative effectiveness research (CER) is given such short shrift in the medical literature, so much so that the government has decided to encourage it in the latest health insurance reform bill. CER is nothing more than comparing already existing and validated therapies head-to-head against each other to see which is more effective. To most scientists, nothing could be more boring, no matter how important CER actually is. Until recently, doing CER was a good way to bury a medical academic career in the backwaters. Hopefully, that will change, but to my mind the very problems Ioannidis points out are part of the reason why CER has had such rough sledding in achieving respectability.

More importantly, what Freedman appears (at least to me) to portray as a serious, nigh unfixable problem in the medical research that undergirds SBM is actually its greatest strength: it changes with the evidence. Yes, there is a bias towards publishing striking new findings and not publishing (or at least not publishing in highly prestigious journals) less striking or negative findings. This has been a well-known bias that’s been bemoaned for decades; indeed, I remember learning about it in medical school, and you don’t want to know how long ago I went to medical school.

Even so, Freedman inadvertently echoes a message that we at SBM have discussed many times, namely that high quality evidence is essential. In the article, Freedman points out that 80% of nonrandomized trials turn out to be wrong, as are “25 percent of supposedly gold-standard randomized trials, and as much as 10 percent of the platinum-standard large randomized trials.” Big surprise, right? Less rigorous designs produce false positives more often! Also remember that, even in an absolutely ideal world with a perfectly designed randomized clinical trial (RCT), by choosing p<0.05 as the cutoff for statistical significance we would expect at least 5% of RCTs testing ineffective treatments to come up falsely positive by random chance alone. Add type II errors (falsely negative trials of treatments that do work), and the expected number of “wrong” trials is higher still, again by random chance alone. When you consider these facts, having only 10% of large randomized trials turn out to be incorrect is actually not too bad at all. Even if 25% of all randomized trials turn out to be wrong, that isn’t all that bad either; that figure includes smaller trials. After all, the real world is messy; trials are never perfect, nor is their analysis. The real messages should be that lesser quality, unrandomized trials are highly unreliable and that even randomized trials should be replicated whenever possible. Unfortunately, resources are such that trials can’t always be replicated or expanded upon, which means that we as scientists need to do our damnedest to improve the quality of such trials. Also, don’t forget that the probability of a trial being wrong increases as the implausibility of the hypothesis being tested increases, as Steve Novella and Alex Tabarrok have pointed out in discussing Ioannidis’ results. Unfortunately, with the rise of CAM, more and more studies are being done on highly implausible hypotheses, which will make the problem of false-positive studies even worse. Is this contributing to the problem overall? I don’t know, but that would be a really interesting hypothesis for Ioannidis and his group to study, don’t you think?
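
To put some numbers on that last point, here is a minimal sketch of the positive-predictive-value calculation that underlies Ioannidis’ argument about prior plausibility (the priors, power, and significance threshold below are illustrative assumptions of mine, not figures from his paper or from Freedman’s article):

```python
# Positive predictive value (PPV) of a "statistically significant" finding:
#   PPV = (power * prior) / (power * prior + alpha * (1 - prior))
# i.e., of all findings that reach significance, what fraction reflect a real effect?
# All numbers below are illustrative assumptions.

alpha = 0.05   # significance threshold (type I error rate)
power = 0.80   # 1 - type II error rate

def ppv(prior, power=power, alpha=alpha):
    """Probability that a significant result is true, given the prior
    probability that the hypothesis under test is correct."""
    return (power * prior) / (power * prior + alpha * (1 - prior))

for prior in (0.5, 0.1, 0.01, 0.001):  # from plausible to homeopathy-grade implausible
    print(f"prior = {prior:<6}  PPV = {ppv(prior):.2f}")
# prior = 0.5     PPV = 0.94
# prior = 0.1     PPV = 0.64
# prior = 0.01    PPV = 0.14
# prior = 0.001   PPV = 0.02
```

Even with a perfectly executed trial and no bias at all, a “positive” study of a deeply implausible hypothesis is far more likely to be a false positive than a true positive, which is why a flood of trials of homeopathy and the like can only inflate the apparent “wrongness” of the literature.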

Another important lesson from Ioannidis’ work cited by Freedman is that hard outcomes are much more important than soft outcomes in medical studies. For example, death is the hardest outcome of all. If a treatment for a chronic condition is going to claim benefit, it behooves researchers to demonstrate that it has a measurable effect on mortality. I discussed this issue a bit in the context of the controversy over Avastin and breast cancer, where the RCTs used to justify approving Avastin for use against stage IV breast cancer found an effect on disease-free survival but not overall survival. However, this issue is not important just in cancer trials, but in any trial for an intervention that is being used to reduce mortality. “Softer” outcomes, be they disease-free survival, reductions in blood lipid levels, reductions in blood pressure, or whatever, are always easier to demonstrate than decreased mortality.

Unfortunately, one thing that comes through in Freedman’s article is something similar to other work I’ve seen from him. For instance, when Freedman wrote about Andrew Wakefield back in May, he got it so wrong that he was not even wrong when he described The Real Lesson of the Vaccines-Cause-Autism Debacle. To him the discovery of Andrew Wakefield’s malfeasance is as nothing compared to what he sees as the corruption and level of error present in the current medical literature. In other words, Freedman presented Wakefield not as a pseudoscience maven, an aberration, someone outside the system who somehow managed to get his pseudoscience published in a respectable medical journal and thereby caused enormous damage to vaccination programs in the U.K. and beyond. Oh, no. To Freedman, Wakefield is representative of the system. One wonders, given how much he distrusts the medical literature, how Freedman actually knew Wakefield was wrong. After all, all the studies that refute Wakefield presumably suffer from the same intractable problems that Freedman sees in all medical literature. In any case, perhaps this apparent view explains why, while Freedman gets some things right in his profile of Ioannidis, he gets one thing enormously wrong:

Ioannidis initially thought the community might come out fighting. Instead, it seemed relieved, as if it had been guiltily waiting for someone to blow the whistle, and eager to hear more. David Gorski, a surgeon and researcher at Detroit’s Barbara Ann Karmanos Cancer Institute, noted in his prominent medical blog that when he presented Ioannidis’s paper on highly cited research at a professional meeting, “not a single one of my surgical colleagues was the least bit surprised or disturbed by its findings.” Ioannidis offers a theory for the relatively calm reception. “I think that people didn’t feel I was only trying to provoke them, because I showed that it was a community problem, instead of pointing fingers at individual examples of bad research,” he says. In a sense, he gave scientists an opportunity to cluck about the wrongness without having to acknowledge that they themselves succumb to it—it was something everyone else did.

To say that Ioannidis’s work has been embraced would be an understatement. His PLoS Medicine paper is the most downloaded in the journal’s history, and it’s not even Ioannidis’s most-cited work—that would be a paper he published in Nature Genetics on the problems with gene-link studies. Other researchers are eager to work with him: he has published papers with 1,328 different co-authors at 538 institutions in 43 countries, he says. Last year he received, by his estimate, invitations to speak at 1,000 conferences and institutions around the world, and he was accepting an average of about five invitations a month until a case last year of excessive-travel-induced vertigo led him to cut back.

Yes, my ego can’t resist mentioning that I was quoted in Freedman’s article. My ego also can’t help but be irritated that Freedman gets it completely wrong in how he spins my anecdote. Instead of the interpretation I put on it, namely that physicians are aware of the problems in the medical literature described by Ioannidis and take such information into account when interpreting studies (i.e., that Ioannidis’ work is simply reinforcement of what they know or suspect anyway), Freedman instead interprets my colleagues’ reaction to Ioannidis as “an opportunity to cluck about the wrongness without having to acknowledge that they themselves succumb to it—it was something everyone else did.” I suppose it’s possible that there is a grain of truth in that — but only a small grain. In reality, at least from my observations, scientists and skeptics have not only refrained from attacking Ioannidis but have actually embraced him and his findings of deficiencies in how we do clinical trials, and they have done so for the right reasons. We want to be better, and we are not afraid of criticism. Try, for instance, to imagine an Ioannidis in the world of CAM. Pretty hard, isn’t it? Then picture how a CAM-Ioannidis would be received by CAM practitioners. I bet you can’t imagine that they would shower him with praise, publications in their best journals, and far more invitations to speak at prestigious medical conferences than one person could ever possibly accept.

Yet that’s how science-based practitioners have received John Ioannidis.

In the end, Ioannidis has a message that is more about how little the general public understands the nature of science than it is about the flaws in SBM:

We could solve much of the wrongness problem, Ioannidis says, if the world simply stopped expecting scientists to be right. That’s because being wrong in science is fine, and even necessary—as long as scientists recognize that they blew it, report their mistake openly instead of disguising it as a success, and then move on to the next thing, until they come up with the very occasional genuine breakthrough. But as long as careers remain contingent on producing a stream of research that’s dressed up to seem more right than it is, scientists will keep delivering exactly that.

“Science is a noble endeavor, but it’s also a low-yield endeavor,” he says. “I’m not sure that more than a very small percentage of medical research is ever likely to lead to major improvements in clinical outcomes and quality of life. We should be very comfortable with that fact.”

We should indeed. On the other hand, those of us in the trenches with individual patients don’t have the luxury of ignoring many studies that conflict (as Ioannidis suggests elsewhere in the article). Moreover, it is science that gives us our authority with patients. If patients lose trust in science, then there is little reason not to go to a homeopath. Consequently, we need to do the best we can with what exists. Nor does Ioannidis’ work mean that SBM is so hopelessly flawed that we might as well all throw up our hands and become reiki masters, which is what Freedman seems to be implying. SBM is our tool to bring the best existing care to our patients, and it is important that we know the limitations of this tool. Contrary to what CAM advocates claim, there currently is no better tool. If there were, and it could be demonstrated conclusively to be superior, I’d happily switch to using it.

To paraphrase Winston Churchill’s famous speech, many forms of medicine have been tried and will be tried in this world of sin and woe. No one, certainly not those of us at SBM, pretends that SBM is perfect or all-wise. Indeed, it has been said (mainly by me) that SBM is the worst form of medicine except all those other forms that have been tried from time to time. I add to this my own little challenge: Got a better system than SBM? Show me! Prove that it’s better! In the meantime, we should be grateful to John Ioannidis for exposing defects and problems with our system while at the same time expressing irritation at people like Freedman for overhyping them.

Posted in: Clinical Trials, Science and Medicine


114 thoughts on “Lies, damned lies, and…science-based medicine?”

  1. psychability says:

    Your review of the article should be posted as an addendum in the magazine. One question:

    “Softer” outcomes, be they disease-free survival, reductions in blood lipid levels, reductions in blood pressure, or whatever, are always easier to demonstrate than decreased mortality.

    Not sure I understand how mortality is more difficult to demonstrate in studies than soft outcomes.

  2. David – We don’t have to imagine an Ioannidis in the CAM world. That person is Edzard Ernst. And we have seen how he has been received – as a traitor with an axe to grind.

  3. Angora Rabbit says:

    Psychability, I can understand your confusion. “Hard” and “soft” don’t refer in this context to degree of difficulty, but rather to the statistical variance in the measurement. Death is as binary as it gets, either one is or isn’t, so it’s a “hard” number. A fasting blood lipid measurement in an individual will vary from day to day, so it is a “softer” number.

    I still remember with fondness my X-ray crystallography prof who dismissed all other biochemistry as “soft.” :)

    Nice review, David.

  4. David Gorski says:

    David – We don’t have to imagine an Ioannidis in the CAM world. That person is Edzard Ernst. And we have seen how he has been received – as a traitor with an axe to grind.

    D’oh! I should have thought of that–so much so that I might just go back and add a passage about Edzard Ernst when I get a chance. :-(

  5. I follow online discussions with massage therapists and chiropractors, keeping tabs on that community out of professional/skeptical interest. Ernst is vilified like clockwork, like a tic. His name gets brought up just for the sake of sneering at him. And if you cite his work? Oh my. Just to give you an example of how far that can go …

    When I was accused of being unprofessional by the College of Massage Therapists of BC, one of the only specific examples of my alleged misconduct that they ever offered was that I had cited Edzard Ernst on my website.

    Facepalm.

  6. windriven says:

    In mass media, if it bleeds it leads. Sensationalized spins such as the Atlantic piece excite readers’ interest. Five thousand words on the neurologic effects of propofol on gestating rat pups, not so much.

  7. tuck says:

    Very good response. The key point is that even if science is proven wrong 90% of the time, that 10% where it’s right, and you know it’s right, is infinitely better than the untested alternative.

  8. qetzal says:

    psychability,

    The difficulty is in showing whether a given treatment reduces mortality. One simple reason that’s more difficult than the so-called “softer” outcomes is that it just takes longer. Patients have to be followed until a significant fraction actually die, instead of just following them until their tumor gets bigger. For diseases where death usually happens quickly or not at all, that’s not such a problem. But for things like chronic heart disease, diabetes, or many cancers, it can take years or decades to determine effects on mortality.

  9. Patrick N says:

    Thank you for a great article !

  10. Th1Th2 says:

    Just watched the video clip. Fund based medicine it is.

  11. Chris – you’re getting awfully good at this.

  12. Th1Th2 says:

    Tuck,

    “The key point is that even if science is proven wrong 90% of the time, …”

    The scientists you mean.

  13. weing says:

    Excellent review. I would point out that, while it is true that most medical claims are misleading, ALL CAM claims are misleading. The take-home message of the article should be for the public to have realistic expectations of SBM.

  14. Chris says:

    I looked at Freedman’s webpage, and he looks like a journalist who has written about management. I doubt he has ever done any science, but thinks he knows about it because he read about it.

    Someone should send him a copy of Goldacre’s Bad Science. Oh, my copy is right next to me, and it opened right up to this paragraph:

    There have been an estimated fifteen million medical academic articles published so far, and 5000 journals are published each month. Many of these articles will contain contradictory claims: picking out what’s relevant — and what’s not — is a gargantuan task.

    … then it goes on about cherry picking.

    The chapter he should really read is Is Mainstream Medicine Evil, where Goldacre explains how some drug development works, and the issues of publication bias.

  15. weing says:

    I have one concern about the statistics behind the claim that the majority of medical claims are wrong. It has to do with the methodology. Are they lumping CAM claims in with the medical claims? Just curious.

  16. pmoran says:

    “Most published research findings are wrong” has become a sensationalist catchcry, misused by some, and I am not sure that I can easily forgive Ioannidis for coining the phrase.

    His own writings and research show that it is a gross over-generalisation, only made possible at all by the fact that a lot of published research is of poor methodological quality and that it is sometimes performed and interpreted against a background of bias. It is not science that gives wrong answers, it is human frailty.

    I confess that I am not statistically minded enough to follow Ioannidis’ simulations, but his own data suggests that this bald statement does not apply within mainstream clinical research.

    For example in this study -

    http://jama.ama-assn.org/cgi/content/abstract/294/2/218

    - only 7% of the results of “highly cited” (i.e. presumably good quality) medical research papers were subsequently proved to be completely wrong. Non-randomised studies performed much less well but we would predict that and give them a lower status in that hierarchy of evidence that directs our practices.

  17. Toiletman says:

    I just briefly browsed the article some days ago because it was given to me as a link on a sceptical forum. Personally, I think it is a very good article, but only for a very limited audience. For a broad audience (I don’t know The Atlantic, as I am not from the Anglo world), however, it could give people negative opinions about medicine, and all the snake oil vendors will jeeringly point at that article while lying that everything about their method is proven and tested (or doesn’t need to be tested because it is so obvious in their view). I suspect that The Atlantic is not a magazine only for medical professionals and very educated laymen such as me (I’ve only done the “soft” sciences, aka the social sciences, and often felt that those sociology professors need some waterboarding as a reality check to finally realise that the human mind is no tabula rasa and not every behaviour or character trait is socially constructed. I often contemplated switching to biology or pharmacology. Still might do it after graduation), who have always had an interest in medical sciences.

  18. Alexie says:

    Contradictory research findings, gaps in knowledge and heading up blind alleys are part and parcel of the scientific experience. Unfortunately, as science becomes politicised, these gaps in knowledge are being exploited. You can see it at work with evolutionary science, where reasoned disagreements between colleagues become ammunition in the war against evolution.

    Unfortunately, the more honest that scientists are about the realities of publication bias, observational bias and the like, the more their honesty is used against them.

  19. …with the rise of CAM, more and more studies are being done on highly implausible hypotheses, which will make the problem of false-positive studies even worse. Is this contributing to the problem overall? I don’t know, but that would be a really interesting hypothesis for Ioannidis and his group to study, don’t you think?

    Ioannidis may not have looked at CAM trials per se, but there’s little question that he has something important to say about them:

    …Let us suppose that in a research field there are no true findings at all to be discovered. History of science teaches us that scientific endeavor has often in the past wasted effort in fields with absolutely no yield of true scientific information, at least based on our current understanding. In such a “null field,” one would ideally expect all observed effect sizes to vary by chance around the null in the absence of bias. The extent that observed findings deviate from what is expected by chance alone would be simply a pure measure of the prevailing bias.

    And in the present: homeopathy, distant healing, Therapeutic Touch, craniosacral therapy, applied kinesiology, etc.

    @ David and Peter Moran:

    Not every mathematically sophisticated biomedical researcher agrees that the record is quite as dismal as Ioannidis has written. Steven Goodman and Sander Greenland, two other Bayesian heavy hitters, argued that Ioannidis used circular reasoning to arrive at his conclusion. See: http://www.plosmedicine.org/article/info:doi/10.1371/journal.pmed.0040168

    Ioannidis replied here: http://www.plosmedicine.org/article/info:doi/10.1371/journal.pmed.0040215

  20. JMB says:

    Thanks to Drs Gorski, Atwood and Moran for the far better discussion of Ioannidis’ thesis than that presented in the lay press. Here’s a link to a Time magazine review of the Atlantic article,

    http://healthland.time.com/2010/10/20/a-researchers-claim-90-of-medical-research-is-wrong/

    It amazed me how this is being turned into an argument against science based medicine, when the central thesis of SBM (for doctors) is that we need to be rigorous in the skeptical approach to the interpretation of medical science literature and claims, and consider plausibility of hypotheses. The natural result of the SBM approach is that the majority of reports in medical literature will not result in a valid strategy for treatment of patients.

    From my experience in experiments using Bayes’ decision strategies, I would definitely agree with Steven Goodman and Sander Greenland that introducing a bias term in a Bayes classifier at the level of 10% would have a significant impact on the eventual decision of what percentage of experiments are valid or invalid. Ironically, the bias term is a major source of bias in Ioannidis’ results.

  21. DanaUllman says:

    I am personally pleased and even honored that the ATLANTIC chose to steal (or re-use) the same title that I used for one of my articles at the Huffingtonpost on April 30, 2010:

    http://www.huffingtonpost.com/dana-ullman/medical-research-lies-dam_b_555525.html

    Sadly, the ATLANTIC article did not go far enough in its critique of medical research AND medical practice. The common-day use of polypharmacy in medical practice is strong evidence for the paucity of “evidence based medicine” today…simply because there is just virtually no evidence to support the safety or efficacy of such practice.

    But the good news is that polypharmacy makes more money for Big Pharma (“how convenient”).

  22. Necandum says:

    A question:

    Would it help with the whole publication bias thing for the government to mandate that every single study performed within the country must be published?

    It could provide its own ‘journal’ with a publicly available database to accept all those papers that are rejected by the other journals, filtering only for some minimum level of quality rather than interest or proving the hypothesis.

    That way, all the negative studies will get a chance to see the light of day and there would presumably be less pressure to get positive results, as there is the danger that if one’s study contradicts too many others, it may attract suspicion.

  23. Harriet Hall says:

    Dana,

    We would love for polypharmacy to be evidence-based, but it’s impossible to test every possible combination of drugs. There aren’t enough subjects on Earth for that many experiments.
    I addressed that problem and other aspects of polypharmacy at
    http://www.sciencebasedmedicine.org/?p=173

    The good news is that despite the drawbacks of polypharmacy, we have good evidence that modern pharmaceuticals (singly and in combination) can save lives. Which Dana’s homeopathy can’t.

    Come to think of it, we don’t have any evidence for the safety and efficacy of combining homeopathic remedies with different diets or with non-homeopathic treatments. Dana’s criticism applies even more to his own field than to ours.

  24. JMB says:

    A bias term in a Bayes’ classifier of 10% doesn’t just shift 10% of research papers from valid to invalid. It moves the line drawn in the sand that defines valid or invalid, and could effectively reclassify a significant percentage of scientific articles.

  25. David Gorski says:

    Would it help with the whole publication bias thing for the government to mandate that every single study performed within the country must be published?

    In the U.S., that’s already been more or less done for clinical trials, which are registered at ClinicalTrials.gov.

  26. penglish says:

    This article is a response to a “straw man” attack on SBM.

    Such attacks are not uncommon. There’s another one made recently by “Marya” in the BMJ “doc2doc” forums: http://doc2doc.bmj.com/blogs/doctorsblog/_paternalism-of-science-based-medicine

  27. penglish says:

    And now somebody on the BMJ forums is saying we should all be panicking about Ioannidis’ analysis…

  28. Jurjen S. says:

    Quoth Dana Ullman:[blockquote]I am personally pleased and even honored that the ATLANTIC chose to steal (or re-use) the same title that I used for one of my articles at the Huffingtonpost on April 30, 2010 [...][/blockquote]You mean both you and Freedman borrowed the phrase from Mark Twain (who himself attributed it to Benjamin Disraeli). Please don’t pat yourself on the back for originality in supposedly coming up with a variation of a phrase that’s been around for well over a century, and has been used [i]very[/i] frequently during that time.

  29. Jurjen S. says:

    Damn, that’s what you get switching between UBB code and HTML; the tags above should have been written with angle brackets, not square ones. My apologies.

  30. And now somebody on the BMJ forums is saying we should all be panicking about Ioannidis’ analysis…

    Let’s see: Ioannidis’s article was published 5 years ago, and has been widely discussed by biomedical types since then (including on SBM shortly after we began). Only just now, it seems, has the lay press taken notice. Maybe that’s why we should be panicking: it illustrates just how capricious the lay press is, and like it or not we rely upon them to interpret what we do. But that’s nothing new, either.

  31. DanaUllman says:

    Harriett…my point is that 99% or so of medical practice today is NOT evidence based medicine despite your and others’ insistence that it is.

    My point also was it is now time to get off your high horse, especially because, like the commercial, your horse is going in the other direction than you are (oh well, ignorance seems to be bliss…and keep those daggers flying at CAM as a clever way to avoid looking in a mirror)…

    As for homeopathic research, you obviously have not read the body of work on homeopathy and respiratory allergies:
    Ullman, D, Frass, M. A Review of Homeopathic Research in the Treatment of Respiratory Allergies. Alternative Medicine Review. 2010:15,1:48-58. http://www.thorne.com/altmedrev/.fulltext/15/1/48.pdf

    And what % of surgical procedures are evidence based? Curious minds do want to know…

  32. David Gorski says:

    Harriett…my point is that 99% or so of medical practice today is NOT evidence based medicine despite your and others’ insistence that it is.

    More utter nonsense from Dana. (Yawn.)

  33. weing says:

    I still think they may be lumping Dana’s body of crap in with actual medical studies to come up with their statistics showing that a lot of medical studies are misleading.

  34. rork says:

    I found the Atlantic article remarkably good, since it surprised me to find the author did actually seem to understand some of the issues, and Ioannidis was extensively quoted. I thought it much better than most lay-press science pieces that often can’t even get the story straight, and I am glad to see these issues discussed as much as possible. I did not think Freedman was saying to throw up hands and embrace woo at all, but perhaps I was just reading the words actually written.

  35. David Gorski says:

    Perhaps it was Freedman’s complete misreading of my anecdote. Also, I read several pieces on Freedman’s blog and even quoted the one on Andrew Wakefield. The Atlantic article in and of itself may not be as bad as, for instance, The Atlantic‘s H1N1 article last fall, but it does leave the reader with the erroneous impression that the evidence base behind modern medicine is so hopelessly biased and shaky that we can’t ever really know what does and does not work–at least not right now.

  36. Th1Th2 says:

    Harriet,

    “The good news is that despite the drawbacks of polypharmacy, we have good evidence that modern pharmaceuticals (singly and in combination) can save lives. Which Dana’s homeopathy can’t.”

    It’s more like a desperate rescue attempt to save the lives of long-time, yet faithful customers who had suffered irreversible damages as an outcome of allopathic and homeopathic treatment regardless. That’s why healthy people don’t need them.

    1. Harriet Hall says:

      Th1Th2 got something right again! Healthy people don’t need drugs to treat illnesses they don’t have!

  37. There’s an irony to this whole issue. Statistics in particular, and evidence based medicine in general, are rare disciplines that are largely self-critical, and thus largely self-correcting. I make this point in a webpage about the post-modern criticisms of evidence based medicine:

    * http://www.pmean.com/07/PostModernAssault.html

    There is a rather fascinating circular argument here. Dr. Ioannidis is using statistics to prove that most statistical findings are false positives. So is his finding also a false positive? You see this in other areas besides the work of Dr. Ioannidis. Many flaws in meta-analysis (e.g., language bias) have been identified through the process of meta-analysis.

    We need a lot more research about the research process, and if some people overreact when this research produces results critical of the research process, then so be it. I’d rather have that than a process that deliberately hid its flaws.

    Steve Simon, http://www.pmean.com

  38. wales says:

    Chris said “I looked at Freedman’s webpage, and he looks like a journalist who has written about management. I doubt he has ever done any science, but thinks he knows about it because he read about it.”

    I don’t know what website Chris is referring to, because Freedman’s website shows he has written/is writing for Scientific American, Science, Wired and Discover, among other publications, and has written books about the U.S. Marines, computer crime and artificial intelligence. There is no emphasis on the topic of management. Freedman’s recent book “Wrong” is about experts in all fields: science, business, journalism, etc. It received many positive reviews from NY Times, Washington Post Book World, etc, including Julian Sheather in the August 21 BMJ and David Voelker in the July 14 eSkeptic, the newsletter of the Skeptics Society.

    Freedman’s response to DG’s comments is here http://www.msomed.org/

    Here is a relevant blog discussion on Richard Smith’s site on the subject of Ioannidis’ work http://blogs.bmj.com/bmj/2010/10/20/richard-smith-important-study-points-towards-a-different-future/

  39. Thanks for the links, wales. (I was surprised how silly Freedman’s response to the post was.)

  40. weing says:

    Another possible reason for the misleading results of published papers may be the clever use of Simpson’s paradox. I wish I had the time to look into that.

  41. wales says:

    DG’s link to Freedman’s site doesn’t work. Try this one: http://www.freedman.com/

    Also, this is pretty funny coming from DG “I first note the title of the article (Lies, Damned Lies, and Medical Science) is intentionally and unnecessarily inflammatory.” When did this methodology become problematic? Or perhaps it is only acceptable when writing under a pseudonym?

  42. Chris says:

    He is still just a journalist, and his books are on disparate subjects like the Marines, organization and computer crime. His own words include: “I’ve spoken to numerous executive, student, scientific and government audiences about science, technology and management issues.” I don’t see anything on his education.

    He is not an expert, more of a jack of all trades. As Dr. Gorski points out he is capable of error.

    Let’s face it, he is not a Carl Zimmer, or a Ben Goldacre, or a Simon Singh, or a Dan Ariely, or an R. Barker Bausell or a Charles Seife or a Paul Offit or a Lawrence Krauss… need I go on?

  43. wales says:

    ‘As Dr. Gorski points out he is capable of error. ” Well aren’t we all? Even your list of exalted experts? Thanks for pointing out the obvious. In fact the fallibility of experts is the point of Freedman’s book “Wrong”.

  44. wales says:

    When medical recommendations, treatment protocols and standards of care are continually changing due to ever new research discoveries, risks are presented to medical consumers. At one level, amongst medical researchers and physicians, the fact that sbm “changes with the evidence” may seem like progress. At another level, that of the medical consumer, risk is risk, whether it derives from unresearched alternative medicine recommendations or thoroughly researched and seemingly proven (but later disproven) science based medicine recommendations. Caveat emptor. Gotta run.

  45. Chris says:

    Read about his other books (from Amazon):

    Corps Business: The 30 Management Principles of the U.S. Marines (it is a book about management):

    For this book Freedman, a senior editor at Forbes ASAP and author of Brainmakers, trained with the Corps and interviewed scores of marines of every rank to discover 31 management principles “built around simple truths about human nature and the uncertainties of dynamic environments.”

    A Perfect Mess: The Hidden Benefits of Disorder – How Crammed Closets, Cluttered Offices, and on-the-Fly Planning Make the World a Better Place (another business management book):

    The premise of this pop business book should generate reader goodwill—who won’t appreciate being told that her messy desk is “perfect”? But despite their convincing defense of sloppy workstations, Columbia management professor Abrahamson (Change Without Pain) and author Freedman (Corps Business, etc.) squander their reader’s indulgence by the end.

    Brainmakers: How Scientists Are Moving Beyond Computers to Create a Rival to the Human Brain (a book about science!):

    Freelance science writer Freedman’s compelling state-of-the-art report on the quest to build human-like thinking machines explores how the field of artificial intelligence is being reinvigorated through AI researchers’ interface with neuroscience, biology and robotics.

    At Large: The Strange Case of the World’s Biggest Internet Invasion (quote cherry picked for my own amusement!):

    The epilog succumbs to preachiness on the topic of computer and network security. More riveting accounts of computer crime can be found in two books from Jonathan Littman, The Fugitive Game (LJ 1/96) and The Watchman (LJ 2/15/97). – Joe Accardi, Northeastern Illinois Univ. Lib., Chicago

    So we have at least two business books, one on the science of artificial intelligence, and a computer crime book (which, when you look closer, is another kind of book on management). The review of Wrong indicates it is also a management type of book. Look, he even gives advice; from one of the reviews: “Freedman provides 11 never-fail rules for not being misled—but of course, he admits, he could be wrong”.

    I see no compelling reason to believe he is an expert on science research. Which I saw someone on another blog compare to herding cats.

    If one were to read about science, errors and interpretation then Ben Goldacre’s Bad Science, and R. Barker Bausell’s Snake Oil Science would be better choices.

  46. wales says:

    Chris, you’re expending a lot of energy trying to prove you weren’t wrong about characterizing Freedman as a management writer. I really don’t care, my point is that he has written about science and he wrote a well received book on the fallibility of experts of all stripes.

    Further, as I mentioned above, Freedman’s Atlantic article paraphrases Ioannidis: “most medical interventions and advice don’t address life-and-death situations, but rather aim to leave us marginally healthier or less unhealthy, so we usually neither gain nor risk all that much.”

    Except, of course, for more serious blunders such as HRT, among others, where a treatment was supposed to marginally improve a patient’s life and ended up dramatically harming it.

  47. Harriet Hall says:

    wales says “risk is risk, whether it derives from unresearched alternative medicine recommendations or thoroughly researched and seemingly proven (but later disproven) science based medicine recommendations.”

    False dichotomy. There are also thoroughly researched and seemingly proven science based medicine recommendations that are not later disproven.

    It’s a gamble, but a rational gambler would look at the odds and would bet on researched science based medicine recommendations rather than unresearched recommendations.

  48. Harriet Hall says:

    wales says “Except of course for more serious blunders such as HRT, as well as others. Where a treatment was supposed to marginally improve a patient’s life and ended up dramatically harming it.”

    This is an oft-repeated claim that is simply false. There were both benefits and harms from HRT. HRT did relieve hot flashes, prevent osteoporotic fractures, decrease the rate of colon cancers, etc. It also increased the rate of cardiovascular events and breast cancer, but it did not increase overall mortality.

  49. wales says:

    HH — A “rational (and skeptical) gambler” has another option, especially for marginal medical issues: wait and see. Why gamble at all?

  50. Harriet Hall says:

    wales,

    Why gamble at all? For marginal medical issues, it’s a matter of whether you are willing to endure annoying symptoms: many patients think marginal issues are worth a gamble. For more serious medical issues, it might be a matter of life and death.

  51. Chris says:

    wales, I just looked at a simple Amazon page and did some cut and pasting. You seem to be spending lots of time trying to say he has the ultimate truth on medical research.

    You cannot say that because one piece of research is wrong, then all of it is worthless.

    There is no comparison between the myriad of legitimate medical research going on around the world and the absolute lack of even a modicum of effort by those in the alt-med world to comply with the bare basics of good science.

    And those who actually do real science have no problem finding out they were in error. They will correct those errors and continue on. You cannot paint them with the same brush as the few who have faked data or hidden their mistakes.

    And if you are going to get advice on research management, get it from someone who knows what they are doing, not a magazine writer.

  52. wales says:

    HH: “It also increased the rate of cardiovascular events and breast cancer, but it did not increase overall mortality.” Mortality is not the only measure of serious harm. Trading hot flashes for breast cancer and cardiovascular events doesn’t seem like a good trade. “Many patients think marginal issues are worth a gamble.” Of course they do, especially when they are reassured by their physicians that “studies show” a treatment to be safe and effective. As I said before, caveat emptor.

    Chris: “You seem to be spending lots of time trying to say he has the ultimate truth on medical research.” I neither said nor implied such a thing. I am saying he is worth listening to and that there is a problem that needs addressing. “You cannot say that because one piece of research is wrong, then all of it is worthless.” I didn’t.

    It’s been fun, but other things are demanding attention at the moment.

  53. Chris says:

    You brought up HRT, the implication being that because it was flawed, all medical research is flawed.

    And I am saying there are better qualified people to read. I doubt Mr. Freedman has ever worked in any kind of science or technical organization, so I am baffled why he thinks he can write about their management. If you have any evidence from his resume that he has a science degree, has worked for a technical firm or even some kind of lab, please present it.

    Though both Dr. Goldacre and Dr. Bausell have worked in research.

    And which physicians say that “studies show” a treatment to be safe and effective?

    In the real world the risks are evaluated, and the patient’s past history is taken into account. This is why I do not get narcotic pain medication, since I am among the 10% who become nauseous from the stuff. (by the way, that little statistic is even in the Drug Info page)

  54. Th1Th2 says:

    Chris,

    “This is why I do not get narcotic pain medication, since I am among the 10% who become nauseous from the stuff. (by the way, that little statistic is even in the Drug Info page)”

    Patients who are dependent (addicted) to narcotics do not exhibit N/V as a side-effect. You should have gotten a smaller dose and once you’re addicted to it you’ll enjoy it.

  55. Chris says:

    (only because Th1Th2 is amusingly stupid and illiterate I guess I must explain… I cannot take narcotics like Oxycontin because after taking one I end up praying at the porcelain throne with gallons of vomit, which is what severe nausea does, and it is very unpleasant when there is a cast on one leg because of a broken bone… there is no way to get addicted to something one’s body cannot tolerate — and let me add you are still a joke who needs to learn how to use a dictionary)

    In contrast to the research management book by Freedman (who has probably never done research outside of a library), here is a recent review:

    The inability to distinguish between fact and fantasy can have grave outcomes. Willful manipulation of the results of poorly designed studies, cherry picking of data (stressing positive outcomes and spinning or ignoring negative ones), and outright scientific fraud have all resulted in poor outcomes for patients, including the worsening of disease and even death.

  56. Chris says:

    Oops, forgot to add it is a review of Ben Goldacre’s book. Someone who has done research, and is seething mad over claims by pharmaceutical companies that turn out to be bogus… on medications he has prescribed. As he states in this radio interview.

    (and yes, I did check out his twitter feed, why do you ask?)

  57. Th1Th2 says:

    Chris,

    I bet that wasn’t your first experience with a narcotic drug. You were prolly given one, something parenteral, just before the Ortho fixed you. Anyway, I’m sorry to hear about your misfortune. No offense here just my 2 ¢.

  58. Chris says:

    What the hell do you mean by “Ortho”? The pesticide company? An orthodontist? Are you on narcotics now?

    Sure, idiot, I was given Demerol through a drip to relieve contractions while in labor (not continuously, it was one shot into the line, I have never had an epidural)… which made it more complicated because I had to get up and vomit. Something you don’t know about since you have never had any children nor been around small children. Fortunately I have quick labors (that one was only four hours), unfortunately I was still sick to my stomach for most of the day.

    I thought it was just a specific kind of narcotic, it took a broken bone to realize it was all of them!

    At that point I was able to get the 10% figure from MedlinePlus, but now they revamped it to make more user friendly. But it still has that Demerol can cause:
    # lightheadedness
    # dizziness
    # weakness
    # headache
    # extreme calm
    # mood changes
    # confusion
    # agitation
    # nausea
    # vomiting

    The list for Oxycontin, aka Oxycodone, is:
    # nausea
    # vomiting

    Oh, wow… the first two!

    And the reason this happens is because there are variations in DNA. It may actually be related to the reason I think cilantro tastes like soap (trust me it is horrible stuff, stay away from it… really).

    And this is why there are variations in medical studies: you cannot make general statements in biology due to the variations in the population.

    So a study shows that some Treatment B for Condition Gamma works 60% of the time… except that it was only done in Population Y. It does not tell you how well Condition Gamma responds to Treatment B in Population X.

  59. JMB says:

    From the link to Freedman’s article provided by Dr Gorski, I would like to address some points in the Atlantic article about bias and funding.

    ********************
    Take the example of a scientific claim described in Freedman’s article,

    the appendices removed from patients with Albanian names in six Greek hospitals were more than three times as likely to be perfectly healthy as those removed from patients with Greek names.

    The first simple observation is that the experiment was designed by the “newly minted doctor” who was asked by a professor to prove the opinion she stated. She then produced a clinical series experiment confirming her bias. Kind of ironic that later in the article we are supposed to be decrying,

    Researchers headed into their studies wanting certain results—and, lo and behold, they were getting them.

    at the same time we are praising the newly minted doctor for the same process of seeking to prove a hypothesis.

    The second point in discussing this example from the article is: knowing that the newly minted doctor had a known bias, would you completely discredit the study as being invalid? Would the correct science-based decision be for licensing authorities to contact surgery residency program directors and notify them that false-positive appendectomies will be monitored, and that if a cultural bias is detected, the license of the residency training program may be in jeopardy?

    I don’t think bias can ever be completely eliminated. In experimental design, all precautions must be taken to minimize bias. Bias has to be considered in the analysis of published scientific claims, and can have serious consequences. Any bias that cannot be eliminated should be openly discussed in a scientific article. It does not mean that the science basis for medicine is 90% discredited. The scientific basis has already discredited about 60% of those scientific medical claims. So only a minority of the scientific basis would be in jeopardy if Ioannidis’ claim was accurate. That was of course, Dr Gorski’s point.

    So do doctors want to hide this information from patients? Most do not want to hide the information, but they want to avoid the confusion. The raw scientific information is very confusing, and a significant part of a physician’s training (at least in residency), is how to assess the conflicting claims, in order to give the best information to the patient for an informed decision. Frankly, I think it’s rather comical to watch the news media report conflicting results every other month.

    ********************

    A shortcoming in the idea that,

    There is an intellectual conflict of interest that pressures researchers to find whatever it is that is most likely to get them funded.

    is the lack of recognition of the publish-or-perish syndrome. It is not just about funding. College promotion/tenure committees establish certain criteria. Most require publication of articles in peer-reviewed journals with a threshold impact factor. A few may require awards of grants for promotion/tenure. Promotion/tenure is the primary motivator for much of the junk science that is published. If that incentive were addressed, we would see the volume of publications reduced and the quality improved. Even allowing Ioannidis’ 10% Bayes’ factor estimate of bias, the final figure would probably drop from 90% to 70% (see the sketch below). There are certainly other motivators, but promotion/tenure is one of the most ubiquitous. It isn’t greed, it’s survival in an academic career.
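
    For anyone who wants to see how a bias term changes that arithmetic, here is a minimal back-of-the-envelope sketch in Python. It is only in the spirit of Ioannidis’ positive-predictive-value argument; the prior, power, and bias numbers below are illustrative assumptions, not figures from his papers.

    # Rough positive predictive value (PPV) of "significant" findings.
    # All parameter values are illustrative assumptions, not published figures.
    def ppv(prior, power, alpha, bias):
        """prior: fraction of tested hypotheses that are actually true
           power: probability a true effect is detected (1 - beta)
           alpha: significance threshold (false-positive rate under the null)
           bias:  fraction of otherwise-negative analyses reported as positive anyway"""
        true_pos = prior * power + prior * (1 - power) * bias
        false_pos = (1 - prior) * alpha + (1 - prior) * (1 - alpha) * bias
        return true_pos / (true_pos + false_pos)

    # Hypothetical example: 1 in 4 tested hypotheses true, 80% power, p < .05 threshold
    print(ppv(prior=0.25, power=0.8, alpha=0.05, bias=0.0))   # ~0.84 with no bias
    print(ppv(prior=0.25, power=0.8, alpha=0.05, bias=0.10))  # ~0.65 with 10% bias

    Even a modest bias term knocks a sizeable chunk off the fraction of “positive” findings that are actually true, which is the qualitative point being made above.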

    ********************

    @pschability

    Not sure I understand how mortality is more difficult to demonstrate in studies than soft outcomes.

    Depending on the age group you are dealing with and the duration of the experiment, many deaths can occur in both the experimental and control groups that are unrelated to the disease process under study. For example, take a group of women aged 40 to 50 and follow them for ten years. Of those who die in those ten years, perhaps only 7% die of breast cancer. Higher percentages may die from traffic accidents, other accidents, heart disease, infections, or other cancers combined. If the intervention we are testing is only going to reduce breast cancer deaths by 20%, then we are trying to detect a difference between the control and intervention arms of only 1.4% of the deaths that occur. Considering that only a small percentage of women between 40 and 50 will die within ten years, the total number of women that would have to be included in the trial to detect that small a difference is quite large. Even the large studies designed to include enough subjects to have the statistical power to address mortality often don’t finish with enough subjects for adequate statistical power. Most then fall back on the classification of the cause of death as the endpoint. Of course, that classification is a potential source of bias. At least enough RCTs have been completed with reasonably similar results to suggest that the bias was adequately controlled (there will of course be those that disagree with this).
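
    To put rough numbers on that, here is a minimal sample-size sketch in Python. The 3% ten-year all-cause mortality figure is a hypothetical assumption chosen only to mirror the example above; nothing here comes from an actual trial.

    from scipy.stats import norm

    # Hypothetical illustration of why all-cause mortality endpoints need huge trials.
    p_death     = 0.03   # assumed 10-year all-cause mortality, women aged 40-50
    frac_breast = 0.07   # assumed fraction of those deaths due to breast cancer
    reduction   = 0.20   # assumed relative reduction in breast cancer deaths

    p_control = p_death
    p_treated = p_death - p_death * frac_breast * reduction  # absolute difference ~0.04 percentage points

    alpha, power = 0.05, 0.80
    z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
    p_bar = (p_control + p_treated) / 2

    # Standard normal-approximation sample size for comparing two proportions
    n_per_arm = ((z_a * (2 * p_bar * (1 - p_bar)) ** 0.5 +
                  z_b * (p_control * (1 - p_control) + p_treated * (1 - p_treated)) ** 0.5) ** 2
                 / (p_control - p_treated) ** 2)
    print(round(n_per_arm))  # on the order of a couple of million women per arm

    With these made-up but not unreasonable rates, an all-cause mortality endpoint would require millions of women per arm, which is exactly why trials fall back on cause-specific mortality and why that classification step matters.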

  60. David Gorski says:

    Also, this is pretty funny coming from DG “I first note the title of the article (Lies, Damned Lies, and Medical Science) is intentionally and unnecessarily inflammatory.” When did this methodology become problematic? Or perhaps it is only acceptable when writing under a pseudonym?

    Given that he’s a writer, it beggars the imagination that I should have to point out to Mr. Freedman that my posts under a pseudonym (or here on SBM, for that matter) aren’t articles in The Atlantic. They’re blog posts. Different rules and norms apply. Mr. Freedman appears to be ignoring that distinction. Does Mr. Freedman write the same way for a book as he does for an article as he does for one of his own blog posts? Of course not! If I were writing an article for The Atlantic or other respected magazine, I would not write using the same styles or tones that I routinely adopt here and on my other blog. (For one thing, I’d definitely have to lose my tendency towards logorrhea and be a lot more concise and disciplined in my self-editing; the introductory two paragraphs would have to go, as would nearly all my inimitable asides.) I’m afraid Mr. Freedman strikes me as just being a bit peeved that I criticized his piece after he had quoted my blog in it. Sorry about that, but the article annoyed me.

    Oh, well. I guess I probably won’t be quoted by Mr. Freedman again, at least any time soon.

  61. Dawn says:

    @Chris: yeah, nausea (the feeling you need to vomit) and vomiting (the actual physical action of emesis) are usually the first and most common side effects of narcotics. I get them from some narcotics, not from others. However, since intractable vomiting is not fun, I always tell physicians that if they are going to give me narcotics they must give me an anti-emetic first. It doesn’t always prevent the vomiting, but it usually lessens the duration and amount.

    @Th1Th2: (and no, I don’t know why I’m feeding the troll, guess it’s because my day is going so slowly) you are aware that there are 2 types of addiction, right? And that lower doses don’t necessarily or even usually lead to either type of addiction? Your knowledge of narcotics seems to be on par with your knowledge of vaccines, immunology, and other subjects.

  62. Th1Th2 says:

    Chris,

    “And the reason this happens is because there are variations in DNA. It may actually be related to the reason I think cilantro tastes like soap (trust me it is horrible stuff, stay away from it… really).”

    Who are you trying to fool here? I’ve heard of that before from a patient who had experienced withdrawal symptoms, and guess what, with severe N/V. What makes you different? Narcotics are habit-forming. Patients who are on them started off with a smaller dose, just enough to not cause emesis, and the rest is how Modern Medicine developed drug addiction among patients. You’re trying to justify symptom appearance based on the drug but in fact they are all just the same. It’s just a matter of dosage and route of administration. You know, garbage in, garbage out.

    It’s not the DNA but rather denial and manipulative behavior.

  63. Th1Th2 says:

    Dawn,

    “And that lower doses don’t necessarily or even usually lead to either type of addiction?”

    Oh that’s exactly how Modern Medicine establishes the foundation of a soon-to-be drug addict. They titrate narcotics PRN. You see street junkies go to the drug pusher or source ‘as needed’ too. You know how alcoholics used to spit out their first taste of alcohol? And right now, they are hitting it big time.

  64. Th1Th2 – So I guess if you ever have an accident or illness, say severe burns or liver cancer, you’ll be narcotics free? Good luck with that.

  65. Th1Th2 says:

    How I plan to be in an accident or to be sick is NOT my priority thus far. Why the need to rush things?

  66. So your main concern right now is how other patients handle traumatic pain? You have not thought to put yourself in the patient’s place before deciding what medications “modern medicine”* should offer to deal with the pain?

    *calling narcotics modern medicine is a bit of a stretch by the way, opium has been around for a while, since Mesopotamia** at least.

    **I love google and google loves me.

  67. weing says:

    micheleinmichigan,

    It’s modern medicine if it works. Doesn’t matter if it’s several thousand years old. If it doesn’t work, it’s CAM, even if developed yesterday.

  68. Th1Th2 says:

    Are narcotics the only solution? Admit it, most people are too weak to cope with pain, and even more so with addiction.

  69. I stand corrected, weing, I guess I was going by art terminology. Giotto was damn good, but I wouldn’t call him modern. :)

  70. Th1Th2 says:

    BTW experienced narcotic users are very ‘smart’ people. They know exactly how to manipulate the persons around them. You see doctors will never refuse a drug seeking patient.

    1. Harriet Hall says:

      Th1Th2 says “doctors will never refuse a drug seeking patient.”

      Ha ha ha! ROTFL.

  71. Th1Th2 says:

    “calling narcotics modern medicine is a bit of a stretch by the way, opium has been around for awhile, since Mesopotamia** at least”

    So what’s the point? People got addicted to it.

  72. Toiletman says:

    I guess we are not really talking about narcotics in general here but more precisely about opioids as pain medication. I actually don’t see much reason to speak against opioids. It’s not that we have any alternative for stronger pain yet, at least if you don’t want to get conotoxins intrathecally. Opioids can be addictive, but the risks can be reduced with proper education for both patients and doctors (those who don’t work with that group of medications regularly often have too little experience). One relatively simple way is to use extended-release formulations to avoid the kind of “kick” intentional drug users desire so much. Another is having a fixed time schedule for the extended-release ones.

    While more research for better drugs is always good, opioids are the best we have so far and simply essential for modern medicine.

  73. Th1Th2 “Are narcotics the only solution?”

    Seems you might want to answer that question before deciding that “modern medicine” is equivalent to a street pusher.

    “Admit it most people are too weak to cope with pain and even more so with addiction.”

    Yes, I admit it, I have seen incredibly strong and good people crippled by pain. I have also seen children who have had to undergo painful surgeries and need pain medication so that they can eat and drink in order to heal properly. Is that what you call weak? The vast majority of people I have known who have used narcotics for pain relief have had no problem with addiction. Still, knowing there is a chance of addiction in some circumstances, I think it would be unfeeling to withhold pain relief from a patient because you believe that they are ‘too weak’ to deal with addiction.

  74. Toiletman, very sensible answer. Thank you.

    By the way, the juxtaposition of your user name and the sensible answer is quite delightful. Thanks for that as well.

  75. This billy goat’s gotta head on over to greener pastures.

  76. Th1Th2 says:

    Micheleinmichigan,

    “Seems you might want to answer that question before deciding that “modern medicine” is equivalent to a street pusher.”

    Certainly, patients, like junkies, will not dare to trade narcotics for a bottle of Tylenol or something.

    “Yes, I admit it, I have seen incredibly strong and good people crippled by pain. I have also seen children that have had to undergo painful surgeries and need pain medication so that they can eat and drink in order to heal probably. Is that what you call weak?”

    What do narcotics have to do with it?

    “The vast majority of people that I have known that have used a narcotics for pain relief have had no problem with addiction.”

    Because they are hooked on it already. They don’t have to accept the fact that they are addicted. All they have to do is deny it so that they won’t be compared to or labeled as junkies.

    “, I think it would be unfeeling to withhold pain relief from a patient because you believe that they are ‘too weak’ to deal with addiction.”

    Like I said, pain relief can be achieved with the use of NON-narcotic drugs, but since most patients are too weak to handle acute pain, they would prefer potent narcotics knowing full well that they are habit-forming.

  77. Harriet Hall says:

    Th1Th2,

    Ha, ha, ha! ROTFL. Thanks for the comic relief.

  78. Th1Th2 – You should try reality at some point. It’s addictive.

  79. Dawn says:

    OK. Now I’m just flabbergasted. No, actually, now that I think about it more, I’m not surprised. After all, Th1Th2 has shown before that he/she doesn’t have a clue. So, my daughter after her tonsillectomy, myself after major surgery, we all became addicted to medications? Weird, I thought addiction meant that you had to have them or go through withdrawal. Instead, only taking them for severe pain the day/day after surgery, dropping down to advil/tylenol a day or so later, and not taking anything at all after that means we are all addicted? OK, then.

    Th1Th2: you obviously know nothing about addiction and narcotics, but are a product of a mentality where any narcotic will addict you instantly and make you a drug addict forever.

    Harriet Hall: I’m beginning to think Th1Th2 is really rather scary, and I hope I never meet him/her in real life. Anyone who is that rigid in their thinking is scary.

  80. “Harriet Hall: I’m beginning to think Th1Th2 is really rather scary, and I hope I never meet him/her in real life.”

    Actually, Th1Th2 is rather like my father was… I should know better than to let him/her draw me in, but sometimes I guess I can’t help it, for old times’ sake.

  81. Chris says:

    I don’t know whether the thing that keeps slithering out from under the bridge is a thirteen-year-old who has never had its wisdom teeth removed… or some low-end employee of a hospital (which I think was suggested on another thread).

    If it is the latter, it would be scary if, as it pushes its cart around the ER, it tells folks on gurneys with splints on various limbs that they purposely broke bones just to get narcotics. If it did that to me when I was in the ER after breaking my ankle, I might have upchucked all over it — and not just because I cannot tolerate narcotics, but for the state of its teeth!

  82. Toiletman says:

    I personally think it’s just the usual kind of internet troll. I wonder what happens if you feed the troll with narcotics. I’m not sure since I’m a foreigner from continental Europe, but do antipsychotics count as narcotics over there? If we take the word literally, then at least those with lower potency should be classified as such due to their strong sedative/hypnotic effects…

    But seriously, the things this user says make no sense at all. The non-narcotic painkillers (with the exception of the intrathecal cone snail toxin and licking the skin of certain tropical frogs) simply don’t have any effect on severe pain.

  83. Chris says:

    Toiletman, check the link I offered when it first appeared. It is an interesting study in the Dunning-Kruger effect. I just so love getting parenting “advice” from someone who has no clue about child development!

  84. pmoran says:

    Wales: Further, as I mentioned above, to quote Freedman’s Atlantic article, where he paraphrases Ioannidis: “most medical interventions and advice don’t address life-and-death situations, but rather aim to leave us marginally healthier or less unhealthy, so we usually neither gain nor risk all that much.”

    I can’t find that quotation, but I was about to make the same point. There are some unnecessarily alarmist and spiteful interpretations of Ioannidis’ work, but is this an attempt by Freedman and Ioannidis himself to put it into proper perspective?

    The vast majority of clinical research does not involve major changes in clinical practice. It comes from doctors straining for incremental improvements in treatments that have usually already been validated by literally hundreds of clinical studies, or from “Big Pharma” comparing new drugs with very similar ones, hoping for the small improvements in effectiveness or safety that can translate into market share.

    Thus it happens that a lot of the time medical research is looking for benefits or risks that are close to the limits of detection in clinical studies of practical size.

    This is what p < .05 really means.

    It means that a “positive” finding in the usual kind of clinical trial is far from a sure thing: the threshold only limits how often chance alone would produce such a result to roughly 1 in 20, and that is before you even look at its size, other measures of quality, the prior plausibility of the treatment, and other factors such as possible bias of the investigators. I have no problem with the public knowing this; it helps them understand why completely implausible treatments such as homeopathy will sometimes appear to perform better than placebo in clinical studies.
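
    To make that concrete, here is a quick simulation in Python (purely illustrative, with made-up numbers): a “treatment” with no effect whatsoever, tested over and over at the conventional p < .05 level, will still come out “positive” in roughly 1 trial in 20.

    import numpy as np
    from scipy.stats import ttest_ind

    # Purely illustrative: 1000 trials of an inert "treatment" (both arms drawn
    # from the same distribution), each analyzed at the conventional p < .05 level.
    rng = np.random.default_rng(0)
    n_trials, n_per_arm = 1000, 50
    false_positives = 0
    for _ in range(n_trials):
        placebo = rng.normal(loc=0.0, scale=1.0, size=n_per_arm)
        treatment = rng.normal(loc=0.0, scale=1.0, size=n_per_arm)  # identical: no true effect
        _, p = ttest_ind(treatment, placebo)
        false_positives += p < 0.05
    print(false_positives / n_trials)  # ~0.05: chance alone makes about 1 in 20 trials "positive"

    Run enough trials of something inert and a few “wins” are guaranteed, which is why single positive studies of highly implausible treatments should not move anyone very much.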

    It is not even news, although Ioannidis has made us more aware of the various mechanisms that can distort bodies of research, for example in this one:

    http://www.bmj.com/content/341/bmj.c4875.extract

    I still wish he had not made the “most published research papers are wrong” comment.

  85. wales says:

    I know this isn’t really relevant to the topic at hand, but I had to see if anyone else is flabbergasted by this.

    http://www.npr.org/blogs/health/2010/10/27/130857472/top-10-federal-fraud-settlements-had-health-twist

    and this

    http://www.taf.org/top20.htm

    Why is it that the largest fraud and false claims suits involve healthcare entities?

  86. wales says:

    Make that primarily pharmaceutical companies.

  87. Chris says:

    wales, that is terrible, plus it is old news. It is kind of what I pointed out after you left last night, when I found Ben Goldacre’s Twitter stream. I see you did not bother clicking on those links.

    I also think that constantly pointing out an issue that we all know about smacks of this kind of logic:

    1) Look Big Pharma did something bad!

    2) This makes that drug bad!

    3) Therefore all of the drugs are bad!

    Yeah, yeah, yeah… you are going to come back and whine that you did not mean that. But just the fact that you posted the link means you have not even read Dr. Moran’s comment.

  88. wales says:

    fiscal year ending Sept 30, 2010 is old news? really?

  89. wales says:

    Chris you are getting sloppy. Novella’s Oct 20 piece has absolutely nothing to do with the NPR piece I cited which itemizes the top ten federal fraud settlements of 2010.

  90. wales says:

    Constantly pointing out an issue that we already know about is bad? If that were true, then the SBM blog would be out of business, having run out of steam on its repetitive carping about alternative medicine. Can’t have it both ways.

  91. weing says:

    “Why is it that the largest fraud and false claims suits involve healthcare entities?”

    Where have you been the last few years? Oh, that’s right. The billions defrauded from taxpayers by financial institutions, government agencies, etc, don’t count because, except for Madoff, they haven’t been prosecuted.

  92. Dr Benway says:

    Why is it that the largest fraud and false claims suits involve healthcare entities?

    My guess:
    1. MBAs and marketers run the show rather than people who understand illness and its treatment (aside: if your local hospital offers Reiki it’s likely a for-profit operation).
    2. There is something wrong with what we teach MBAs and marketers at our centers of higher education.
    3. Healthcare billing rules have become ridiculously complex.
    4. States no longer comprehend the complexity and so outsource management of their Medicaid programs to BC/BS and the like.
    5. BC/BS and the like further outsource “carve outs” like mental health care to managed care companies like Magellan.
    6. Aspiring fraudsters gravitate toward confusing environments with lots of middle-men.
    7. Successful lawyers are successful; the cottage healthcare whistleblower industry is expanding.
    8. The government is broke. Drug companies are not (well, were not before the smackdowns).

    Where are we headed?

    Big Pharma will invest in supplements and “boutique” services where they will not be called to account and where the public are easily fooled. And because you can’t really separate academic medicine from that which funds it, well… more tolerance for the crazy at our med schools.

  93. Th1Th2 says:

    Dawn,

    “you obviously know nothing about addiction and narcotics, but are a product of a mentality where any narcotic will addict you instantly and make you a drug addict forever. ”

    Instantly? But that can be the start. Fortunately for you, whatever kind of ‘major’ operation you have had did not warrant long-term narcotic usage, or it could be that your pain level was easily managed with non-narcotics. Either way, narcotics were still given as an option.

  94. Th1Th2 says:

    wales,

    http://www.npr.org/blogs/health/2010/10/27/130857472/top-10-federal-fraud-settlements-had-health-twist

    and this

    http://www.taf.org/top20.htm

    OUCH!

    Blasphemy. No one dares to attack this church like that. They are infallible, can’t you see? :)

  95. wales says:

    weing, I worked in the investment and banking industry for 20 years; there was and is plenty of fraud and prosecution going on, and lots of hand-slapping of large firms that have to cough up big bucks, which is why I was so surprised that healthcare fraud outsizes investment industry fraud.

  96. weing says:

    Let’s see: Enron, Fannie and Freddie, just for starters. Are you sure we didn’t get taken to the cleaners by these crooks? The fraud perpetrated by them and the mortgage writers is, I suspect, much greater than big pharma’s. If you can show otherwise by giving me the numbers to compare, I can be convinced. I recall reading somewhere that there is a legal ruling that a manager is obligated to break the law if doing so is profitable for the company. The cost of breaking the law is factored in as a business cost. I’ll have to find where I read that, but I think it explains a lot of the problem with these corporations. Their sole obligation is profit.

  97. Chris says:

    Stephen Simon, I am starting to read your page. As a gardener I do need to point out one thing: a tulip bulb is not a rhizome. Go to your local garden nursery and ask about tulips and rhizomes, please.

    Or go to your local grocery store. An onion is a bulb, and a ginger root is a rhizome. Now compare them to a tulip bulb. If you have never seen one, go visit a garden store; I am sure there are still some for sale.

    wales can be ignored just like the Thing, since she/he has failed to read the article or any of the substantial comments (like those from Dr. Moran or JMB). How much was the mortgage bailout again? Something in the hundreds of billions of dollars?

Comments are closed.