Articles

Archive for Clinical Trials

The continuum of surgical research in science-based medicine

Editor’s note: Three members of the SBM blogging crew had a…very interesting meeting on Friday, one none of us expected, the details of which will be reported later this week–meaning you’d better keep reading this week if you want to find out. (Hint, hint.) However, what that means is that I was away Thursday and Friday; between the trip and the various family gatherings I didn’t have time for one of my usual 4,000-word screeds of fresh material. However, there is something I’ve been meaning to discuss on SBM, and it’s perfect for SBM. Fortunately, I did write something about it elsewhere three years ago. This seems like the perfect time to spiff it up, update it, and republish it. In doing so, I found myself writing far more than I had planned, making the result differ from the old post a lot more than I had expected, but I guess that’s just me.

In the meantime, the hunt for new bloggers goes on, with some promising results. If we haven’t gotten back to you yet (namely most of you), please be patient. This meeting and the holiday–not to mention my real life job–have interfered with that, too.

The continuum of surgical research in science-based medicine

One of the things about science-based medicine that makes it so fascinating is that it encompasses such a wide variety of modalities that it takes a similarly wide variety of science and scientific techniques to investigate various diseases. Some medical disciplines consist mainly of problems that are relatively straightforward to study. Don’t get me wrong, though. By “straightforward,” I don’t mean that they’re easy, simply that the experimental design of a clinical trial to test a treatment is fairly easily encompassed by the paradigm of randomized clinical trials. Medical oncology is just one example, where new drugs can be tested in randomized, double-blinded trials against or in addition to the standard of care without having to account for the many difficulties that arise from problems with blinding. We’ve discussed such difficulties before, for instance, in the context of constructing adequate placebos for acupuncture trials. Indeed, this topic is critical to the application of science-based medicine to various “complementary and alternative medicine” modalities, which do not as easily lend themselves to randomized double-blind placebo-controlled trials, although I would hasten to point out that the fact that it can be very difficult to do such trials is not an excuse for not doing them. The development of various “sham acupuncture” controls, one of which consisted of nothing more than gently twirling a toothpick poked against the skin, shows that.

One area of medicine where it is difficult to construct randomized controlled trials is surgery. The reasons are multiple. For one thing, it’s virtually impossible to blind the person doing the surgery to what he or she is doing. One way around that would be to have the surgeons who do the operations not be involved with the postoperative care of the patients at all, while the postoperative team doesn’t know which operation the patient actually got. However, most surgeons would consider this not only undesirable, but downright unethical. At least, I would. Another problem comes when the surgeries are sufficiently different that it is impossible to hide from the patient which operation he or she received. Moreover, surgery itself has a powerful placebo effect, as has been shown time and time again. Even so, surgical trials are very important and produce important results. For instance, I wrote about two trials of vertebroplasty for osteoporotic fractures, both of which produced negative results showing vertebroplasty to be no better than placebo. Some surgical trials have been critical to defining a science-based approach to how we treat patients, such as trials showing that survival rates are the same in breast cancer treated with lumpectomy and radiation therapy as they are when the treatment is mastectomy. Still, surgery is a set of disciplines where applying science-based medicine is arguably not as straightforward as it is in many specialties. At times, applying science-based medicine to it can be nearly as difficult as it is to do for various CAM modalities, mainly because of the difficulties in blinding. That’s why I’m always fascinated by strategies by which we as surgeons try to make our discipline more science-based.
(more…)

Posted in: Clinical Trials, Science and Medicine, Surgical Procedures

Leave a Comment (15) →

Genetic Testing for Patients on Coumadin

Anticoagulation is advised for patients who have had a blood clot or who are at increased risk of blood clots because of atrial fibrillation, artificial heart valves, or other conditions. Over 30 million prescriptions are written every year in the US for the anticoagulant warfarin, best known under the brand name Coumadin. Originally developed as a rat poison, warfarin has proved very effective in preventing blood clots and saving lives; but too much anticoagulation leads to the opposite problem: bleeding. A high level of Coumadin might prevent a stroke from a blood clot only to cause a stroke from an intracranial bleed. The effect varies from person to person and from day to day depending on things like the amount of vitamin K in the diet and interactions with other medications. It requires careful monitoring with blood tests, and it is tricky because there is a delay between changing the dose and seeing the results.

In his book The Language of Life, Francis Collins predicts that Coumadin will be the first drug for which the so-called Dx-Rx paradigm — a genetic test (Dx) followed by a prescription (Rx) — will enter mainstream medical practice. FDA economists have estimated that by formally integrating genetic testing into routine warfarin therapy, the US alone would avoid 85,000 serious bleeding events and 17,000 strokes annually.
A recent news release from the American College of Cardiology described a paper at their annual meeting reporting a study of

896 people who, shortly after beginning warfarin therapy, gave a blood sample or cheek swab that was analyzed for expression of two genes — CYP2C9 and VKORC1 — that revealed sensitivity to warfarin. People with high sensitivity were put on a reduced dose of warfarin and had frequent blood tests. People with low sensitivity were given a higher dose of warfarin.

During the first six months that they took warfarin, those who underwent genetic testing were 31 percent less likely to be hospitalized for any reason and 29 percent less likely to be hospitalized for bleeding or thromboembolism than were a group that did not have genetic testing.

Epstein said that the cost of the genetic testing — $250 to $400 — would be justified by reduced hospitalization costs.

At this point, I don’t believe this study. I’ll explain why I’m skeptical. (more…)

Posted in: Clinical Trials, Pharmaceuticals

Leave a Comment (18) →

The case of John Lykoudis and peptic ulcer disease revisited: Crank or visionary?

One of the themes of SBM has been, since the very beginning, how the paradigm of evidence-based medicine discounts plausibility (or, perhaps more appropriately, implausibility) when evaluating whether or not a given therapy works. One of our favorite examples is homeopathy, a therapy that is so implausible on a strictly scientific basis that, for it to work, huge swaths of well-established science supported by equally huge amounts of experimental and observational evidence would have to be found to be all in serious error. While such an occurrence is not per se impossible, it is incredibly unlikely. Moreover, for scientists actually to start to doubt our understanding of chemistry, biochemistry, pharmacology, and physics to the point of thinking that our understanding of them is in such serious error that homeopathy is a valid description of reality, it would take a lot more than a bunch of low-quality or equivocal studies that show no effect due to homeopathy detectably greater than placebo.
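To put a rough number on why prior plausibility matters, here is a minimal back-of-the-envelope Bayes calculation in Python (my own toy numbers, not anything drawn from this post or from any homeopathy trial): with a low prior probability, even a nominally “positive” trial leaves a treatment far more likely to be a false positive than a real effect.

# Toy Bayes calculation (illustrative numbers of my own choosing, not data
# from any actual trial): how much should one "positive" study move our belief
# when the prior plausibility of a treatment is very low?
prior = 0.01   # assumed prior probability that the treatment really works
power = 0.80   # P(positive study | treatment works)
alpha = 0.05   # P(positive study | treatment does not work), the false-positive rate

posterior = (power * prior) / (power * prior + alpha * (1 - prior))
print(f"Posterior probability the treatment works: {posterior:.2f}")  # about 0.14

Even with a generous 1% prior, a single study significant at p < 0.05 leaves the odds against the treatment being real; for something as implausible as homeopathy, the prior is orders of magnitude lower still.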

On Friday, Kim Atwood undertook an excellent discussion of this very issue. What really caught my attention, though, was how he educated me about a bit of medical history of which I had been completely unaware. Specifically, Kim discussed the strange case of John Lykoudis, a physician in Greece who may have discovered the etiology of peptic ulcer disease (PUD) due to H. pylori more than a quarter century before Barry Marshall and Robin Warren discovered the bacterial etiology of PUD in 1984. One reason that this story intrigued me is the same reason that it intrigued Kimball. Lykoudis’ story very much resembles that of many quacks, in particular Nicholas Gonzalez, in that he claimed results far better than what medicine could produce at the time, fought relentlessly to try to prove his ideas to the medical authorities in Greece at the time, and ultimately failed to do so. Despite his failure, however, he had a very large and loyal following of patients who fervently believed in his methods. The twist on a familiar story, however, is that Lykoudis may very well have been right and have discovered a real, effective treatment long before his time.
(more…)

Posted in: Basic Science, Clinical Trials, Science and Medicine

Leave a Comment (10) →

The 2nd Yale Research Symposium on Complementary and Integrative Medicine. Part I

March 4, 2010

Today I went to the one-day, 2nd Yale Research Symposium on Complementary and Integrative Medicine. Many of you will recall that the first version of this conference occurred in April, 2008. According to Yale’s Continuing Medical Education website, the first conference “featured presentations from experts in CAM/IM from Yale and other leading medical institutions and drew national and international attention.” That is true: some of the national attention can be reviewed here, here, here, and here; the international attention is here. (Sorry about the flippancy; it was irresistible)

I’ve not been to a conference promising similar content since about 2001, and in general I’ve no particular wish to do so. This one was different: Steve Novella, in his day job a Yale neurologist, had been invited to be part of a Moderated Discussion on Evidence and Plausibility in the Context of CAM Research and Clinical Practice. This was not to be missed.

(more…)

Posted in: Chiropractic, Clinical Trials, Health Fraud, Herbs & Supplements, Homeopathy, Medical Academia, Medical Ethics, Nutrition, Politics and Regulation, Science and Medicine

Leave a Comment (26) →

Acupuncture for Depression

One of the basic principles of science-based medicine is that a single study rarely tells us much about any complex topic. Reliable conclusions are derived from an assessment of basic science (i.e., prior probability or plausibility) and a pattern of effects across multiple clinical trials. However, the mainstream media generally report each study as if it were a breakthrough or the definitive answer to the question at hand. If the many e-mails I receive asking me about such studies are representative, the general public takes a similar approach, perhaps due in part to the media coverage.

I generally do not plan to report on each study that comes out, as that would be an endless and ultimately pointless exercise. But occasionally focusing on a specific study is educational, especially if that study is garnering a significant amount of media attention. And so I turn my attention this week to a recent study looking at acupuncture in major depression during pregnancy. The study concludes:

The short acupuncture protocol demonstrated symptom reduction and a response rate comparable to those observed in standard depression treatments of similar length and could be a viable treatment option for depression during pregnancy.

(more…)

Posted in: Acupuncture, Clinical Trials, Neuroscience/Mental Health, Obstetrics & gynecology

Leave a Comment (144) →

Changing Your Mind

Why is my mind so clean and pure?  Because I am always changing it.
In medical school the old saying is that half of everything you learn will not be true in 10 years, the problem being they do not tell which half.
In medicine, the approach is, one hopes, that data leads to an opinion. You have to be careful not to let opinion guide how you evaluate the data. It is difficult to do, and I tell myself that my ego is not invested in my interpretation of the data. I am not wrong; I am giving the best interpretation I can at the time. For years I yammered on about how it made no sense to give a beta-lactam and a quinolone for sepsis, until a retrospective study suggested benefit of the combination. Bummer. Now when I talk to the housestaff about sepsis, I have to add a caveat about combination therapy. It is why my motto is, only half jokingly, “Frequently in error, never in doubt”.
At what point do you start to change your mind? Alter your message as a teacher? Adopt new behavior? Medicine is not all or nothing, black and white. Changes are incremental, and opinions change slowly, especially if the results of a new study contradict commonly held conclusions from prior investigations.
Nevertheless, I am in the process of changing my mind, and it hurts. I feel like Mr. Gumby (http://www.youtube.com/watch?v=IIlKiRPSNGA).
It is rare that there is one study that changes everything; medicine is not an Apple product. Occasionally there is a landmark study that alters practice in such a dramatic way that there is a before and an after. As I write this I cannot think of a recent example in infectious diseases, but I am sure there is one. The problem is that once practice changes, it seems as if we have always done it that way.
For me, three is the magic number. One study that goes against received wisdom warrants an “interesting, but give me more.”
Two studies, especially if they use different methodologies and get the same results, earn a “well, two is interesting, but I can argue against it.” However, with two studies the seed of doubt is planted, waiting to be watered with the water of further confirmation. Yeah. Bad metaphor.
Three studies with different methodologies independently confirming new concepts?  Then I say, “I change my mind. My brain hurts.”
There are now three studies concerning the efficacy of the flu vaccine in the elderly. You might remember my discussion of the Atlantic article several months ago. In that entry I discussed two articles that suggested the flu vaccine may be less effective in the elderly than prior studies had demonstrated. https://www.sciencebasedmedicine.org/?p=2495
The argument was that the elderly who received the influenza vaccine were healthier at baseline than those who didn’t receive the vaccine, and that the decreased deaths during flu season were not due to protection from the vaccine, but due to the fact that healthier people are less likely to die when they get ill. In part this was demonstrated by showing decreased deaths in vaccinated populations when influenza was not circulating. If insomnia is a problem, you can go back and read my post. To quote my favorite author, me, I said:
“One, it is an outlier, and outliers need confirmation. The preponderance of all the literature suggests that influenza vaccine prevents disease and death. If you do not get flu, you cannot die from flu or flu related illnesses. When outliers are published, people read them, think, “huh, that’s interesting”, but there is going to have to be more than one contradictory study to change my practice. But if “study after study” shows mortality benefit, and one study does not, it is food for thought, but not necessarily the basis of changing practice. The results, above all, need to be repeated by others… In medicine we tend to be conservative about changing practice unless there is a preponderance of data to suggest a change is reasonable. Except, of course, if our big pharma overlords take us to a good steak house.”
Now we have a third article, “Evidence of Bias in Studies of Influenza Vaccine Effectiveness in Elderly Patients” from the Journal of Infectious Diseases.
In the study, they examined the records of the elderly in the Kaiser Health System, looking at their vaccination histories and their risk of death. And the results were interesting.
“The percentage of the population that was vaccinated varied with age. After age 65, influenza vaccination increased until age 78 in women and age 81 in men, then decreased with increasing age. Vaccination coverage also varied in a curvilinear fashion with risk score, increasing with risk score to a risk score percentile of ∼80%, then decreasing. In addition, as the predicted probability of death increased, vaccination coverage increased. Vaccination coverage was highest among members with a probability of death of 3%–7.5%. Those with a predicted probability of death in the coming year of 17.5% had a decreasing likelihood of influenza vaccination.”
They then looked at mortality when flu was not circulating.
“A change in the pattern of vaccination had a striking effect on mortality. For members > 75 years old who had been receiving influenza vaccinations in previous years, not receiving a seasonal influenza vaccination was strongly associated with mortality in the months ahead (Table 1). A person who had received an influenza vaccination every year in the previous 5 years had a more than double probability of death outside the influenza season if he or she missed a vaccination in the current year, compared with a person who was vaccinated as usual (odds ratio, 2.17; P < .001). On the other hand, if a person did not receive any seasonal influenza vaccination in the previous 5 years, then receipt of a vaccination in the current year was associated with a greater probability of death. “
If they had a history of flu vaccine for five years and missed it, the probability of death went up.
If they did not have a flu vaccine for five years and got one, the probability of death went up.
They suggest that, in the first case, the patients may have had an increase in their co-morbidities and as a result did not get the vaccine and died of their underlying diseases. Their increased risk of death was from accumulating prior illnesses.
In the second case, people who were healthy and did not seek care subsequently developed diseases that led them to a doctor, who advised the vaccine. Their increased risk of death was due to new illnesses.
Either way, uptake of the flu vaccine is more complicated than I had suspected, which makes the efficacy of the vaccine in prior studies harder to evaluate. The table shows an unexpected relationship between age, risk of death, and use of the flu vaccine.
[Table from the paper, relating age, risk of death, and vaccination coverage, not reproduced here.]
They say in the discussion
“We showed that, despite strong efforts to increase vaccination among the elderly population, vaccination is relatively low in the oldest and sickest portions of the population. Persons 65 years old with a 17.5% chance of death in the upcoming year are less likely to receive the influenza vaccine. Because persons who are most likely to die are less likely to receive the vaccine, vaccination appears to be associated with a much lower chance of dying; thus, the “effectiveness” of the vaccine is in great part due to the selection of healthier individuals for vaccination, rather than due to true effectiveness of the vaccine. Previous studies have argued that worsening health is associated with increasing vaccination. We found this to be a curvilinear relationship, in which increasing illness means increasing vaccination, up to a point, and then, as people come closer to the end of life, there is a decrease in vaccination coverage.”
They do not say the vaccine is not effective, but they suggest that there is a bias that may make the vaccine appear more effective in the elderly than it really is.  Reality is often more complex than one would think at the beginning.
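To see how this kind of selection bias can manufacture apparent effectiveness, here is a small toy simulation of my own in Python (invented numbers, nothing taken from the paper): the simulated vaccine does nothing at all, yet the vaccinated group still shows markedly lower mortality simply because healthier people are more likely to be vaccinated.

import random

random.seed(0)

# Toy model of healthy-vaccinee bias. The "vaccine" below has NO effect on
# mortality; death depends only on underlying health. All numbers are invented.
N = 100_000
deaths = {True: 0, False: 0}
counts = {True: 0, False: 0}

for _ in range(N):
    healthy = random.random() < 0.80          # most of the cohort is relatively healthy
    p_vaccinated = 0.70 if healthy else 0.30  # the healthy are more likely to be vaccinated
    vaccinated = random.random() < p_vaccinated
    p_death = 0.02 if healthy else 0.10       # mortality driven entirely by health status
    died = random.random() < p_death
    counts[vaccinated] += 1
    deaths[vaccinated] += died

for v in (True, False):
    label = "vaccinated" if v else "unvaccinated"
    print(f"{label}: mortality = {deaths[v] / counts[v]:.2%}")
# Typical output: roughly 2.8% mortality among the vaccinated versus roughly 5%
# among the unvaccinated, despite a vaccine with zero true effect.

An observational comparison of those two groups would credit the vaccine with nearly halving mortality, which is exactly the kind of artifact the authors are warning about.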
After three studies I am reasonably convinced that efficacy of the flu vaccine in the elderly is potentially not as well understood as I had thought.
So do I think the flu vaccine is no longer useful in the elderly? No. I still think it is a reasonable intervention, but it may not have the efficacy I would like. But I have always known that, for a variety of reasons, the flu vaccine is not a great vaccine. But it is better than no vaccine. There are, as discussed in the earlier post on the vaccine, many lines of evidence to show that the flu vaccine has benefit; at issue is the degree of the benefit. Perhaps what is needed is a better vaccine with adjuvants or multiple injections to get a better result in the elderly, who respond poorly to the vaccine. Or perhaps it will be better to focus on increasing vaccination in those who care for or have contact with the elderly. But when I talk to my patients and residents, when I get to the part about flu vaccine efficacy, I will be a little more nuanced and use more qualifiers. I will tell them that the vaccine is like seat belts. It does not prevent all death and injury, but if you had a choice, would you not choose to use seat belts?
In the end the data has to change the way I think about medicine, no matter how much it hurts.
Compare and contrast that with the anti-vaxers who have the belief that vaccines cause autism.  They look for data to support the pre-existing belief and ignore contrary data.  Opinion does not follow from data.
The most representative statement of their approach is on the 14 studies website, where they say, “We gave this study our highest score because it appears to actually show that MMR contributes to higher autism rates.”
That is the key phrase for the whole site. Data that supports their position is good; data that does not is bad. What makes a study good is not its methodology or its rigor, or its reproducibility, or its biologic plausibility, but whether it supports vaccines causing autism.
Dr. Wakefield, as has been noted over the last week, had his MMR/autism paper withdrawn from The Lancet not for bad science, but for dishonest science. In medicine you can be wrong, but you cannot lie. If the results of medical papers were shown to be fabrications, such as the papers of Scott S. Reuben, no one in the medical field would defend the results. Dr. Reuben, as you may remember (https://www.sciencebasedmedicine.org/?p=408), was found to have fabricated multiple studies on the treatment of pain. Nowhere can I find web sites defending his faked research. No suggestions it was due to a conspiracy of big pharma to hide the truth. No assertions that he is still a physician of great renown. He lied and is consigned to ignominy. Physicians who used his papers as a basis of practice no longer do so, or so I would hope.
The response to Dr. Reuben is in striking contrast to the defense of Dr. Wakefield, where bad research combined with unethical behavior results in reactions like this:
“It is our most sincere belief that Dr. Wakefield and parents of children with autism around the world are being subjected to a remarkable media campaign engineered by vaccine manufacturers reporting on the retraction of a paper published in The Lancet in 1998 by Dr. Wakefield and his colleagues.
The retraction from The Lancet was a response to a ruling from England’s General Medical Council, a kangaroo court where public health officials in the pocket of vaccine makers served as judge and jury. Dr. Wakefield strenuously denies all the findings of the GMC and plans a vigorous appeal.”
Opinions did not change when the Wakefield paper was demonstrated to be not just wrong but false, the researcher’s behavior unethical, and the study irreproducible using similar methodologies (http://www.plosone.org/article/info:doi%2F10.1371%2Fjournal.pone.0003140). Instead, the defense of Dr. Wakefield became, well, like a Jim Carrey shtick: The Mask defends retracted autism research. Fire Marshall Bill on the medical literature. Jenny and Jim’s defense does make more sense read as comic performance art. Andy Kaufman would have been proud.
I wonder if the more grounded in fiction an opinion is, the harder it is to change, the more difficult it is to admit error.  I have to admit I cannot wrap my head around the ability of people to deny reality.  It is the old Groucho line come to life, “Who are you going to believe, science or your lying eyes?”
So I will, I hope, keep changing my mind as new information comes in. It is what separates real health care providers from acupuncturists and homeopaths and naturopaths and anti-vaxers. It is what some truly great minds admit to doing (http://www.edge.org/q2008/q08_index.html). As one deeper thinker and better writer (http://www.emersoncentral.com/selfreliance.htm) than I said, kind of,
“The other terror that scares us from self-trust is our consistency; a reverence for our past act or word, because the eyes of others have no other data for computing our orbit than our past acts, and we are loath to disappoint them.
But why should you keep your head over your shoulder? Why drag about this corpse of your memory, lest you contradict somewhat you have stated in this or that public place? Suppose you should contradict yourself; what then? It seems to be a rule of wisdom never to rely on your memory alone, scarcely even in acts of pure memory, but to bring the past for judgment into the thousand-eyed present, and live ever in a new day. In your metaphysics you have denied personality to the Deity: yet when the devout motions of the soul come, yield to them heart and life, though they should clothe God with shape and color. Leave your theory, as Joseph his coat in the hand of the harlot, and flee.
A foolish consistency is the hobgoblin of little minds, adored by little statesmen and philosophers and divines and anti-vaxers. With consistency a great soul has simply nothing to do. He may as well concern himself with his shadow on the wall. Speak what you think now in hard words, and to-morrow speak what to-morrow thinks in hard words again, though it contradict every thing you said to-day. — ‘Ah, so you shall be sure to be misunderstood.’ — Is it so bad, then, to be misunderstood?”


Posted in: Clinical Trials, Science and Medicine, Vaccines

Leave a Comment (28) →

Yes, Jacqueline: EBM ought to be Synonymous with SBM

“Ridiculing RCTs and EBM”

Last week Val Jones posted a short piece on her BetterHealth blog in which she expressed her appreciation for a well-known spoof that had appeared in the British Medical Journal (BMJ) in 2003:

Parachute use to prevent death and major trauma related to gravitational challenge: systematic review of randomised controlled trials

Dr. Val included the spoof’s abstract in her post linked above. The parachute article was intended to be humorous, and it was. It was a satire, of course. Its point was to call attention to excesses associated with the Evidence-Based Medicine (EBM) movement, especially the claim that in the absence of randomized, controlled trials (RCTs), it is not possible to comment upon the safety or efficacy of a treatment—other than to declare the treatment unproven.

A thoughtful blogger who goes by the pseudonym Laika Spoetnik took issue both with Val’s short post and with the parachute article itself, in a post entitled #NotSoFunny – Ridiculing RCTs and EBM.

Laika, whose real name is Jacqueline, identifies herself as a PhD biologist whose “work is split 75%-25% between two jobs: one as a clinical librarian in the Medical Library and one as a Trial Search Coordinator (TSC) for the Dutch Cochrane Centre.” In her post she recalled an experience that would make anyone’s blood boil:

I remember it well. As a young researcher I presented my findings in one of my first talks, at the end of which the chair killed my work with a remark that made the whole room of scientists laugh, but was really beside the point…

This was not my only encounter with scientists who try to win the debate by making fun of a theory, a finding or …people. But it is not only the witty scientist who is to *blame*, it is also the uncritical audience that just swallows it.

I have similar feelings with some journal articles or blog posts that try to ridicule EBM – or any other theory or approach. Funny, perhaps, but often misunderstood and misused by “the audience”.

Jacqueline had this to say about the parachute article:

I found the article only mildly amusing. It is so unrealistic, that it becomes absurd. Not that I don’t enjoy absurdities at times, but absurdities should not assume a life of their own.  In this way it doesn’t evoke a true discussion, but only worsens the prejudice some people already have.

(more…)

Posted in: Clinical Trials, Medical Academia, Medical Ethics, Science and Medicine

Leave a Comment (110) →

On the “individualization” of treatments in “alternative medicine”

One of the claims most frequently made by “alternative medicine” advocates regarding why alt-med is supposedly superior (or at least equal) to “conventional” medicine and should not be dismissed, regardless of how scientifically improbable any individual alt-med modality may be, is that the treatments are, if you believe many of the practitioners touting them, highly “individualized.” In other words, the “entire patient” is taken into account with what is frequently referred to as a “holistic approach” that looks at “every aspect” of the patient, with the result that every patient requires a different treatment, sometimes even for the exact same disease at very nearly the same severity. Indeed, as I have described before, a variant of this claim, often laden with meaningless pseudoscientific babble about “emergent systems,” is sometimes used to argue that the standard methods of science- and evidence-based medicine are not appropriate for studying the efficacy of alternative medicine. Of course, this is, in nearly all cases, simply an excuse to dismiss scientific studies that fail to find efficacy for various “alt-med” modalities, but, even so, it is a claim that irritates me to no end, because it is so clearly nonsense. As Harriet Hall pointed out, alt-med “practitioners” frequently ascribe One True Cause to All Disease, which is about as far from “individualization” as you can get, when you come right down to it. More on that later.

A couple of years ago, before I became involved with this blog, I was surprised to learn that even some advocates of alt-med have their doubts that “individualization” is such a great strength. I had never realized that this might be the case until I came across a post by naturopath Travis Elliott, who runs a pro-alt-med blog, Dr. Travis Elliott and the Two-Sided Coin, entitled The Single Most Frustrating Thing About (Most) Alternative Medicine. In this article, Elliott referred to a case written up by a fellow naturopath, who used an anecdote about a naturopath’s evaluation and treatment plan for a pregnant woman with nausea to show what is supposedly the “unique power of our medicine.” Unexpectedly (to me, at least, at the time), Elliott did not quite see it that way:
(more…)

Posted in: Clinical Trials, Energy Medicine, Homeopathy, Science and Medicine

Leave a Comment (46) →

The life cycle of translational research

ResearchBlogging.orgI’m a translational researcher. To those of you who aren’t familiar with what that means, it means (I hope) that I study potential therapies in the lab and try to translate them into actual therapies that will cure patients of breast cancer — or, at the very least, improve their odds of survival or prolong survival when cure is not possible. Translational research is extremely important; indeed, it is the life blood of science-based medicine, with basic science producing the discoveries and clinical research the applications of these discoveries. When it works, it’s the way that science leads medicine to advance. However, sometimes I think that it’s a bit oversold. For one thing, it’s not easy, and it’s not always obvious what basic science findings can be translated into useful therapies, be it for cancer (my specialty) or any other disease. For another thing, it takes a long time. The problem is that the hype about how much we as a nation invest in translational research all too often leads to a not unreasonable expectation that there will be a rapid return on that investment. Such an expectation is often not realized, at least not as fast and frequently as we would like, and the reason has little to do with the quality of the science being funded. It has arguably more to do with how long it takes for a basic science observation to follow the long and winding road to producing a viable therapy. But how long is that long and winding road?

A lot longer than many, even many scientists, realize. At least, that’s the case if a paper from about a year ago by John Ioannidis in Science is any indication. The article appeared in the Policy Forum of the September 5 issue and is entitled Life Cycle of Translational Research for Medical Interventions. As you may recall, Dr. Ioannidis made a name for himself a couple of years ago by publishing a pair of articles provocatively entitled Contradicted and Initially Stronger Effects in Highly Cited Clinical Research and Why Most Published Research Findings Are False, which Steve Novella blogged about at the time.

Dr. Ioannidis lays it out right in the first paragraph:
(more…)

Posted in: Clinical Trials, Science and Medicine

Leave a Comment (5) →

Acupuncture, the P-Value Fallacy, and Honesty

Credibility alert: the following post contains assertions and speculations by yours truly that are subject to, er, different interpretations by those who actually know what the hell they’re talking about when it comes to statistics. With hat in hand, I thank reader BKsea for calling attention to some of them. I have changed some of the wording—competently, I hope—so as not to poison the minds of less wary readers, but my original faux pas are immortalized in BKsea’s comment.

Lies, Damned Lies, and…

A few days ago my colleague, Dr. Harriet Hall, posted an article about acupuncture treatment for chronic prostatitis/chronic pelvic pain syndrome. She discussed a study that had been performed in Malaysia and reported in the American Journal of Medicine. According to the investigators,

After 10 weeks of treatment, acupuncture proved almost twice as likely as sham treatment to improve CP/CPPS symptoms. Participants receiving acupuncture were 2.4-fold more likely to experience long-term benefit than were participants receiving sham acupuncture.

The primary endpoint was to be “a 6-point decrease in NIH-CPSI total score from baseline to week 10.” At week 10, 32 of 44 subjects (73%) in the acupuncture group had experienced such a decrease, compared to 21 of 45 subjects (47%) in the sham acupuncture group. Although the authors didn’t report these statistics per se, a simple “two-proportion Z-test” (Minitab) yields the following:

Sample   X    N    Sample p
1        32   44   0.727273
2        21   45   0.466667

Difference = p (1) – p (2)
Estimate for difference: 0.260606
95% CI for difference: (0.0642303, 0.456982)
Test for difference = 0 (vs not = 0): Z = 2.60  P-Value = 0.009
Fisher’s exact test: P-Value = 0.017

Wow! A P-value of 0.009! That’s some serious statistical significance. Even Fisher’s more conservative “exact test” yields a P-value substantially less than the 0.05 that we’ve come to associate with “rejecting the null hypothesis,” which in this case is that there was no difference in the proportion of subjects who had experienced a 6-point decrease in NIH-CPSI scores at 10 weeks. Surely there is a big difference between getting “real” acupuncture and getting sham acupuncture if you’ve got chronic prostatitis/chronic pelvic pain syndrome, and this study proves it!
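For readers who want to check these numbers without Minitab, here is a minimal sketch in Python (my own, assuming SciPy is available, not part of the original post) that reproduces the unpooled two-proportion Z-test and the Fisher exact test from the reported counts.

from math import sqrt
from scipy.stats import norm, fisher_exact

# Responders / group sizes as reported in the study
x1, n1 = 32, 44   # acupuncture
x2, n2 = 21, 45   # sham acupuncture
p1, p2 = x1 / n1, x2 / n2

# Two-proportion Z-test with an unpooled standard error (this matches the
# Minitab output above; a pooled-SE version would give a slightly smaller Z)
se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
z = (p1 - p2) / se
p_value = 2 * norm.sf(abs(z))                   # two-sided
print(f"Difference = {p1 - p2:.6f}")
print(f"95% CI: ({p1 - p2 - 1.96 * se:.6f}, {p1 - p2 + 1.96 * se:.6f})")
print(f"Z = {z:.2f}, P-Value = {p_value:.3f}")  # Z = 2.60, P = 0.009

# Fisher's exact test on the 2x2 table of responders vs. non-responders
_, p_fisher = fisher_exact([[x1, n1 - x1], [x2, n2 - x2]])
print(f"Fisher's exact test: P-Value = {p_fisher:.3f}")  # about 0.017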

(more…)

Posted in: Acupuncture, Clinical Trials, Science and Medicine

Leave a Comment (22) →