
Human subjects protections and research ethics: Where the rubber hits the road for science-based medicine

Arguably the most difficult aspect of science-based medicine is where the rubber hits the road, so to speak. That’s where scientists and physicians take the results of preclinical studies performed in vitro in biochemical assays and cell culture models and in vivo in animal models to humans. There are numerous reasons for this, not the least of which is that preclinical models, contrary to what animal rights activists would like you to believe, do not predict human responses to new therapeutic agents as well as we would like. However, the single biggest reason that we cannot answer questions in human studies as easily as we can in cell culture and animal studies is ethics. Of course, answering questions using cell culture and animal studies is not “easy,” either, but performing studies using human beings as subjects is an order of magnitude (at least) more difficult because the potential to cause harm exists, and if harm is caused by the experimental treatment under study, that harm will be done to human beings, rather than to cells in a dish or mice bred for research.

The “gold standard” type of study that we do to test the efficacy of a new drug is the randomized, placebo-controlled, double-blinded study, often abbreviated RCT. Indeed, this remains the gold standard and is accorded the highest level of “power” in the framework of evidence-based medicine. Of course, as we have argued time and time again, using the RCT to test therapies that are incredibly implausible on a strictly scientific basis (homeopathy or reiki, for instance) inevitably leads to numerous “false positives” in which the therapy appears to produce results statistically significantly better than the control. John Ioannidis has done numerous clever analyses demonstrating how easily clinical research is led astray when it is not grounded in scientific plausibility. Indeed, the more improbable the modality, the higher the probability of false positive studies. It is for these very reasons that we have proposed the concept of science-based medicine, which takes into account estimates of prior probability based on preclinical studies and basic scientific principles, rather than evidence-based medicine, which does not. Indeed, Wally Sampson has even proposed a “plausibility scale” for rating RCTs, and Steve Novella has pointed out how difficult it can be to interpret the medical literature.
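To see concretely why implausibility inflates the false positive problem, consider a toy Bayesian calculation. The function and the prior probabilities below are purely illustrative (my own made-up numbers, not drawn from any actual trial data); they simply apply Bayes’ theorem to a single “statistically positive” trial:

```python
# Toy illustration: how prior plausibility affects what a "positive" RCT means.
# Assumes alpha = 0.05 (false positive rate) and power = 0.80 (true positive
# rate); the prior probabilities passed in below are illustrative only.

def prob_effect_is_real(prior, alpha=0.05, power=0.80):
    """P(therapy really works | trial was statistically positive), via Bayes."""
    true_pos = power * prior          # real effects correctly detected
    false_pos = alpha * (1.0 - prior) # null effects wrongly "detected"
    return true_pos / (true_pos + false_pos)

# A drug with solid preclinical support: say a 50% prior chance of a real effect.
plausible = prob_effect_is_real(prior=0.50)     # ~0.94

# A modality with essentially no plausible mechanism: say a 1-in-1000 prior.
implausible = prob_effect_is_real(prior=0.001)  # ~0.016

print(f"plausible drug:     {plausible:.2f}")
print(f"implausible remedy: {implausible:.3f}")
```

With identical significance thresholds and statistical power, a “positive” trial of a well-grounded drug is about 94% likely to reflect a real effect, while a “positive” trial of a highly implausible modality remains overwhelmingly likely to be a false positive.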

Leaving aside all of these issues, to which a whole series of posts could be devoted (and, indeed, to which a whole series of posts has been devoted by Kimball Atwood, Steve Novella, and Wally Sampson), I want to look at the ethical precepts that guide clinical trials and why they render it much more difficult to answer some questions. To the shame of the medical profession, these ethical precepts are a relatively recent phenomenon dating back only a few decades at most. Some of the impetus for these rules derived, not surprisingly, from the horrific medical experiments that the Nazis and Japanese performed on prisoners during World War II. Virtually everyone’s heard, for instance, of Dr. Mengele and his experiments on twins, but there was more–so much more–that the Nazis did, including testing the effects of immersion in ice water to simulate what sailors forced to abandon ship or pilots shot down over the ocean might experience and to test ways to rewarm them and save their lives; placing prisoners in vacuum chambers to test the effects of high altitude (which had obvious applications to how their pilots could deal with high altitude); sewing glass, foreign objects, or gauze impregnated with bacteria into wounds in order to test means of preventing and treating gangrene; smashing prisoners’ bones with a hammer in order to simulate war wounds more realistically; and aiming radiation at the ovaries in order to quickly sterilize Jewish women and other “undesirables,” at the cost of horrific radiation injury to the surrounding bowel because the technology to aim radiation beams was then very crude.
The Japanese were no slouches at depraved medical experiments, either, performing “dissections” on prisoners while they were still alive; testing phosgene gas on prisoners; decapitating prisoners to test the sharpness of blades; amputating limbs to study shock due to hemorrhage; performing gastrectomies and testing reattachments of the esophagus to various parts of the intestinal tract; and testing the effect of various pathogenic microorganisms on prisoners to see what inoculum was necessary to cause disease.

Then there was the Tuskegee syphilis experiment, formally known as The Tuskegee Study of Untreated Syphilis in the Negro Male. Carried out between 1932 and 1972 by the U.S. Public Health Service, the study followed 399 African-American men with syphilis in order to determine the effects of untreated syphilis in black men as opposed to white men, the hypothesis being that whites experienced more neurological complications from syphilis, whereas blacks were thought to be more susceptible to cardiovascular damage. What horrified the nation in 1972, when details of the experiment were first reported by Jean Heller of the Associated Press, was that the study had continued after the 1940s, when penicillin had been validated as curative for syphilis. The experiment also involved considerable deception:

By the end of the experiment, 28 of the men had died directly of syphilis, 100 were dead of related complications, 40 of their wives had been infected, and 19 of their children had been born with congenital syphilis. How had these men been induced to endure a fatal disease in the name of science?

To persuade the community to support the experiment, one of the original doctors admitted it “was necessary to carry on this study under the guise of a demonstration and provide treatment.” At first, the men were prescribed the syphilis remedies of the day —bismuth, neoarsphenamine, and mercury— but in such small amounts that only 3 percent showed any improvement.

These token doses of medicine were good public relations and did not interfere with the true aims of the study. Eventually, all syphilis treatment was replaced with “pink medicine” —aspirin.

To ensure that the men would show up for a painful and potentially dangerous spinal tap, the PHS doctors misled them with a letter full of promotional hype: “Last Chance for Special Free Treatment.” The fact that autopsies would eventually be required was also concealed.

As a doctor explained, “If the colored population becomes aware that accepting free hospital care means a post-mortem, every darky will leave Macon County…” Even the Surgeon General of the United States participated in enticing the men to remain in the experiment, sending them certificates of appreciation after 25 years in the study.

It wasn’t until 25 years after the ethics violations of the Tuskegee syphilis experiment were revealed that the U.S. government acknowledged how wrong the experiment had been and then-President Clinton formally apologized.

In response to these sorts of abuses, efforts were undertaken, both in the U.S. and internationally, to codify ethical principles that would prevent such atrocities. In the aftermath of World War II, the first major effort to regulate medical research was the Nuremberg Code, which was developed from the verdict of the Doctors’ Trial in 1947. There were ten points:

  1. The voluntary consent of the human subject is absolutely essential. This means that the person involved should have legal capacity to give consent; should be so situated as to be able to exercise free power of choice, without the intervention of any element of force, fraud, deceit, duress, over-reaching, or other ulterior form of constraint or coercion; and should have sufficient knowledge and comprehension of the elements of the subject matter involved as to enable him to make an understanding and enlightened decision. This latter element requires that before the acceptance of an affirmative decision by the experimental subject there should be made known to him the nature, duration, and purpose of the experiment; the method and means by which it is to be conducted; all inconveniences and hazards reasonable to be expected; and the effects upon his health or person which may possibly come from his participation in the experiment. The duty and responsibility for ascertaining the quality of the consent rests upon each individual who initiates, directs or engages in the experiment. It is a personal duty and responsibility which may not be delegated to another with impunity.
  2. The experiment should be such as to yield fruitful results for the good of society, unprocurable by other methods or means of study, and not random and unnecessary in nature.
  3. The experiment should be so designed and based on the results of animal experimentation and a knowledge of the natural history of the disease or other problem under study that the anticipated results will justify the performance of the experiment.
  4. The experiment should be so conducted as to avoid all unnecessary physical and mental suffering and injury.
  5. No experiment should be conducted where there is a prior reason to believe that death or disabling injury will occur; except, perhaps, in those experiments where the experimental physicians also serve as subjects.
  6. The degree of risk to be taken should never exceed that determined by the humanitarian importance of the problem to be solved by the experiment.
  7. Proper preparations should be made and adequate facilities provided to protect the experimental subject against even remote possibilities of injury, disability, or death.
  8. The experiment should be conducted only by scientifically qualified persons. The highest degree of skill and care should be required through all stages of the experiment of those who conduct or engage in the experiment.
  9. During the course of the experiment the human subject should be at liberty to bring the experiment to an end if he has reached the physical or mental state where continuation of the experiment seems to him to be impossible.
  10. During the course of the experiment the scientist in charge must be prepared to terminate the experiment at any stage, if he has probable cause to believe, in the exercise of the good faith, superior skill and careful judgment required of him that a continuation of the experiment is likely to result in injury, disability, or death to the experimental subject.

(Taken from the Wikipedia entry reprinted from Trials of War Criminals before the Nuremberg Military Tribunals under Control Council Law No. 10, Vol. 2, pp. 181-182. Washington, D.C.: U.S. Government Printing Office, 1949.)

Next came the Declaration of Helsinki, developed by the World Medical Association in 1964, and The Belmont Report in the United States, each of which in essence expanded upon the principles of the Nuremberg Code (more on how later). In the aftermath of the Tuskegee experiment and in response to other ethical violations in human subjects research, the former United States Department of Health, Education, and Welfare (since renamed the Department of Health and Human Services) produced a document entitled “Ethical Principles and Guidelines for the Protection of Human Subjects of Research,” more commonly called The Belmont Report after the Belmont Conference Center, where the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research (1974-1978) met to draft the report.

The Belmont Report identified three essential aspects of any human subjects research:

  1. Respect for persons. This principle demands that persons be treated as autonomous agents with self-determination and that persons unable to exercise autonomous self-determination (children, the mentally ill) or who could be easily coerced (prisoners) are entitled to protections for their interests. In brief, this means that research subjects must be accorded full understanding of the risks and potential benefits of participating in a research study, as well as full autonomy (i.e., the right to refuse to participate without fear of repercussions or that their medical care may be compromised).
  2. Beneficence. This principle can be boiled down to (1) do not harm and (2) maximize possible benefits and minimize possible harms. Thus, the scientific design of the experiment must not be excessively risky for participants and must have sound science behind it suggesting that a therapy is likely to be of benefit.
  3. Justice. Research must be fair, including (and potentially benefiting) as many groups as possible. Key quote: “For example, the selection of research subjects needs to be scrutinized in order to determine whether some classes (e.g., welfare patients, particular racial and ethnic minorities, or persons confined to institutions) are being systematically selected simply because of their easy availability, their compromised position, or their manipulability, rather than for reasons directly related to the problem being studied. Finally, whenever research supported by public funds leads to the development of therapeutic devices and procedures, justice demands both that these not provide advantages only to those who can afford them and that such research should not unduly involve persons from groups unlikely to be among the beneficiaries of subsequent applications of the research.”

Based on the Belmont Report, the U.S. government eventually formulated the Common Rule (full DHHS regulations here), which mandates that all federally funded human research, as well as all human research designed to obtain FDA approval, be overseen by committees known as Institutional Review Boards (IRBs), independent bodies that ensure human subjects are protected in accordance with the Belmont Report and all applicable federal regulations.

Finally, the Helsinki Declaration represents an ongoing international effort to come up with a set of ethical principles to guide human subjects research. It is not a legally binding treaty but nonetheless now serves as the cornerstone document of human research ethics. Despite its not being codified as a treaty, the Helsinki Declaration draws its power and authority from how much of it has been codified into legislation and regulations in a large number of countries. The first version, released in 1964, promulgated ten principles based upon the original ten points of the Nuremberg Code. Since then the Helsinki Declaration has undergone six revisions, the most recent in October 2008. The first revision, in 1975, introduced the concept of an independent oversight board for all human subjects research, which influenced the requirement for an IRB in the Common Rule in the United States. In its current form, the Helsinki Declaration includes additional protections that are relevant to this blog:

12. Medical research involving human subjects must conform to generally accepted scientific principles, be based on a thorough knowledge of the scientific literature, other relevant sources of information, and adequate laboratory and, as appropriate, animal experimentation. The welfare of animals used for research must be respected.

We have discussed this time and time again on SBM: the concept of prior probability. It is the very reason that so much research on complementary and alternative medicine (CAM) is, at the very least, of dubious ethics, especially for highly improbable modalities like homeopathy and reiki. The reason is that testing these modalities on human subjects does not conform to generally accepted scientific principles and is not based on proper preclinical experimentation. Indeed, a discussion of this very issue has broken out in the comments of a post here at SBM about a trial of massage therapy for HIV/AIDS, and I expressed grave concern over the ethics of a trial of homeopathy for infectious childhood diarrheal diseases in a Third World country.

More importantly, here’s one reason why placebo-controlled research is becoming more and more uncommon:

32. The benefits, risks, burdens and effectiveness of a new intervention must be tested against those of the best current proven intervention, except in the following circumstances:

  • The use of placebo, or no treatment, is acceptable in studies where no current proven intervention exists; or
  • Where for compelling and scientifically sound methodological reasons the use of placebo is necessary to determine the efficacy or safety of an intervention and the patients who receive placebo or no treatment will not be subject to any risk of serious or irreversible harm. Extreme care must be taken to avoid abuse of this option.

This is the very reason why most clinical trials now examine the addition of a new drug to the standard of care, or a new drug versus the standard of care, rather than using a true placebo control. Exceptions occur for self-limited conditions, such as hot flashes during menopause or headache therapies, but not for more serious conditions and certainly not for life-threatening conditions like cancer. Indeed, very few cancer therapy trials have a placebo control.

Considering the totality of ethical considerations that must be applied to human subjects research, it is easy to understand why we cannot always do the “definitive” experiment to answer a scientific question about the efficacy of a new therapy. For example, in surgical trials, with rare exceptions, it is not ethically acceptable to do “sham” surgery on a control group because that subjects patients to all risks and no benefit, as I have discussed before for trials of total mesorectal excision for rectal cancer and a clinical trial testing the routine use of nasogastric tubes for GI surgery.

That’s not to say that sham surgery controls haven’t been used before. Indeed, they have. For example, a randomized, sham surgery-controlled trial is how it was originally determined that poudrage for revascularizing the heart and treating angina was no better than placebo. First developed in the 1930s, poudrage was based on the concept that irritating the pericardium would provoke an inflammatory response leading to an influx of new blood vessels into the heart muscle; the irritation was accomplished by opening the chest and sprinkling sterile talcum powder into the pericardial sac surrounding the heart. (In the era before heart-lung bypass technology, revascularizing the heart directly was not an option.) As is the case for so many CAM modalities, the results, as far as relieving the symptoms of angina went, looked good, at least in uncontrolled studies, and poudrage was all the rage as a treatment for angina for at least a decade. Then, in the 1950s, a randomized trial was performed with a sham surgery control arm in which one group of patients had their chests opened but no powder sprinkled into the pericardial sac around the heart, while the other group got standard poudrage. There was no difference in symptoms or other measures of cardiac function between the two groups. (Surgery itself is a very powerful placebo.) This was one of the first and most famous randomized trials of a surgical intervention with sham surgery as the control. Poudrage was abandoned and is now relegated to a historical curiosity, and it was not the only procedure for angina pectoris found, when actually tested, to owe its effects primarily to the placebo effect of surgery.

As you may imagine, based on the latest iteration of the Helsinki Declaration, it is unlikely–to say the least–that such a study would be approved by an IRB in the U.S., although two more recent trials have been done, as this excellent discussion of the ethics of sham surgery in surgical clinical trials mentions:

There have been very few sham controlled surgical trials to date. In 1959, Cobb published a study showing no difference in improvement between patients undergoing internal mammary artery ligature versus a sham operation for treatment of angina pectoris. In recent years, two studies were done to evaluate the intracranial implantation of fetal neural cells for Parkinson’s disease. These studies had some of the study patients randomized to a sham operation that required simulating all aspects of the surgery, including the drilling of burr holes on the skull under anesthesia. In the field of orthopaedics, Moseley et al. in 2002 evaluated the effectiveness of arthroscopic surgery for arthritis of the knee. In this study, one group received a full arthroscopic debridement, one group underwent arthroscopic lavage with irrigation fluid alone, and the last group had three one-centimeter sham incisions but no actual procedure performed. This study concluded that arthroscopic surgeries done for advanced arthritis were no more effective than the sham operation.

The ethical rules laid down by the combination of the Nuremberg Code, the Belmont Report, and the Helsinki Declaration are also the reason why one of the latest talking points of antivaccinationists, a “randomized” study of unvaccinated versus vaccinated children, is ethically impermissible. The reason is obvious: it would require that the children in the control group be intentionally left completely unprotected against vaccine-preventable diseases. There would also be the practical issue that antivaccine parents wouldn’t sign up for such a study, because their irrational fear of vaccination would prevent them from agreeing to the possibility of randomization to the vaccination group, while parents grounded in science-based medicine would never be foolish enough to allow their children to be randomized to the no-vaccine placebo group. Indeed, not too long ago an antivaccine commenter on this very blog proposed just such a study, and I told him in no uncertain terms why it would be unethical.

Meanwhile, true to form, after making a brain-meltingly specious analogy that he seems to think is a slam-dunk, Generation Rescue founder and anti-vaccine zealot extraordinaire J.B. Handley makes a huge straw man out of the ethical and scientific arguments against doing a “vaxed versus unvaxed” study:

  • Looking at unvaccinated kids is either impossible (due to confounders) or unethical
  • Looking at adverse events based on giving children 6 vaccines, the way we do in the actual schedule, has never happened and never will, because of the “Helsinki Declaration”

No one ever said that comparing the rates of neurodevelopmental disorders like autism in unvaccinated versus vaccinated children is impossible or unethical. What we argue is that looking at unvaccinated children in the manner demanded by anti-vaccine zealots like J.B. Handley (the randomized, placebo-controlled “vaxed versus unvaxed” study some of them demand) is unethical, and that controlling for confounders in observational comparisons is far more difficult than those as full of the arrogance of ignorance as J.B. Handley assume. Indeed, Prometheus has written three excellent posts (of the three, the third is the most detailed) describing what it would take to do a scientifically and epidemiologically rigorous study comparing neurodevelopmental outcomes in vaccinated versus unvaccinated children. Suffice it to say that such a rigorous study is not what the antivaccine movement wants and that doing it would be far more complicated, expensive, and difficult than the simplistic view of our commenter or of J.B., who seems to view ethical concerns as nothing more than made-up impediments to his getting his way rather than what they are: legitimate rules designed to protect research subjects. I also suspect that J.B. knows this but keeps demanding such a study because to the ignorant it gives him the appearance of occupying the scientific high ground. An alternate explanation is that anti-vaccine zealots truly believe vaccines are so harmful that it is more ethical not to administer them than to administer them; in other words, to them, the unvaccinated group would be the one less likely to incur harm.

As I’ve said time and time again, where the “rubber meets the road” (so to speak) in science-based medicine is the clinical trial. Part of the ethical framework governing clinical trials, in my view and, I daresay, the views of my co-bloggers, is that before subjecting any human being to the potential risks of a trial, there must be nothing but the most rigorous science supporting the hypothesis to be tested, backed up by preclinical data in the form of biochemistry, cell culture, and animal studies. In addition, because the subjects of such trials are human beings, even when the hypothesis being tested is scientifically compelling and well supported, the welfare of the subjects must take precedence over rigorous scientific design whenever the two conflict. Over 100 years of surgical research show that it is possible to produce compelling evidence for the efficacy of a surgical procedure without a sham surgery placebo control group and double-blinding of the participants (given that it is nearly always impossible to blind the operating surgeon to the operation performed), but it is much more difficult: non-blinded prospective trials, or sometimes retrospective studies, must be used, requiring many more trials to show a pattern that can overcome the confounding factors inherent in such designs.

Similarly, when it comes to issues like vaccination, although we do rigorously test each new vaccine added to the current recommended vaccination schedule in accordance with the Helsinki Declaration precepts, we cannot ethically do a randomized, double-blind, placebo-controlled trial of “vaccinated versus unvaccinated” children, because that would involve leaving the control group unprotected against vaccine-preventable disease, failing to provide the current standard of care as the minimum that research subjects receive. Such a trial would also fail ethically in another way: because the hypothesis that vaccines cause autism is not supported by the weight of current evidence, the trial would be testing a scientifically dubious (at best) hypothesis, which also goes against the Helsinki Declaration. What we are left with, then, are retrospective studies or prospective non-randomized trials requiring large numbers of subjects and careful controls for confounders. The good news is that enough of these studies can be as convincing as the gold standard. Indeed, critics of the RCT sometimes argue that sound epidemiological studies, properly done, can in some cases replace the need for RCTs.

As trite as it may seem, what this all boils down to is the observation that science-based medicine is hard. Even leaving aside the question of how to come to a conclusion regarding the best treatment for a patient based on science, answering a single question requires not just loads of preclinical data from cell culture and animal models, but well-designed clinical trials. And a well-designed clinical trial is not enough; it has to be ethical, and bioethics has in general been evolving toward ever more protection of human research subjects. Indeed, I’m reminded of the HBO movie Something the Lord Made (an excellent, albeit somewhat conventional and predictable, film about Alfred Blalock, one of the “gods” of surgery, and Vivien Thomas, the poor black carpenter whom he hired as an assistant and whose surgical skills were invaluable to Blalock in achieving what he did). The movie dramatized surgical research in the 1940s, the development of the Blalock-Taussig shunt, and the relationship between Blalock and Thomas. Watching it, I remember being amazed at how little discussion and how few permissions it took for Dr. Blalock to try out his procedure on infants. Of course, back then there was no procedure to save such infants, who inevitably died, and Blalock and Thomas had worked out the operation in dogs, but the difference between then and now was striking. Indeed, studies that were approved 20 or even 10 years ago might not be approved today.

Although no one, least of all I, would argue that human subjects protections are yet everything they should be (which may be the topic for a future post), I find that evolution comforting.

Posted in: Clinical Trials, Medical Ethics, Politics and Regulation, Surgical Procedures, Vaccines


19 thoughts on “Human subjects protections and research ethics: Where the rubber hits the road for science-based medicine”

  1. Peter Lipson says:

    Dude, this is a seriously cool post.

  2. Harriet Hall says:

    “arthroscopic surgeries done for advanced arthritis were no more effective than the sham operation.”

    Compare to recent acupuncture studies where they concluded that if true acupuncture is no more effective than sham acupuncture, that must mean that the sham procedure is effective too. If we followed their faulty reasoning, we would conclude that sham surgery is effective and we would keep doing it. And it is effective in eliciting a placebo response. I read that one subject in the knee arthritis study was told after the study that he had been in the sham surgery group but he still refers to it as “the operation that cured me.”

  3. TsuDhoNimh says:

    http://www.azcentral.com/community/glendale/articles/2009/04/27/20090427seizure0427.html

    It’s a double-blind, placebo controlled (sort of) study of seizure drugs given by paramedics. Obviously it’s unethical to not give the drugs at all, and what they are comparing is how well IM (faster) works compared to IV.

    Each patient will receive two types of medicine: One will be administered intravenously – in the vein – and one will be administered intramuscular – in the muscle.

    In each case, one method will deliver an active medication and one method will deliver a placebo.

    Paramedics will not know which medicine is active and which is a placebo, so the two medications will be administered simultaneously.

  4. TsuDhoNimh says:

    Faster – I meant faster to administer.

  5. pec says:

    [using the RCT to test therapies that are incredibly implausible on a strictly scientific basis (homeopathy or reiki, for instance) inevitably leads to numerous “false positives” ]

    One person’s implausible is another person’s plausible. How could you possibly think absolute criteria for plausibility could ever be agreed on? This is just a tactic to prevent alternative theories from being tested.

  6. cheglabratjoe says:

    pec,

    Surely you’re kidding, right? No one thinks anything like “absolute criteria” for plausibility could ever be known. You’d need to know everything about everything beforehand to make a judgment like that.

    But, lack of absolute plausibility doesn’t mean we can’t make a good estimate of plausibility. If there’s an antibiotic engineered to attack a pathogen that works in animal models, the probability that it’ll work in humans is pretty good. If someone ignores the last few centuries of scientific progress and thinks we should induce vomiting to balance people’s humours, the probability that it’ll work is extremely low.

    Moreover, if you’re correct that all plausibility is relative, how do you propose that we decide where to allocate research dollars? After all, I’m sure we could find someone who thinks the four humours are real and would love some NIH coin! You wouldn’t dare just use some tactic to prevent his/her research, would you?

  7. Eric Jackson says:

    Plausibility would be a body of established literature showing that there is a means by which an intervention should work in humans. In the case of a pharmaceutical intervention this would include a process along the lines of a known biochemical pathway involved, an agent known to alter it, as established in in vitro models and binding assays, and demonstrated efficacy in animal models.

    Those animal models and in vitro experiments are in turn supported by raw biochemistry, which is built upon the last few hundred years of work in chemistry and physics.

If you try to put a homeopathic solution into the established framework governing how biochemical systems work – things like Gibbs free energy equations and Michaelis-Menten enzyme kinetics (which would be about the simplest you could find) – what comes out is absolute garbage: things like binding coefficients so high they would make the substances the most biologically active things in all creation, or energy flatly created out of thin air.

Basically, in order to make homeopathy work, you have to throw out the way solutions behave in order to make the law of infinitesimals work, and then on top of that you have to throw out basically everything written in a biochemistry textbook to make the law of similars work – each of which is in turn supported by the entire mass of physics, thermodynamics, and chemistry established over the last, say, 200 or more years. Attempts to do this have produced a steaming pile of refuse miles wide, a mixture of flat-out bad lab technique coupled with a clear demonstration of absolutely no understanding of the physical chemistry techniques used.
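The arithmetic behind the law-of-infinitesimals objection is easy to check. A back-of-the-envelope sketch, using Avogadro's number and the canonical 30C homeopathic dilution as the example:

```python
# Expected number of molecules of the original substance remaining after
# a 30C homeopathic dilution (thirty serial 1:100 dilutions) of one mole.
AVOGADRO = 6.022e23           # molecules per mole
dilution_factor = 100 ** 30   # 30C = (1:100)^30 = 1e60

molecules_left = AVOGADRO / dilution_factor
print(molecules_left)  # ~6e-37 -> effectively zero molecules remain
```

In other words, past roughly a 12C dilution the odds of even a single molecule of the starting substance surviving are vanishingly small, which is why any claimed mechanism has to invoke something like "water memory" instead of chemistry.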

    Really, I lack the words to describe things like PMID: 17004404 where the ‘water memory’ effect is claimed to be a result of quantum macro-entanglement between the patient, the homeopathic water and the person who mixed it.

    What? I mean really, just what??

David Gorski – This entry has to be one of the absolute finest things I have ever had the privilege of reading. It should go in the first chapter of every textbook used in every medical or pre-med program on the planet. This sort of material on ethics and experimental design is something I might go so far as to say every student even thinking of going into a health- or science-related field should be exposed to.

  8. KarlS says:

Regarding how hard it is to do good and ethical research: the ethical status of a study can sometimes change midway through it. Consider an example in which a cancer vaccine administered with an immune stimulant is tested against the immune stimulant without the vaccine. Each arm gets combination chemotherapy to first induce a remission. The endpoint is progression-free survival (PFS). The question, of course, is whether the vaccine makes a difference.

    Midway through the study the induction chemo protocol is proven inferior (with high confidence) to another in a head-to-head study. Should the vaccine study continue or be modified?

Participants going forward will receive a first-line induction therapy (a first primary therapy) that has been proven inferior to another, in exchange for the chance of getting an unproven vaccine.

  9. Versus says:

    A wonderful post! There is something I (the non-scientist) have never been able to understand — how do the CAM experiments get around the requirement that experiments on human subjects be based on good science? Do they get a pass on this?

  10. daedalus2u says:

    There is a large distinction in the legal rules and regulations between what is considered “treatment” and what is considered “research”.

    “Research” is the attempt to attain information that can be generalized. “Research” on whether aspirin helps a headache requires approval of an IRB. Giving someone aspirin to treat a headache doesn’t.

    Doing unapproved “research” is a crime. Giving useless and even harmful “treatments” is simply bad care.

  11. Versus says:

@daedalus2u: not sure you were trying to answer my question, but I still don’t get it. What about all the research on human subjects involving CAM — how does it get past the IRBs if it is not based on good science? How is it that a researcher gets to stick needles in someone or crack a person’s back if there is no plausible science behind acupuncture or chiropractic?

  12. qetzal says:

    Versus,

    I think the answer is that most IRBs don’t really scrutinize scientific rationale all that closely. They pay much more attention to things like the planned clinical procedures, safety monitoring, informed consent, and the like.

I admit my direct experience is limited, but that’s what I’ve seen.

  13. daedalus2u says:

    Versus, I think the way that CAM proponents get around the ethical difficulties is by having inadequate IRBs (in other words they behave unethically). The IRB is only as good as the stake-holders that comprise it.

    It is similar to the utility of laws. Honest people (for the most part) behave honestly independent of what the “law” says. Dishonest people behave dishonestly even if there are laws against such behavior.

    Ethical medical researchers consider the possibility that their hypothesis of a positive treatment effect is wrong and so they take great pains to be up front about the risks. Unethical medical researchers don’t.

It is analogous to the way of thinking that Thomas More expressed in A Man for All Seasons regarding giving the Devil the benefit of law.

    William Roper: So, now you give the Devil the benefit of law!
    Sir Thomas More: Yes! What would you do? Cut a great road through the law to get after the Devil?
    William Roper: Yes, I’d cut down every law in England to do that!
    Sir Thomas More: Oh? And when the last law was down, and the Devil turned ’round on you, where would you hide, Roper, the laws all being flat? This country is planted thick with laws, from coast to coast, Man’s laws, not God’s! And if you cut them down, and you’re just the man to do it, do you really think you could stand upright in the winds that would blow then? Yes, I’d give the Devil benefit of law, for my own safety’s sake!

    The analogy is that because one cannot be absolutely certain of God’s law, one defaults to Man’s law which one can be much more certain of (because they are written down). The implementation of Man’s law is not done by isolated individuals, but by a legal system which (in theory) is self-correcting and much less subject to error. When the legal system is working properly, any culpability for wrong decisions is shared. Having a system of laws and a legal system isn’t magic, it only “works” when the individuals in it have the appropriate knowledge and behave appropriately. It is the same with an IRB. It only “works” when the members of it have the appropriate knowledge and behave appropriately.

    Because one cannot be absolutely certain if an experimental procedure will be beneficial or not (in which case research would not be necessary), one defaults to the schemes that Dr Gorski has laid out. The various hoops are there not just to protect the subjects, but also to protect the researchers. Like any powerful system it can be abused by individuals not behaving properly. A legal system making rulings according to individual whims has the potential for great harm, as does an IRB making decisions according to individual whims. Going through the motions doesn’t make for good decisions. In science that is called Cargo Cult Science. In the other systems it is called a Kangaroo Kourt or a Rubber Stamp IRB.

  14. Versus says:

Thanks qetzal and daedalus2u for your answers to my question. For what it’s worth, there is a mechanism for reporting violations of human subjects research protections. According to the Dept. of Health and Human Services website, “OHRP’s [Office for Human Research Protections] Division of Compliance Oversight (DCO) reviews institutional compliance with the federal regulations governing the protection of human subjects in HHS-sponsored research (45 CFR 46).
DCO evaluates all written substantive allegations or indications of noncompliance with the HHS regulations. If complaints or concerns arise regarding an institution’s human subject protection practices, OHRP opens a formal evaluation and, if necessary, requires corrective action by the institution.” See http://www.hhs.gov/ohrp/compliance/. Perhaps we should all start filing complaints with the DCO each time a CAM study involving human subjects is announced.

  15. qetzal says:

    Another factor to consider – at least some of these trials get funded by NCCAM. If anyone was going to pass judgement on the scientific rationale of a proposed trial, it should be the funding agency.

    Of course, we know that NCCAM doesn’t operate that way. But if an IRB did object to such a funded study, the PI could simply point out that NCCAM approved it. I think few IRBs would continue to object at that point.

    Similarly, if an NCCAM-funded study was reported to OHRP solely for inadequate scientific rationale, I predict they’d decline to act for the same reason.

  16. dr treg says:

    “Moreover, the bioethics has been in general evolving towards ever more protection of human research subjects”.

You seem to forget that, thankfully, there is more protection for animal research subjects as well, whose suffering you seem to dismiss as “animal models”.

  17. David Gorski says:

    Um, no I haven’t. It just wasn’t the topic of this post, which was long enough already without adding animal models to the mix. Indeed, I did a long post about animal research last year:

    http://www.sciencebasedmedicine.org/?p=61

    Not my fault if you haven’t seen it, but it is your fault to jump to conclusions about my views without actually knowing what those views are.

  18. Esattezza says:

    How is it that you always manage to post on a topic I’m about to write a paper on? (And can you continue to do so for the rest of my academic career? PLEASE!)
