On the “individualization” of treatments in “alternative medicine,” revisited

As I contemplated what I’d like to write about for the first post of 2012, I happened to come across a post by former regular and now occasional SBM contributor Peter Lipson entitled Another crack at medical cranks. In it, Dr. Lipson discusses one characteristic that allows medical cranks and quacks to attract patients, namely the ability to make patients feel wanted, cared for, and, often, happy. As I (and several of us at SBM) have said before, it’s not necessary to invoke magic, quackery, or pseudoscience in order to show empathy to patients and provide them with the “human touch” that forges a strong therapeutic relationship between physician and patient and maximizes placebo effects without deception. In the old days, this used to be called “bedside manner,” but in these days of capitation and crappy third party payor reimbursement it’s very difficult for physicians to take the time necessary to listen to patients and thereby build the bonds of trust and mutual respect that can augment the treatments that are prescribed. Unfortunately, because of this the quacks have been all too eager to leap into the breach.

One aspect of this tendency of medical cranks is to claim that they somehow “individualize” their treatment to the patient, as Peter points out:

There are a number of so-called holistic doctors in town who claim to practice “individualized” medicine. What this really means isn’t clear. My colleagues and I certainly individualize the treatment plans for all of our patients, using data gleaned from decades of scientific studies of large groups of patients. What “individualized” care seems to mean in this other context is “stuff I made up to make that patient feel more unique and special.”

I put it slightly differently when I wrote about the very same phenomenon a couple of years ago. Having come across a “success” story told by an “alternative practitioner” in which a patient was shuttled from treatment to treatment to treatment over a long period of time before finally getting better on her own (an improvement that, predictably, the last practitioner, a homeopath, took credit for), I described the problem with “individualized” treatments in “alternative” medicine, “complementary and alternative medicine” (CAM), “integrative medicine” (IM), or whatever you want to call it:

Here’s the problem with “individualized” treatments. Taken to an extreme, as many alternative medicine practitioners do, “individualization” becomes in essence an excuse to do whatever the heck the practitioner feels like and not to have to list diagnostic criteria or show actual efficacy of their treatments in a way that others can replicate. Look at Dr. Elliott’s statement: “Each patient was so unique that what cured one person might have no effect on the next.” Certainly, organisms such as humans can and do show considerable variability in their biology and response to treatment, but rarely so much that what “cures” one person will have no effect on the next. Such extreme emphasis of “individualization” is virtually custom-designed to lead to exactly the sort of marathon trial-and-error treatment histories he described.

A corollary to this claim of extreme “individualization” by CAM practitioners such as homeopaths is the frequent assertion by promoters of non-science-based medical treatments that science- and evidence-based medicine (SBM/EBM) can’t study their woo because its “gold standard” for determining whether a treatment “works” is the randomized controlled clinical trial (RCT) and RCTs can’t study therapies that are so “individualized.” This assertion is, of course, a Tokyo-sized straw man being gleefully destroyed by Godzilla-sized woo, in that SBM/EBM is not just about RCTs (particularly SBM). It’s also just plain wrong, in that it is quite possible to come up with trial designs that take patient individualization into account. True, it’s more difficult, but it’s by no means impossible. Nonetheless, the cry that “RCTs can’t study my woo because it’s ‘individualized’” continues to go up in the alt-med blogosphere, most recently when a homeopath named Judith Acosta posted just such an argument at—where else?—that wretched hive of scum and quackery, The Huffington Post, in the form of a two-part post entitled A personal case for classical homeopathy:

The problem is that homeopathy is aimed at treating the individual with a single remedy, chosen specifically for him or her. It is not for treating masses of people with the same pill. Twenty people could have the “same” flu, but each one would need a different remedy (not necessarily Oscillococcinum) and be rightly cured because each one would manifest illness in a way that is utterly unique to him-/herself. We always treat the person, not the disease. As such it is exceedingly difficult, if not impossible to replicate homeopathic treatment the way pharmaceutical companies try to do in drug trials.

Yes, Acosta is actually making the argument that individuals are so completely unique that no two of them will require the same treatment for the same disease, just like the example I used two years ago from Dr. Travis Elliott, who could actually proclaim with an apparently straight face that “each patient was so unique that what cured one person might have no effect on the next.” Of course, the silly part of this extreme “individualization” of treatment is that there are really no commonly agreed-upon standards based in objective evidence to guide practitioners in “individualizing” therapy. It’s not for nothing that I refer to this sort of “personalized medicine” as “making it up as you go along,” a spectrum that ranges from the sort of claims a homeopath like Acosta makes to the way Dr. Stanislaw Burzynski co-opts the language of genomics in order to justify his own version of “make it up as you go along”-style “personalized medicine.”

Faux “individualization” versus science

Part and parcel of this faux “individualization” advocated by various CAM promoters is an intense need to attack EBM/SBM as the enemy of such “individualization.” At least, that is the rationale. In reality, practitioners of pseudoscientific medical systems and treatments recognize, either implicitly or explicitly, that whenever their woo is tested through the scientific method and RCTs it fails. A real scientist or practitioner of SBM when faced with such a result would abandon the therapy that fails scientific validation. These pseudoscientific treatments, however, are more about belief than science, and when belief collides with science the believer must somehow find a reason to discount or reject science. That is the reason for the extreme hostility to EBM/SBM among physicians and scientists who ascribe to pseudoscientific or mystical belief systems like homeopathy, energy healing, acupuncture, traditional Chinese medicine, Ayurveda, and other prescientific medical belief systems that cling to medicine like kudzu and whose roots slowly destroy the scientific basis of medicine. I’ve seen this many times before and have even addressed it at least a couple of times on this blog, for instance, when Dr. Andrew Weil launched a rather furious broadside at EBM just last year in which he used the common straw man that EBM is only about RCTs.

I saw a more sophisticated version of the sorts of attacks on SBM/EBM made by apologists for quackery just before the holidays in the form of two articles. One appeared on Gaia Health and was entitled Evidence-based medicine is a fraud. Here’s why. It was based on an article voicing similar sentiments from the world of orthomolecular medicine, a form of megavitamin supplementation quackery embraced by Linus Pauling in his later years, when he became enamored of the idea that he could cure cancer and the common cold with enormous doses of vitamin C. Orthomolecular medicine advertises its love of “individualization” and “personalization” in its slogan, “Therapeutic nutrition based upon biochemical individuality.” This slogan amuses me to no end, given that the motto of orthomolecular medicine seems to be, “If some vitamins are good, more must be better. A lot more.” In any case, the other article is by Steve Hickey, PhD, and Hilary Roberts, PhD, and is entitled Evidence-Based Medicine: Neither Good Evidence nor Good Medicine. Combined, these articles invoke a collection of straw man arguments, obvious and simple criticisms of EBM that do not come close to invalidating its usefulness, and a hilariously inapt analogy, all in a lecturing tone, complete with “lessons” in statistics. In particular, these articles implicitly and explicitly argue for the inclusion of “all data,” including lousy data, the purpose of which, obviously, is to lower the bar of evidence for the pseudoscience and pseudomedicine they want to promote.

I’ll show you what I mean.

Let’s start with the Gaia Health article, because it’s much easier to dispose of: (1) it’s not original, parroting as it does the arguments of the original article, and (2) it dumbs down even its already transparent source material. In fact, it can best be summarized by its conclusion:

At best, Evidence Based Medicine is pseudo science or junk science. It’s a fraud designed to give the impression that statistics derived from studies can possibly tell us much of value about how to deal with or treat an individual human.


The reality, though, is that EBM is fraudulent. It gives the impression of proof for efficacy of medical treatment, but is largely a smokescreen designed to sell medical products.

This latter charge, of course, smoked my irony meter, melting it into a quivering, bubbling blob of liquid plastic and copper. The reason is that attacking EBM in this manner is itself largely a smokescreen designed (1) to give the appearance that pseudoscience is legitimate science by attacking legitimate science and (2) thereby to sell products and services, such as supplements and treatments like acupuncture. Predictably, the first attack the article adds to the recycled arguments of Hickey and Roberts is, in essence, the “pharma shill gambit”:


That, of course, is the bottom line. It’s why the term EBM is invoked—to give the impression that medical treatments are based on meaningful research. The purpose isn’t to produce research that benefits patients. The purpose is to produce research that benefits the pockets of Big Pharma and Big Medicine.

No one who defends EBM/SBM claims that it is perfect or denies that there are problems with it and that money from pharmaceutical companies can be a malign influence that often doesn’t serve science, medicine, or patients. However, from the criticisms in these articles, it’s quite obvious that, even if all pharma money were removed from the drug approval process, even if profit were never a consideration, the authors of these articles still would not approve of EBM. The reason is simple. They are interested in promoting medical modalities that are not supported by science or evidence and want to find a way to make it seem as though they are. The way to do that is to attack EBM as currently configured, often (as in this case) invoking the “individualization” gambit. For instance, the Gaia Health article rails against “one size fits all” medicine, which is all well and good, except that the very essence of practicing EBM is applying what is known from clinical trials to individual patients, something Peter has emphasized time and time again. More on that later. In the meantime, let’s look at the article that inspired it all.

Hickey and Roberts make their disdain for EBM apparent right from the opening paragraphs:

Evidence-based medicine (EBM) is the practice of treating individual patients based on the outcomes of huge medical trials. It is, currently, the self-proclaimed gold standard for medical decision-making, and yet it is increasingly unpopular with clinicians. Their reservations reflect an intuitive understanding that something is wrong with its methodology. They are right to think this, for EBM breaks the laws of so many disciplines that it should not even be considered scientific. Indeed, from the viewpoint of a rational patient, the whole edifice is crumbling.

The assumption that EBM is good science is unsound from the start. Decision science and cybernetics (the science of communication and control) highlight the disturbing consequences. EBM fosters marginally effective treatments, based on population averages rather than individual need. Its mega-trials are incapable of finding the causes of disease, even for the most diligent medical researchers, yet they swallow up research funds. Worse, EBM cannot avoid exposing patients to health risks. It is time for medical practitioners to discard EBM’s tarnished gold standard, reclaim their clinical autonomy, and provide individualized treatments to patients.

No, no, no, no, no.

This is a massive straw man, a caricature of EBM. For example, no one (at least no one whom I know or whose work I read) claims that EBM “mega-trials” can find the cause of disease. That’s not what they are designed for; they’re designed to determine whether therapies are efficacious and safe. Determining the cause of disease depends on a combination of basic science, clinical observations, and clinical trials. In other words, it depends upon the totality of scientific evidence, of which clinical trials are just a part. It doesn’t take Hickey and Roberts long to complain about how badly they think Linus Pauling was treated for his advocacy of vitamin C quackery. I don’t know about you, but when I see complaints like this, I know I’m dealing not with a science-based critique of anything, but rather with cranks complaining that they are not taken seriously. In fact, when I see them complain that EBM in practice means “relying on a few large-scale studies and statistical techniques to choose the treatment for each patient,” right before complaining about how Linus Pauling was criticized for using megadoses of vitamin C to treat cancer, I can’t help but think their real purpose is incredibly obvious.

Legitimate versus illegitimate complaints about EBM

So what complaints against EBM are encompassed in this article? Remember, several of us on this blog criticize EBM fairly frequently, particularly Kimball Atwood. You might think that, even in this article, we might find something to agree with. You’d be (mostly) wrong. Hickey and Roberts, amazingly, are highly talented at making what I like to call “Well, duh!” criticisms of EBM. For instance, they make a great show of pointing out that “statistically significant” doesn’t necessarily mean “significant.” As I said, “Well, duh!” That’s a very basic principle that virtually every physician knows. In fact, when we discuss clinical trials in various venues, we argue all the time about whether a “statistically significant” result is clinically significant when the difference is small. We discuss these sorts of issues on SBM all the time. Yet Hickey and Roberts seem to think they are the first people to have noticed that EBM can sometimes detect differences that are probably not clinically significant. In fact, from this incredibly simplistic analogy, which they seem to consider highly profound, it’s painfully obvious that neither Hickey nor Roberts is a physician or has ever taken care of a patient:

To explain this, suppose we measured the foot size of every person in New York and calculated the mean value (total foot size/number of people). Using this information, the government proposes to give everyone a pair of average-sized shoes. Clearly, this would be unwise-the shoes would be either too big or too small for most people. Individual responses to medical treatments vary by at least as much as their shoe sizes, yet despite this, EBM relies upon aggregated data. This is technically wrong; group statistics cannot predict an individual’s response to treatment.

Well, yes and no. Group statistics can’t predict precisely an individual’s response to treatment, but if a clinician combines group statistics, biomarkers, and considerations of the patient’s other clinical variables, it is possible to estimate whether a treatment is likely to work in a patient and what the odds are that it will work. Hickey and Roberts, for all their invocations of complexity later in their article, seem very prone to binary thinking. To them, either a treatment works or it doesn’t, and they think that EBM results can’t inform or predict whether a treatment will work. This is also nihilistic thinking, in which Hickey and Roberts fall for the “fallacy of the perfect solution.” To them, if EBM isn’t perfect, then it’s crap. If it’s not painfully obvious how to apply RCT results to individual patients, then RCTs are crap.
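To make this concrete, here is a toy calculation of my own (not from either article; the relative risk and the two baseline risks are invented for illustration) showing how a single trial result translates into very different expected benefits for different patients:

```python
def expected_absolute_benefit(baseline_risk: float, relative_risk: float) -> float:
    """Absolute risk reduction for one patient, given the trial's relative risk."""
    return baseline_risk * (1 - relative_risk)

# Hypothetical trial result: treatment cuts the event rate to 0.75 of
# control (a relative risk of 0.75). The baseline risks are invented.
rr = 0.75
for label, baseline in [("high-risk patient", 0.40), ("low-risk patient", 0.05)]:
    arr = expected_absolute_benefit(baseline, rr)
    print(f"{label}: absolute benefit {arr:.1%}, NNT ≈ {1 / arr:.0f}")
```

The same trial result implies an expected absolute benefit of 10% (number needed to treat of 10) for the high-risk patient but only about 1% (NNT of 80) for the low-risk patient, which is exactly the sort of individualization that group statistics support rather than preclude.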

This same sort of simplistic thinking infuses the entire analogy above, which seems profound on a quick, superficial reading, but look at it more closely and you’ll see that it’s a parody, a straw man, if you will, of how EBM is practiced. Faced with the shoe problem, the EBM approach would be to estimate how many people fall into different ranges of shoe sizes and then to buy a range of sizes that encompasses as much of the population as possible, in the right proportions. Think of it as taking into account other clinical indicators, biomarkers, and patients’ clinical characteristics. Is that solution perfect? No, of course not. Will there be people whose shoe sizes won’t be accommodated? Of course, but they will (usually) be in the minority.
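A quick simulation makes the contrast obvious. This is my own toy sketch (the size distribution is invented, not real anthropometric data), comparing the “one average shoe for everyone” straw man with the stock-a-range approach just described:

```python
import random

random.seed(0)

# Toy foot "sizes": roughly normal around 9 (invented numbers).
sizes = [round(random.gauss(9, 1.5)) for _ in range(100_000)]
mean_size = round(sum(sizes) / len(sizes))

# The straw-man policy: one average-sized shoe for everyone.
fit_mean_only = sum(s == mean_size for s in sizes) / len(sizes)

# The actual approach: stock a range of sizes around the mean.
stocked = set(range(mean_size - 3, mean_size + 4))
fit_range = sum(s in stocked for s in sizes) / len(sizes)

print(f"fit by the single mean size: {fit_mean_only:.0%}")
print(f"fit by a stocked range:      {fit_range:.0%}")
```

The single mean size fits only about a quarter of the population, while a modest range of stocked sizes fits nearly everyone, which is the point: nobody who applies group statistics actually hands out one average shoe.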

The next thing that Hickey and Roberts don’t like about EBM is that it “selects” evidence. They complain about how meta-analyses leave out studies that don’t meet strict criteria for study quality and how EBM relies on “best evidence,” as though that were a bad thing. They present an example in which a curve is fit to a graph by removing data, lecturing us that “one of the first lessons for science students is to not select the best evidence; all data must be considered.” Again, this is a misleading comparison. It is true that we don’t want to “cherry pick” evidence, which is what Hickey and Roberts are referring to, but on the other hand evidence that is less reliable should be deemphasized or thrown out, and that’s all that meta-analyses and the EBM emphasis on “best evidence” do. One can argue about what is defined as “best evidence,” and, in fact, several of us have criticized EBM’s reliance on frequentist statistics, such as those that cause Hickey and Roberts so much agita. Indeed, many are the times that we’ve complained that EBM’s “best evidence” overemphasizes RCTs and downplays basic science considerations and prior plausibility. Somehow, though, I strongly suspect that Hickey and Roberts aren’t about taking these aspects into account. Rather, they seem, more than anything else, to be about including anecdotal and observational evidence, hence the emphasis on including “more” kinds of evidence.

Perhaps the most ridiculous argument Hickey and Roberts make is this one:

The problems with EBM continue. It breaks other fundamental laws, this time from the field of cybernetics, which is the study of systems control and communication. The human body is a biological system and, when something goes wrong, a medical practitioner attempts to control it. To take an example, if a person has a high temperature, the doctor could suggest a cold compress; this might work if the person was hot through over-exertion or too many clothes. Alternatively, the doctor may recommend an antipyretic, such as aspirin. However, if the patient has an infection and a raging fever, physical cooling or symptomatic treatment might not work, as it would not quell the infection.

In the above case, a doctor who overlooked the possibility of infection has not applied the appropriate information to treat the condition. This illustrates a cybernetic concept known as requisite variety, first proposed by an English psychiatrist, Dr. W. Ross Ashby. In modern language, Ashby’s law of requisite variety means that the solution to a problem (such as a medical diagnosis) has to contain the same amount of relevant information (variety) as the problem itself. Thus, the solution to a complex problem will require more information than the solution to a straightforward problem. Ashby’s idea was so powerful that it became known as the first law of cybernetics. Ashby used the word variety to refer to information or, as an EBM practitioner might say, evidence.

While this is interesting speculation, Hickey and Roberts do not present any evidence that cybernetic concepts of control and communication apply to biological systems in this way. In any case, whenever you see someone trying to apply a “fundamental law” of one field to another field, be very, very skeptical. Ashby’s law, as described above, is at best tangential to the problem of taking care of patients, in that physicians applying EBM already take into account more information to solve complex problems than they do to solve simple problems. The same is true of designing clinical trials to test treatments for complex versus simpler clinical problems. In fact, take a look at the “levels of evidence” paradigm of EBM. That’s hardly “simple.” Take a look at some actual EBM guidelines. My favorite example is the National Comprehensive Cancer Network (NCCN) guidelines published for nearly every cancer. I use the NCCN guidelines for breast cancer because I’m most familiar with them. The 2011 guidelines take up 148 pages, packed with text, graphs, decision trees, and discussions of areas of uncertainty.

I intentionally picked one of the simpler sets of guidelines for breast cancer.

Hickey and Roberts make it sound as though applying EBM is as simple as looking at a clinical trial or two, taking the results, and applying them to a patient. EBM might well have its deficiencies, but Hickey and Roberts, intentionally, I believe, make EBM seem simplistic to the point that it paints physicians as simpletons, when in reality applying EBM guidelines like the ones above is anything but simple. It requires clinical judgment and the ability to fit an individual patient into our knowledge base and determine what will likely be the best treatment, both of which require a deep understanding of the clinical evidence. Even if Ashby’s law applied to human disease, I would argue that EBM guidelines roughly follow it. Hickey and Roberts’ parody of EBM is intentionally designed so that it does not.

It doesn’t exactly help my confidence in Hickey and Roberts to see that they don’t quite seem to understand what the ecological fallacy is. They describe it as “wrongly using group statistics to make predictions about individuals.” This is not quite correct. I’ve written about the ecological fallacy before, and that’s not exactly what it means. In general, the ecological fallacy is invoked in epidemiology and refers to performing a group-level analysis and imputing predictions about individuals from it. In epidemiology, this means taking large group averages, for which individual-level data are not available, and trying to make correlations based on them. Doing this can introduce bias and exaggerate apparent correlations compared to doing the same analysis using individual-level data. The reason the ecological fallacy can be a problem in making inferences is confounding factors that might be the real explanation for any correlations observed. In other words, the problem with the ecological fallacy is that it fails to take confounding factors adequately into consideration, which can be done more easily with individual-level data. RCTs, in contrast, are carefully designed and controlled and use individual-level data. While it’s not entirely wrong to be concerned about applying the results of RCTs to individual patients, to refer to doing so as the “ecological fallacy” (i.e., wrongly attributing group-level correlations to individuals) is a bit of a stretch. RCTs, after all, are not the sort of group-level comparisons that the ecological fallacy refers to.
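A small simulation (my own, with invented “regions” and a deliberately built-in confounder) shows what the ecological fallacy actually looks like: a strong group-level correlation that all but vanishes at the individual level.

```python
import random

random.seed(1)

def corr(xs, ys):
    """Pearson correlation, stdlib-only."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Ten "regions"; a region-level factor shifts BOTH exposure and outcome
# (a confounder), but within each region the two are independent.
ind_x, ind_y, grp_x, grp_y = [], [], [], []
for shift in range(10):
    xs = [shift + random.gauss(0, 10) for _ in range(500)]
    ys = [shift + random.gauss(0, 10) for _ in range(500)]
    ind_x += xs
    ind_y += ys
    grp_x.append(sum(xs) / len(xs))
    grp_y.append(sum(ys) / len(ys))

print(f"group-level correlation:      {corr(grp_x, grp_y):.2f}")
print(f"individual-level correlation: {corr(ind_x, ind_y):.2f}")
```

Averaging washes out the within-region noise, so the region means correlate almost perfectly even though exposure and outcome are unrelated for any given individual; that inferential trap is what the fallacy names, and it is quite different from applying an RCT’s individual-level data to a patient.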

Having identified what they think to be the problems with EBM, Hickey and Roberts then converge upon a solution that destroyed another of my irony meters:

Doctors must encompass enough knowledge and therapeutic variety to match the biological diversity within their population of patients. The process of classifying a particular person’s symptoms requires a different kind of statistics (Bayesian), as well as pattern recognition. These have the ability to deal with individual uniqueness.

As I’ve pointed out, Kimball Atwood has extensively argued for Bayesian statistics. Part and parcel of Bayesian statistics is estimating prior probability based on basic science. Little do Hickey and Roberts realize that a proper application of Bayesian statistics to the sorts of treatments encompassed by CAM would not help them. Not at all. There’s a reason for an even greater hostility towards the concept of SBM than towards EBM among CAM promoters. A Bayesian analysis of homeopathic remedies, for instance, would start from a prior probability of efficacy approximating zero, based on their claimed principles of action. Ditto reiki, therapeutic touch, and basically any form of “energy healing.” And, I might add, although its prior probability is not at homeopathic levels, given that it involves giving actual chemical substances, Bayes would not be kind to orthomolecular medicine either, which is very good at sounding scientific but in practice boils down to giving megadoses of vitamins and other nutrients. As for “pattern recognition,” who knows what they mean by that? Actually, on second thought, I think I do know what they mean. As I like to say, to purveyors of woo, “pattern recognition” means seeing what they want to see based on anecdotal “clinical experience” or small clinical trials and acting on that.
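A back-of-the-envelope application of Bayes’ theorem (my own sketch; the priors, power, and false-positive rate are illustrative assumptions, not measured values) shows why a near-zero prior is so devastating:

```python
def posterior(prior: float, power: float, alpha: float) -> float:
    """P(treatment really works | one positive trial), via Bayes' theorem."""
    true_pos = prior * power
    false_pos = (1 - prior) * alpha
    return true_pos / (true_pos + false_pos)

# A standard trial: 5% false-positive rate, 80% power. The priors are
# illustrative guesses based on plausibility, not measured quantities.
power, alpha = 0.80, 0.05
for name, prior in [("plausible drug", 0.30), ("homeopathy", 1e-6)]:
    print(f"{name}: posterior after one positive trial = {posterior(prior, power, alpha):.6f}")
```

Under these assumptions, a single “positive” trial lifts the plausible drug to a posterior of roughly 87%, while homeopathy remains overwhelmingly likely to be a false positive, which is exactly why CAM promoters dislike SBM even more than EBM.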

In fact, this passage right here leads me to think their attitude towards data is at the very least confused and at the very worst intentionally deceptive:

Population statistics do not capture the information needed to provide a well-fitting pair of shoes, let alone to treat a complex and particular patient. As the ancient philosopher Epicurus explained, you need to consider all the data.

Restricting our information to the “best evidence” would be a mistake, but it is equally wrong to go to the other extreme and throw all the information we have at a problem. Just as Goldilocks in the fairy-tale wanted her porridge “neither too hot, nor too cold, but just right” doctors must select just the right information to diagnose and treat an illness. The problem of too much information is described by the quaintly-named curse of dimensionality, discussed further below.

Later, they write:

In their models and explanations, scientists aim for simplicity. By contrast, EBM generates large numbers of risk factors and multivariate explanations, which makes choosing treatments difficult. For example, if doctors believe a disease is caused by salt, cholesterol, junk food, lack of exercise, genetic factors, and so on, the treatment plan will be complex. This multifactorial approach is also invalid, as it leads to the curse of dimensionality. Surprisingly, the more risk factors you use, the less chance you have of getting a solution. This finding comes directly from the field of pattern recognition, where overly complex solutions are consistently found to fail. Too many risk factors mean that noise and error in the model will overwhelm the genuine information, leading to false predictions or diagnoses.

Yes, you read that right. Hickey and Roberts are simultaneously criticizing EBM for being too simplistic and criticizing it for being so complex that it makes treatment decisions difficult. Which is it? Who knows? I rather suspect that EBM is too simple or too complex for them depending on what they need it to be in order to justify their disdain for EBM and their support for the pseudoscience of “orthomolecular medicine.” In any case, just as we don’t need to invoke pseudoscience and quackery to provide patients with the “human touch” while providing care, neither do we need to use “pattern recognition” of the kind that Hickey and Roberts apparently mean (i.e., whatever seems to support their use of various forms of CAM) in order to improve the fineness with which we tailor our treatments to patients based on EBM. In fact, in the new era of genomic medicine, what we are now faced with is a flood of information whose application to individual patients is proving to be exceedingly difficult. How would Hickey and Roberts deal with this problem? They don’t say, probably because providing real solutions is not their intent. Attacking EBM is, because EBM stands in the way of their practicing pseudomedicine.
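Ironically, the “curse of dimensionality” they invoke does have a standard illustration, distance concentration: as pure-noise “risk factors” accumulate, every case ends up roughly the same distance from every other, so similarity-based “pattern recognition” of exactly the kind they advocate loses its discriminating power. A toy sketch of my own (not from their article):

```python
import random

random.seed(2)

def dist(a, b):
    """Euclidean distance between two points."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# With more pure-noise "risk factors" (dimensions), the nearest and
# farthest of 200 random patients become almost equally far from a
# query point: the nearest/farthest ratio creeps toward 1.
ratios = {}
for dim in (2, 10, 100, 1000):
    patients = [[random.random() for _ in range(dim)] for _ in range(200)]
    query = [random.random() for _ in range(dim)]
    d = sorted(dist(query, p) for p in patients)
    ratios[dim] = d[0] / d[-1]
    print(f"{dim:>4} factors: nearest/farthest distance ratio = {ratios[dim]:.2f}")
```

In two dimensions the nearest neighbor is far closer than the farthest; with a thousand noise dimensions the ratio approaches 1 and “finding similar patients” becomes meaningless, an argument that cuts against anecdote-driven pattern matching at least as sharply as against EBM.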

Hickey and Roberts conclude:

Personalized, ecological, and nutritional (orthomolecular) medicines are converging on a truly scientific approach. We are entering a new understanding of medical science, according to which the holistic approach is directly supported by systems science. Orthomolecular medicine, far from being marginalized as “alternative,” may soon become recognized as the ultimate rational medical methodology. That is more than can be said for EBM.

I’ve lost track of how many times I’ve seen this claim, which suggests to me that early 2012 might well be the time to address it in depth. After all, practitioners as diverse as Hickey and Roberts and even Stanislaw Burzynski himself make it, and, although I started to address such claims when discussing Burzynski, I need to do a more general post on the subject. The specific claim that needs a bit of discussion, perhaps in a “part III” of this series, is that somehow CAM is “personalized” and akin to new findings in systems biology. The co-opting of systems biology by woo has been a personal sore point with me for a while, and Burzynski’s “personalized gene-targeted cancer therapy” brought it to the fore again just last month.

In the meantime, I’ll conclude by pointing out that attacks on EBM/SBM by CAM apologists serve multiple purposes. Because CAM practitioners can’t provide strong evidence of efficacy, they have to attack the system of science that shows their woo doesn’t work. In addition, the claims of extreme “individualization” as compared to EBM provide a convenient excuse for when CAM fails when tested scientifically. After all, have you ever seen a CAM proponent complain that a positive trial of CAM was not adequately “individualized” and therefore is invalid? Of course not! “Individualization” is only invoked when convenient, to explain failure. Finally, the claims of extreme “individualization” serve marketing purposes, catering to people’s desire to feel special and unique, stroking their egos, and portraying following EBM guidelines as mindless conformity as opposed to “thinking for themselves.” The problem is, this individualization isn’t individualization based on science and a better understanding of individual biology. It’s an “individualization” that means “making it up as you go along.” EBM has its deficiencies, and we’ve discussed them frequently on this blog, but in the end I’d take it any day over the faux “individualization” promoted by Hickey and Roberts and their fellow travelers.

Posted in: Basic Science, Clinical Trials


32 thoughts on “On the “individualization” of treatments in “alternative medicine,” revisited”

  1. DW says:

    It would be interesting to study the extent to which the “individualized” claim for alternative medicine is even true. It’s an advertising claim really. First you’d have to define “individualized” in some coherent way, then study what exactly these alternative practitioners are actually doing with their patients that differs – if it does – from what other physicians are ordinarily doing. Surely there are some measurable variables – do the alternative practitioners spend more time with clients? That is study-able, but it doesn’t necessarily mean the treatment is more “individualized.” As to what remedies are actually prescribed for given conditions, that too is certainly study-able.

    I would strongly suspect that the claim is bogus. Despite the stories about how every patient with an ordinary flu is different, my guess is, you’d find the alt-practitioner prescribing pretty much the same thing to everyone, after all. I mean, patients don’t know what was prescribed to the person before them and the person after them, right? They’re TOLD their treatment is individualized, but how would they know the difference? I bet the claim to spend more time per patient would also go up in a puff of smoke once you measured it.

    Someone should do this research. Of course, it wouldn’t convince fans of alt med of anything … but it might get closer to the question of what exactly alt med fans *want* that they actually *get* from their favorite purveyors of woo. I think it’s something other than “individualized” treatment – a sense of themselves as special, perhaps?

  2. DrRobert says:

    Dr. Gorski, I think this is one of your best posts. This is an absolutely fantastic read.

    In 2010, Brien et al. found that in the case of homeopathy, patients actually benefited from the lengthy and involved consultation process, whereas patients who received homeopathy with no lengthy consultation process had no improvement over placebo.

    So, as you said, the placebo effect in alt-med is largely due to having someone just spend a lot of time with the patient. Of course, this may only lead to subjective and not objective improvements.

    (Brien, S.; Lachance, L.; Prescott, P.; McDermott, C.; Lewith, G. (2010). “Homeopathy has clinical benefits in rheumatoid arthritis patients that are attributable to the consultation process but not the homeopathic remedy: A randomized controlled clinical trial”. Rheumatology 50 (6): 1070–1082.)

    Ernst wrote, regarding this finding:

    “Proponents of homeopathy insist that this is a contradiction. Moreover, they claim that the clinical trial is an inadequate research tool for testing their treatment and that therefore the true picture is provided by the observational data. But the much more logical conclusion is what Brien et al. have now demonstrated experimentally: patients benefit from a long and empathic encounter with a homeopath but not from the remedy. Homeopaths might argue that these results prove that homeopathy, even though it is not efficacious, is nevertheless effective. But I fear that this would be misleading: the effective element is not specifically homeopathy but the therapeutic relationship in general.

    The recognition of the therapeutic value of an empathetic consultation is by no means a new insight, yet it is knowledge that is in danger of being forgotten. Modern mainstream medicine frequently seems to neglect the importance of medical core values such as empathy, sympathy, time, understanding and holism. This creates a situation where alternative practitioners tend to provide the non-specific and mainstream doctors the specific effects. Clearly, this is wrong and may well be one reason why patients consult alternative medicine practitioners. I would argue that any good medicine must offer both, and we should be skeptical of those clinicians who opt for providing only one or the other.”

    (Ernst, E. (2010). “Homeopathy, non-specific effects and good medicine”. Rheumatology 50 (6): 1007–1008.)

    Taking all this together, it’s easy to imagine that a patient who is hoodwinked into believing they are receiving a mystical personalized treatment will have placebo benefit. Of course, we all know that a medical doctor can not pay the bills if they see 8 patients a day. But as doctors, we can aim to provide the best possible doctor-patient interaction.

  3. cervantes says:

    All that said, one of the most important challenges facing medicine right now is to get a better understanding of heterogeneity of treatment effects (HTE) and to develop experimental and analytic methods that do a better job of sorting out who is most likely to benefit from what treatment. Statistically significant differences in outcomes between two study arms typically — not sometimes or as an interesting anomaly, but typically — conceal large differences in effect within each arm. Some people in the arm that is found superior are often, in fact, harmed; while some in the less efficacious arm in fact benefit. This is not unusual at all, it’s the truth about RCTs that is often overlooked.

    Quoting from the post, “Certainly, organisms such as humans can and do show considerable variability in their biology and response to treatment, but rarely so much that what “cures” one person will have no effect on the next.” I have to differ — this is not rare at all. Much of medical practice — including oncology, as Dr. G ought well to know — is essentially empirical. Physicians may be uncertain of the diagnosis, so they try something and end up ruling it out because the symptoms are unresponsive. Some people respond to anti-depressants, but most do not. (True fact, though little mentioned.) “Cancer” of course is not one disease but innumerable different genetic anomalies, and the responsiveness of a cancer to a treatment is highly variable. I could go on and on but this is not the place for it.

    One of the most important missions of the new Patient Centered Outcomes Research Institute is to work on sorting all this out – discovering how to provide the necessary evidence and ways of communicating it that will enable us to do a better job of determining what is right for the individual patient. The woomeisters are full of crap, but this is an actual, true, real, genuine challenge for science-based medicine.
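    A toy simulation makes the within-arm heterogeneity concrete (the Gaussian response model and all effect sizes below are invented purely for illustration):

```python
import random

random.seed(1)

# Toy model of heterogeneity of treatment effects (HTE): each simulated
# patient has an individual benefit from drug A and from drug B.
# All numbers are invented for illustration only.
N = 10_000
benefit_a = [random.gauss(2.0, 3.0) for _ in range(N)]  # A: better on average
benefit_b = [random.gauss(1.0, 3.0) for _ in range(N)]  # B: worse on average

mean_a = sum(benefit_a) / N
mean_b = sum(benefit_b) / N

# The trial verdict: drug A "wins" on the group mean...
assert mean_a > mean_b

# ...yet a sizable minority of patients on the "winning" drug are
# actually harmed (negative benefit).
harmed_by_a = sum(1 for x in benefit_a if x < 0) / N
print(f"mean benefit: A = {mean_a:.2f}, B = {mean_b:.2f}")
print(f"fraction harmed by the 'winning' drug A: {harmed_by_a:.0%}")
```

    With these invented numbers, roughly a quarter of the “winning” arm is harmed, even though the between-arm comparison is unambiguous.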

  4. DW says:

    “the placebo effect in alt-med is largely due to having someone just spend a lot of time with the patient”

    So there’s one theory: the difference is “time.” But is this true? Is there any evidence alternative practitioners spend more time with patients? If there is evidence that additional time spent with patients increases or potentiates the placebo effect, how much time are we talking about?

    “the much more logical conclusion is what Brien et al. have now demonstrated experimentally: patients benefit from a long and empathic encounter with a homeopath but not from the remedy.”

    So there’s a second suggestion: “empathy” makes the difference. Again, is there evidence that alternative practitioners are actually more empathetic? Has someone studied this? Using what sample size, and what measures of empathy?

    One possibility is that time and empathy are really the same thing, that is, the patient perceives he/she has been treated empathetically just because the doctor spent a long time on the consultation.

    I have no credentials to critique those studies but seriously, I doubt the “time and empathy” hypothesis is even true. I do not think we are talking about time and empathy. I think we are talking about advertising.

  5. David Gorski says:

    “Certainly, organisms such as humans can and do show considerable variability in their biology and response to treatment, but rarely so much that what “cures” one person will have no effect on the next. ” I have to differ — this is not rare at all.


    “Cancer” of course is not one disease but innumerable different genetic anomalies and the responsiveness of a cancer to a treatment is highly variable. I could go on and on but this is not the place for it.

    Well, yes and no. In cancer, especially, what we are looking at are degrees of response. It’s usually not cure versus no cure. I have, of course, written extensively about genetic variability in cancer right on this very blog, as well as the issues of lead time bias, cancer evolution, and various things that can mean the difference between a treatment being efficacious or not. In any case, the passage I quoted was originally written five years ago and then updated for SBM a year and a half ago; so perhaps these days I’d change the word “rare” to “relatively uncommon.” Be that as it may, “personalized medicine” and “individualization” of treatments have turned out to be devilishly difficult and complicated. Although, as I said in some of my posts about Burzynski’s co-opting of the concept, I actually do believe that one day we’ll have a decent form of “personalized medicine” based on genomics and biology, I also believe that “personalized medicine” right now is all too often overhyped in the absence of much evidence that it does much better than old-fashioned EBM. In any case, human biology is very complicated, and such predictors are probably decades off. Perhaps that’s why the co-opting of “personalized medicine” by woo-meisters bothers me so much. The last thing I want is for a promising area that in the long term has great potential to improve human health to become tainted with the reputation of quackery.

  6. ConspicuousCarl says:

    Hickey and Roberts said:
    To explain this, suppose we measured the foot size of every person in New York and calculated the mean value (total foot size/number of people). Using this information, the government proposes to give everyone a pair of average-sized shoes. Clearly, this would be unwise-

    This analogy actually shows exactly why they are wrong and why medical trials work.

    Except for insanely expensive tailoring, nobody measures a customer’s foot and then creates a custom shoe just for them. The seller measures the customer’s foot, and then gets one of only a handful of different sizes available. Far from being so personalized as to evade scientific study, each given shoe size is actually sold to millions of different people.

    And what sort of scientific study might we want to do, if the concept of a shoe had just been invented? Well, we might want to do Phase I trials to find out if shoes are safe to wear, and how big of a shoe can be worn safely. Is the wearer going to experience discomfort or injury if the shoes are too big? How big can the shoe get before the wearer risks twisted ankles and tripping? Does the shoe remain on the foot all day without falling off?

    Then we might want to do Phase II trials to find out how much benefit the wearer gains from wearing a shoe within the safe range of shoe size for their foot. If shoes in any size are harmless, but still beneficial, we can sell a nice big shoe which fits everyone. If having oversized shoes produces a risk which outweighs the benefit of going barefoot (and in fact, this is the case in real life), then we would have to sell shoes in multiple sizes so that a person can get a shoe within the safe size range for their foot, just as drugs with potential side effects and overdosing are available in different amounts. The foot’s tolerance for slightly imperfect shoe size will determine how many different shoe doses we have to manufacture for a Phase III trial and mass marketing.

    And then we can argue about whether or not commercial shoe production has produced enough size variety for everyone, and weigh the ups and downs of possibly having the government mandate more varied shoe sizes.

  7. BobbyG says:

    You never disappoint.

    Cited this on my REC blog just now.

  8. David Gorski says:

    Dammit, Carl. You did a much better job of deconstructing that analogy than I did. I might have to steal that. :-)

  9. kathy says:

    Dave Gorski wrote: “The last thing I want is for a promising area that in the long term has great potential to improve human health to become tainted with the reputation of quackery.”

    You said it. To quote an old advert for tyres, “This is where the rubber meets the road”. A negative perception is almost impossible to get rid of once it has taken hold of the public mind.

    Imho, that’s why many people are turning to woo … because they have been fed from childhood on anti-EBM propaganda. And like people that are used to junk food, they really don’t want to change their diet.

    Besides, woo tastes good so it must be good, not so? Taste = “you are unique”, “you are a rebel at heart”, “you are not taken in by Big Pharm like the rest of these sheep” and “your friends and relations will be ever so impressed”. These taste good, sure … so naturally they must be healthy and nourishing … sure?

  10. BKsea says:

    “they make a great show of pointing out that ‘statistically significant’ doesn’t necessarily mean ‘significant.’”

    But the problem is that they try to use this correct concept to essentially argue that “statistically insignificant” doesn’t necessarily mean “insignificant.” There they are on much shakier ground!
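    One way to sketch the difference (with invented response rates and a simple normal-approximation interval): a large negative trial doesn’t just say “not significant,” it also pins the effect inside a narrow confidence interval around zero, which is what justifies calling the effect insignificant in the everyday sense as well.

```python
import random
from math import sqrt

random.seed(2)

def two_arm_trial(n, p_control, p_treat):
    """Simulate one two-arm trial with n patients per arm; return the
    estimated risk difference and a 95% normal-approximation CI."""
    pc = sum(random.random() < p_control for _ in range(n)) / n
    pt = sum(random.random() < p_treat for _ in range(n)) / n
    diff = pt - pc
    se = sqrt(pc * (1 - pc) / n + pt * (1 - pt) / n)
    return diff, (diff - 1.96 * se, diff + 1.96 * se)

# A remedy with NO real effect, tested in a large trial: the result is
# "not statistically significant" AND the interval is tight around zero.
diff, (lo, hi) = two_arm_trial(5000, 0.30, 0.30)
print(f"estimated effect {diff:+.3f}, 95% CI ({lo:+.3f}, {hi:+.3f})")
```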

  11. Jan Willem Nienhuys says:

    It is indeed quite possible to come up with designs that take into account patient individualization. True, it’s more difficult, but it’s by no means impossible.

    Especially in homeopathy it’s very simple. In ordinary medicine it is quite tricky to devise a credible placebo, but a ‘placebo homeopathic remedy’ that has to be compared to some highly diluted ‘real’ homeopathic remedy is the easiest thing to make. And in fact quite a few such RCTs have been performed, mostly with results that were disastrous for homeopathy.

    Medical research with sick people is usually quite expensive, but here homeopathy has another advantage. Homeopathic individualized remedies are based upon the premise of the existence of so-called drug pictures. There are roughly 1000 homeopathic remedies (not counting different degrees of high dilution), each with roughly 1000 ‘symptoms’, namely subjective phenomena experienced by people who have taken highly diluted substances. (Think of coffee making you sleepless, so diluted coffee for sleeplessness.) These symptoms have been obtained by so-called provings. None of those remedy-symptom combinations that fill the homeopathic Materia Medica have been properly reproved.

    Individual homeopaths will challenge their opponents by asking them to take (for example) a few doses of Sulphur 200C and experience unbearable itch. But whenever they are asked to help organise a decent test with proper randomizing and blinding and a fair number of participants, a deafening silence ensues, or the homeopaths will say that they don’t have to prove themselves or that they are too busy with treating patients or that they suspect statistical trickery. This is strange, because a reproducible reproving in any form would bring the homeopaths fame and earn them many ‘skeptical’ rewards, i.e. over a million dollars for starters.

    Summarizing: individualized homeopathy can be tested quite easily, and that has been done. The roughly one million remedy-symptom combinations that form the basis of individualized homeopathic treatment can easily be tested, but homeopaths all over the world shun doing that.
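    To illustrate how cheaply such a reproving could be scored (the design and all numbers below are hypothetical): give blinded volunteers either the remedy or an indistinguishable placebo, ask whether they experienced the claimed symptom, and compute an exact binomial tail probability against pure guessing.

```python
from math import comb

def binomial_p_value(successes, n, p=0.5):
    """One-sided P(X >= successes) for X ~ Binomial(n, p): how surprising
    the provers' hit rate would be if they were purely guessing."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(successes, n + 1))

# Hypothetical reproving: 50 blinded volunteers each report whether they
# experienced the claimed symptom (say, itch after Sulphur 200C).
# If 32 of 50 correctly match their report to their bottle, how
# surprising is that under chance alone?
print(binomial_p_value(32, 50))
```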

  12. Scott says:

    In my mind, one of the more effective demonstrations that CAM isn’t really “individualized” is the existence of OTC homeopathic remedies. Supposedly the remedy (or remedies) needs to be carefully matched to the symptoms, but then they tell everyone with a cold to use Zicam.

  13. Zetetic says:

    BUT – Remember – Zicam actually has something detectable in it!

  14. Jan Willem Nienhuys says:

    the existence of OTC homeopathic remedies

    I don’t know about the USA, but the enduring popularity of homeopathy is the result of the efforts of Big Homeo, who not only market remedies like Natrum Muriaticum 30C, but also various nostrums with suggestive names containing mixtures of highly diluted remedies (something the founder of homeopathy would have disapproved of) and also herbal preparations like Arnica and Echinacea that contain large amounts of plant extracts (if an extract is prepared in the manner of a homeopathic mother tincture, the stuff technically counts as homeopathy). So homeopathy became in the popular perception synonymous with ‘herbal medicine’, mainly through the efforts of Big Homeo. Big Homeo not only spends a lot on advertising, but also puts a lot of effort into lobbying in parliaments, organising patients to write letters or sign petitions, and so on.

    Ordinarily, a homeopathic healer can do with a very small stock of remedies, because typically treatment would start with having the patient take one single 5 mg globule, and then wait a month to study the reactions. That is no big business for companies that want to sell in bulk.

    So Big Homeo makes homeopathy popular, and the physicians that do the ‘real’ homeopathy give it status. It’s like astrology, which is consumed in large amounts in the form of newspaper horoscopes. These newspaper horoscopes have revived astrology from a moribund kind of esoteric advice to a thriving business, relying on the status of ‘real’ astrologers who take a lot of time to interpret highly individualized horoscopes in one-on-one consultations.

  15. Werdna says:

    People like this are idiots…even this simple statement:

    “Using this information, the government proposes to give everyone a pair of average-sized shoes. Clearly, this would be unwise-the shoes would be either too big or too small for most people.”

    Even if we assume that the analogy is accurate (which of course it isn’t; medicine does not have to be as fitted as a shoe), it leaves out a few rather obvious bits of math. A remedy isn’t looking at treating everyone, just everyone who is sick. So the scale is way off. On top of that, these people have clearly never heard of measuring variance. If the standard deviation was tiny, then the mean shoe size would likely fit the vast majority of people (assuming the distribution is relatively normal).
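    To sketch that numerically: for a normally distributed trait, the fraction of the population lying within a tolerance t of the mean is erf(t/(σ√2)), which climbs toward 1 as the standard deviation σ shrinks (the shoe sizes and tolerance below are invented for illustration):

```python
from math import erf, sqrt

def fraction_within(tolerance, sigma):
    """Fraction of a normal population lying within `tolerance` of the
    population mean: erf(t / (sigma * sqrt(2)))."""
    return erf(tolerance / (sigma * sqrt(2)))

# Suppose (purely for illustration) a shoe "fits" when the foot is
# within half a size of it, and foot sizes are normally distributed.
print(fraction_within(0.5, sigma=1.5))  # wide spread: one average size fits few
print(fraction_within(0.5, sigma=0.1))  # tiny spread: one size fits nearly all
```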

  16. Werdna says:

    Sorry for the double post, but I also never understand how these people get around the obvious chicken-egg problem of individualized medicine. How do you individualize treatment for someone you’ve never met before? Right! You would rely on some generalized knowledge of the mechanisms of the body that has some variance across its functions (i.e. some people respond to drug A but others to drug B) and then use other information to narrow down the possibilities.

    The problem here is that this assumes that you have an accurate model of the mechanisms of the body. Yet, when we examine various kinds of CAM that’s exactly what they don’t have. We don’t have Qi running through our bodies, we don’t have subluxations, etc…

  17. Xplodyncow says:

    Hmm … it looks like Hickey and Roberts are missing a footnote:

    It is time for medical practitioners to discard EBM’s tarnished gold standard, reclaim their clinical autonomy, and provide individualized treatments to patients.*

    *All recommendations are category 3 unless otherwise noted.

  18. BobbyG says:

    “How do you individualize treatment for someone you’ve never met before?”

    Read “Medicine in Denial,” Lawrence and Lincoln Weed, ISBN 1456417061


    “…Closing the gap between medical practice and patient needs would transform how medicine is personally experienced by practitioners and patients alike. Practitioners could find their work to be less exhausting and more rewarding, emotionally and intellectually, than what they now undergo. The physician’s role could disaggregate into multiple roles, all freed from the impossible burdens of performance that physicians are now expected to bear. The expertise of nurses and other non-physician practitioners could deepen, and their roles could be elevated. All practitioners could follow time-honored standards of care that in the past have been honored more in the breach than the observance. All practitioners and patients could jointly use electronic information tools for matching data with medical knowledge, radically expanding their capacity to cope with complexity. All could use structured medical records, whose structure would itself bring order and transparency to the complex processes of care. Inputs by practitioners could thus be defined and subjected to constant feedback and improvement. A truly evidence-based medicine could develop, where evidence would be used to individualize care rather than standardize it. And a system of checks and balances could develop, where patients and practitioners would act on incentives for quality and economy far more effectively than before.” [pp 4-5]

    Not that I buy the whole thesis uncritically. It is, though, a detailed, documented read.

  19. JohnW says:

    I found it interesting that the following article showed up at MedlinePlus on 2 Jan 12:

    I like the following quote:

    “We found that there are some viable treatment options for neck pain,” said Gert Bronfort, vice president of research at the Wolfe-Harris Center for Clinical Studies at Northwestern Health Sciences University in Bloomington, Minn.

    “What we don’t really know yet is how to individualize these treatments for each particular patient. All are probably still viable treatment options, but what we don’t know is what each particular patient will need,” Bronfort said, adding that it’s possible a combination of treatments might be helpful, too.

    Of course, the Northwestern Health Sciences University appears to be a chiropractic/acupuncture/massage school that “Teaches and promotes natural approaches to health and health care.”

  20. micheleinmichigan says:

    Conspicuous Carl: “Well, we might want to do Phase I trials to find out if shoes are safe to wear, and how big of a shoe can be worn safely. Is the wearer going to experience discomfort or injury if the shoes are too big? How big can the shoe get before the wearer risks twisted ankles and tripping? Does the shoe remain on the foot all day without falling off?”

    Apparently, everyone else here can easily read this analogy and not once envision a horde of people flopping around in huge shoes.

    Damn EBM and this clown shoe epidemic!

  21. Harriet Hall says:


    I found that study interesting because although patients were “more satisfied” with manipulation, its early results were no better than home exercise, and the late results favored home exercise: at one year, 37% of the exercise group and only 27% of the manipulation group had experienced 100% pain relief. The real lessons are that neck pain is likely to go away eventually on its own, and that hands-on, TLC treatment and attention are preferred by patients but are less effective in the long term. And the authors point out that neck manipulation can be dangerous.

  22. Werdna says:

    @BobbyG: I actually don’t think I understand what you are saying. I’m saying that in order to “individualize” medicine, you need something to individualize – the only thing you can individualize is something that actually has variance of effect. For example, something that never helps anyone cannot be individualized any more than something that helps everyone all the time.

    How do you validate if some treatment has variance of effect?…by doing large trials. The very thing CAM people say aren’t necessary.

  23. BobbyG says:

    @Werdna -

    Read the book. I’m not talking about SCAM.

  24. ConspicuousCarl says:

    micheleinmichigan on 03 Jan 2012 at 9:58 pm

    Apparently, everyone else here can easily read this analogy and not once envision a horde of people flopping around in huge shoes.

    And now, imagine them playing tennis. :)

    David Gorski on 02 Jan 2012 at 10:09 pm
    I might have to steal that.

    Steal away, but I think I worded the third sentence in the larger paragraph backwards. I meant to express concern that the risk of oversized shoes might outweigh the RISK of going barefoot, not the benefit of going barefoot.

  25. Werdna says:

    @BobbyG – or instead you can give me the argument, assuming you understand it. If you don’t, that’s cool, but perhaps stop promoting it; otherwise, lay it on me.

    I saw the snippet or two on your site. It doesn’t say much; even the quote you gave above barely makes a useful statement.

  26. @ ConspicuousCarl, hehe, even better.

  27. BobbyG says:


    It’s a long book. I’ve only begun citing parts of it.

    “How do you individualize treatment for someone you’ve never met before?”

    I never claimed it to be simple. Nor is it instant. You have to start somewhere, though. Moreover, I don’t buy everything in the book uncritically, though I find their argument for truly “individualized” care pretty intriguing.

    “First, from the outset of care, relevant patient data must be chosen, and its implications determined, based on the best available medical knowledge, independent of the limited personal knowledge of the practitioners involved. Patient data must be systematically linked to medical knowledge in a combinatorial manner, before the exercise of clinical judgment, using information tools to elicit all possibilities relevant to the problem situation, while defining and documenting the information taken into account. Practitioners’ clinical judgments may add to, but must not subtract from, high standards of accuracy, completeness and objectivity for that information.” [pg x]

    “Without the necessary standards and tools, the matching process is fatally compromised. Physicians resort to a shortcut process of highly educated guesswork. They begin with guesses about diagnostic possibilities that might account for the chest pain. Sometimes very sophisticated, these initial guesses lead to further guesswork about what to check during the initial history, physical examination and laboratory tests for investigating whatever diagnostic possibilities come to mind. And then physicians make more guesses about what the data mean, which in turn shapes their judgments about what further data to collect. Varying from one physician to another, these highly educated guesses are not explicit—physicians do not carefully record their thinking or the information they take into account. Inputs to decision making are thus undefined.

    We use the term “guesses” because these key initial judgments are made on the fly, during the patient encounter, based on whatever enters the physician’s mind at the time. That mind may be highly informed and intelligent, but inevitably its judgments reflect limited personal knowledge and experience, and limited time for thought. Euphemistically termed “clinical judgment,” physician thought processes cause a fatal voltage drop in transmitting complex knowledge and applying it to patient data. The outcome is that the entire health care enterprise lacks a secure foundation.

    Equally insecure are the complex processes built on that foundation: decision making, execution, feedback and corrective action over time. Responsibility for all these processes falls on the mind of the physician. Here again the mind lacks external tools and accounting standards for managing clinical information.” [pp 2-3]

    “In maintaining health, in chronic disease, and in the events that lead to acute illness, the patients themselves know and control more of the relevant variables than anyone else. Patients live with the variables all the time. When the values of those variables change (when the situation changes), they can be the first to know.

    Physicians often know only a few of the variables and usually have direct control over none. Physicians and other medical personnel see a fragment of the total during a fragment of the time.” [pg 253]

    “Right! You would rely on some generalized knowledge of the mechanisms of the body that has some variance across its functions (i.e. some people respond to drug A but others to drug B) and then use other information to narrow down the possibilities.”

    Well, yeah, precisely.

    Sorry if I wasn’t clear.

    “give me the argument assuming you understand it”

    I’ll pass on that.

  28. Lytrigian says:

    Hickey and Roberts are wrong yet again.

    “Population statistics do not capture the information needed to provide a well-fitting pair of shoes, let alone to treat a complex and particular patient. As the ancient philosopher Epicurus explained, you need to consider all the data.”

    Of course population statistics capture the information needed to provide a well-fitting pair of shoes. How do they think shoe manufacturers know what size increments to make shoes in, and how many of each?

    @Jan Willem Nienhuys: I have suspected that the non-homeopathic homeopathic remedies are prepared as they are, with homeopathic mother tincture and then potentized to 1X or some other low dilution, to avoid the necessity for the Quack Miranda Warning and to allow the sellers to make positive health claims for their products. A manufacturer of herbal remedies cannot make any claims for efficacy or suitability if the FDA doesn’t permit it for those herbs, but call something “homeopathic” and, thanks to the loopholes written into US law specifically to accommodate homeopathy, they can claim whatever the HPUS lists for the materials they include. Unless I misunderstand the law, all that’s required is that the remedy be prepared according to homeopathic techniques.

  29. Jan Willem Nienhuys says:

    Lytrigian wrote:

    I have suspected that the non-homeopathic homeopathic remedies are prepared as they are, with homeopathic mother tincture and then potentized to 1X

    In Europe it’s a bit more complicated. EU directives regarding homeopathy (hm) distinguish two types of hm products: the highly diluted ones and the low dilutions. The high dilutions don’t need efficacy proofs, only proof of safety and guarantees that the manufacturer prepares as he claims. No indications were allowed for high dilution hm products. For the low dilutions (with indication) proofs of efficacy are required. In itself this is not a bad idea: if someone wants to buy Mercurius Solubilis 30C, then let him (or her), as long as it does no harm and as long as the packaging of the medicine only says ‘Mercurius Solubilis 30C’ without any claims about what it does.

    Admittedly this is a bit silly, comparable to allowing the sale of holy water, provided that a state committee has checked that it is sterile and that the priest who did the blessing is properly ordained.

    Now EU directives are not laws. They are instructions to the member states to make their own laws in accordance with the directives.
    But these directives had loopholes and backdoors. In the Netherlands the ‘traditional specialties’ loophole was used to permit hm products with low dilutions (one product is for example an Arnica ointment for bruises containing 30% mother tincture). The loophole was explicitly used as a favor to the hm Big Pharma lobby. So the Dutch committee for judging medicines also approved low dilution products with indication, provided the producers printed a disclaimer on the label (saying that there was no scientific proof for the efficacy). Initially the hm producers complied, but then they complained in court, saying that the law didn’t say anything about this disclaimer for low dilution products. The court accepted this. Next the state abolished the whole special treatment on basis of the loophole and gave the hm producers until January 2008 to hand in proper proofs of efficacy for all their low dilution products. Of course none of them did, and then one product was explicitly forbidden, as a test case. The producer immediately objected and started juridical proceedings, which they can drag on all the way to the European courts. (These hm producers spend exactly nil on decent efficacy research, but they shell out millions for legal procedures, also against critics.) Meanwhile all their products remain in the shops, including the famous Oscillococcinum that got a permit to mention the indication ‘flu’, probably because the committee was hoodwinked by the fraudulent research by Ferley et al. and later by Papp et al.

    This is roughly the Dutch situation. In other EU countries the situation is different: everywhere ‘highly individualized’ loopholes and backdoors are used. In addition there are new EU directives regarding hm products. If there is traditional hm literature mentioning diseases and symptoms for some hm product, then it is allowed to sell them with these indications. See directive 2004/27/EC, which requires that every EU member state establishes a committee consisting of scientists and quacks to determine what exactly the traditional hm literature says.

Comments are closed.