I remain curious as to why people use, and continue to use, useless pseudo-medicines. I read the literature, but I find the papers unsatisfactory. They seem incomplete, and I suspect there are as many reasons people choose a pseudo-medicine as there are people who use them.
There are numerous surveys on what SCAMs people use. Designing and offering these surveys to patients with every possible medical condition is a growth industry: the old, the young, cancer patients, AIDS patients. All need to be asked which SCAM they use. It seems to be a ready way to get a quick entry on your CV, but which SCAM is used does not speak to why a particular SCAM is being used. Why try acupuncture, say, instead of reflexology?
There are numerous reasons suggested for why people partake of SCAMs as a general concept: dissatisfaction with standard medical care is a common one, but it is not always supported in the literature. Gullibility, ignorance, and stupidity are often credited, though none of these is particularly valid. Dr. Novella covered the topic in 2012. There are some data to suggest that which SCAM and why is a moving target, changing over time.
The problem with surveys is that the answer given depends in part on how the question is phrased. And the older I get, the more I suspect that free will and conscious, rational thought are an illusion at worst and rare phenomena at best. Sure, I can make trivial choices, deciding to hit a 7 or 6 iron on number 5 at Rose City, depending on wind and tee location. That is, I hope, a rational decision based on current conditions. And I can still snap hook the ball into water that should never seriously come into play.
But so much of what I think burbles up from some subconscious mental process. After I come to a decision or have an opinion, I devise an after-the-fact conscious justification for my position. I hear myself making up the reasons on the fly, half-wondering who made the decision in the first place.
This is most acutely true in ID and happens more and more as my career progresses. As I hear a case presentation from the resident the probable diagnosis pops into my head. Since I have to teach the resident why I think it is a particular diagnosis, I explain my reasoning, but I am often aware that it is all after-the-fact hand waving. I do not really know why I thought it was a liver abscess that is causing the fever.
Just where these ideas come from is unknown, and a little creepy since it is not the conscious me who is doing the work. It is part of the reason why, along with free will, I suspect consciousness is a minor and unimportant aspect of the human condition. So much seems to go on in my brain over which I have no control.
Most of the brain’s work is done while the brain’s owner is ostensibly thinking about something else, so sometimes you have to deliberately find something else to think and talk about.
~ Neal Stephenson, Cryptonomicon
Declared motivations for behavior are just one big post hoc ergo propter hoc fallacy. So when people offer explanations for why they participate in a particular SCAM, I am skeptical. I suspect it is all an after-the-fact rationalization. People’s motivations are black boxes and I suspect most do not have a good understanding as to why they use SCAM or do anything else.
Still, it doesn’t stop one from making broad generalizations. I find it interesting how various biases lead to erroneous conclusions about the way the world works. As I have mentioned before, our brains have evolved to survive reality, not to understand it.
In part what separates those who subscribe to the notion of science-based medicine and those who practice pseudo-medicine are explicit criteria for accepting evidence of therapeutic efficacy combined with an understanding of all the logical fallacies to which we are prone.
The most compelling evidence comes from randomized, placebo-controlled, double-blind studies. The least compelling is personal experience and testimonial. I do not consider making stuff up to be evidence, a standard not always shared. For most people the order is reversed, and so often it is the story of the friend’s cousin who had their disease cured with some pseudo-medicine, and what do you have to say to THAT, Mr. Smarty Pants Skeptic?
I take Feynman’s quote to heart:
The first principle is that you must not fool yourself – and you are the easiest person to fool.
Those who practice pseudo-medicine have to ignore it. And there are many ways in which we can be fooled into continuing a pseudo-medicine after, for whatever reason, we have decided to give it a try.
It has been suggested that illusions of causality (are) at the heart of pseudoscience including pseudo-medical interventions. After reviewing
how superstitious beliefs of all types are still happily alive and promoted in our Western societies
the authors state, or understate,
it is not easy to counteract the power and credibility of pseudoscience.
One cognitive bias is that people will credit themselves with control over events even though they are not responsible for those events. The classic examples are gamblers and athletes with their useless rituals. Interestingly, the more a person is involved in an activity, the stronger their illusion of control. Health and illness tend to provide an opportunity for deep involvement. Few issues are as important as personal health or offer more risk of being fooled.
…people (and, arguably, other animals as well) trying to obtain a desired outcome that occurs independently of their behavior tend to believe that they are causing the outcome.
And people often credit causality of two events even when there is no credible connection between the events. We love to find causality where none exists.
Medical conditions are particularly prone to cognitive errors. Diseases and their symptoms wax and wane spontaneously, and more severe illness tends to lessen over time (regression to the mean). People usually seek care when symptoms are at their worst and so will get better no matter the intervention, effective or not. However, the bias will be to credit the intervention for the resolution.
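Regression to the mean is easy to see with a quick numeric sketch. This is my own illustration with made-up numbers, not taken from any of the studies discussed: if symptom severity fluctuates randomly around a fixed baseline and patients seek care only on their worst days, the follow-up visit looks like improvement even when nothing at all was done.

```python
# Illustrative sketch of regression to the mean (made-up numbers, my own
# example): severity wanders around a baseline; care is sought on bad days.
import random

rng = random.Random(0)
baseline = 5.0  # hypothetical "true" average severity

def severity():
    """Severity on any given day: baseline plus random fluctuation."""
    return baseline + rng.gauss(0, 2)

# Severity on the day care was sought: only bad days (> 8) trigger a visit...
visits = [s for s in (severity() for _ in range(10000)) if s > 8]
# ...versus severity at follow-up, with no intervention whatsoever.
followups = [severity() for _ in visits]

avg = lambda xs: sum(xs) / len(xs)
# Worst-day average sits well above the follow-up average: apparent
# "improvement" from doing nothing.
print(avg(visits), avg(followups))
```

Whatever nostrum was swallowed between the visit and the follow-up gets the credit for that drop.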
I see this all the time in my practice. A patient with no good diagnosis for a treatable infection gets better on antibiotics. Is it that they got better and were on antibiotics (true-true and unrelated is the medical shorthand), or did they get better because they were on antibiotics? I tend towards the former, but it is very hard to convince doctors and patients that the latter is not true. I wonder how much inappropriate antibiotic prescribing is due to illusions of causality rather than to pleasing the patient and time constraints, which are the usual explanations.
The hypothesis, ‘the illusion of efficacy of an intervention is an important factor regarding why people believe worthless therapies are effective,’ was studied. In a computer simulation:
Participants were asked to imagine being a medical doctor who was using a new medicine, Batatrim (i.e., target cause), which might cure painful crises produced by a fictitious disease called Lindsay Syndrome. Then, participants were exposed to the records of 100 fictitious patients suffering from these crises, one per trial. In each trial, participants saw three panels. In the upper one, participants were told whether the patient had taken the medicine (cause present or absent). In the second panel, participants were asked whether they believed that the patient would feel better. Responses to this question were given by clicking on one of two buttons, ‘Yes’ or ‘No’. The purpose of this question was to keep participants’ attention. The third and lower panel of each trial appeared immediately after participants gave their response. It showed whether the fictitious patient was feeling better (i.e., effect present or absent). In Group High p(C), 80 out of the 100 patients had followed the treatment and 20 had not. In Group Low p(C), 20 patients had followed the treatment and 80 had not. In both cases, 80% of the patients who took the medicine, and 80% of those who did not, reported feeling better.
So no matter the intervention, patients improved 80% of the time.
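It is worth making the arithmetic of that design explicit. In the sketch below (my own Python, not the authors' code), the standard ΔP contingency index, P(improvement | medicine) minus P(improvement | no medicine), is zero in both groups; what differs is the raw count of "took the medicine and got better" coincidences, which is what feeds the illusion:

```python
# Back-of-the-envelope check of the study design (illustrative only, not
# the authors' code). Both groups face the same zero contingency, but the
# high-p(C) group sees far more "medicine AND improved" trials.

def contingency(n_treated, n_untreated, p_improve=0.8):
    """Return (delta-P, number of treated-and-improved patients)."""
    treated_improved = n_treated * p_improve      # improved on Batatrim
    untreated_improved = n_untreated * p_improve  # improved without it
    delta_p = treated_improved / n_treated - untreated_improved / n_untreated
    return delta_p, treated_improved

# Group High p(C): 80 of 100 patients took Batatrim.
dp_high, a_high = contingency(80, 20)
# Group Low p(C): only 20 of 100 took it.
dp_low, a_low = contingency(20, 80)

print(dp_high, dp_low)  # 0.0 0.0 -- Batatrim makes no difference either way
print(a_high, a_low)    # 64.0 16.0 -- four times the coincidences in Group High
```

Sixty-four chances versus sixteen to pair the pill with the recovery, with identical (zero) actual contingency.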
They were then asked:
To what extent do you think that Batatrim is the cause of the healings from the crises in the patients you have seen?’ (Causal question), and ‘To what extent do you think that Batatrim has been effective in healing the crises of the patients you have seen?’ (Effectiveness question)
Those who were in Group High were more likely to judge Batatrim effective, which is probably how people assess therapies in real life, even though the results were random.
Interestingly, they were less likely to rate Batatrim as causing the improvement, even in Group High. Thinking in terms of causality is not how people assess therapies in real life.
So the authors suggest that thinking in causal rather than in effectiveness terms may help decrease the illusion of efficacy. Understanding potential causality, prior plausibility, is difficult. Most people have no reason to understand the biomedical sciences in the depth that would allow them to understand why homeopathy or acupuncture is a crock. So when they take useless nostrums and improve, they probably think in terms of efficacy of the treatment rather than causality and credit the useless nostrum.
Not really a surprising result, except for the potential beneficial effect of getting people to think causally about an intervention. When a person knows they improved after, say, acupuncture, they always look at you (me) like you are dense when you suggest they did not get better from the acupuncture. Of course they got better from the intervention. Duh.
Quite possibly, the effectiveness question is the one that is most frequently used by lay people when inquiring about the efficacy of medical treatments, and pseudoscientific claims in general, but it should be noted that it is probably a misleading question. Stating that a treatment is not effective when all the people who we know that have followed it feel better makes no much sense. However, if we were asked whether the treatment is the real cause of the recovery of those patients, we would have to look for alternative potential causes (even hidden causes). This process forces participants to consider all the evidence available, something that they do not always do and that other questions may not require.
This study, even as a simulation, adds to the understanding not only of why people continue to use a given useless therapy, but perhaps also of one way to counter its use, although the authors do note that it is easier said than done:
Although a good knowledge of scientific methods is always desirable, one problem of such a strategy is that it requires, first, to convince people that science is something they should pursue (something quite difficult in pseudoscience circles of influence), and second, perhaps even more difficult, to convince people to use control conditions, to reduce the frequency with which they attempt to obtain the desired event, so that they can learn that it occurs equally often when they do nothing.
But wait. There’s more. The use of pseudo-medicines is more nuanced than illusion of causality.
I expect therapies to do something: not only to have a primary effect, but to have side effects as well. It turns out that doing nothing is an important factor in people thinking a useless therapy is effective.
In PLOS One was the recent article “The Lack of Side Effects of an Ineffective Treatment Facilitates the Development of a Belief in Its Effectiveness.”
While most would agree that people frequently resort to those treatments they believe are more effective, we propose that the reverse also holds: frequent use of a treatment, because of the lack of side effects or other considerations, fuels the belief that it is effective, even when it is not.
They use the ultimate in nothing, homeopathy, as an example, noting that use of the nostrum results in the causal illusion, which is increased when the process being treated has a high spontaneous resolution rate.
Basic research suggests that the more often a patient takes a completely useless medicine, the more likely she will develop a belief in its effectiveness. This is particularly true when the desired outcome (the healing) takes place frequently.
No one had looked at the consequences medication side effects have in determining the belief that a useless therapy has efficacy. Their
prediction is that, because a lack of side effects encourages the use of the treatment with high probability, it facilitates the illusory belief that the treatment is working.
They used a computer simulation, a variation of the prior study. Students were again asked to treat a dangerous disease called “Lindsay Syndrome” with a drug called Batatrim. I am starting to think we should call all useless drugs Batatrim, but it would be too obscure a meme. Participants were divided into two groups:
The high-cost group was informed that Batatrim would produce a severe and permanent skin rash as a side effect in every patient who takes it.
and a low-cost group whose patients had no complications from the Batatrim. Subjects were then shown the records of 50 consecutive patients with the disease and asked whether they would treat each one.
Whether or not the patient got better was determined randomly, but 70% of the time the fictitious patient improved.
After the treatment was decided and the computer randomly assigned the results, the subjects received feedback:
In the no-cost group, the outcome was displayed as a picture of a healthy face and the message, “The patient has recovered from the crisis”, whereas the outcome absence was displayed as a picture of an ill face (greenish, covered in sweat) identical to the one presented in the top panel of the computer screen, and the statement, “The patient has not recovered from the crisis.” … By contrast, the high-cost group was shown pictures and messages conveying not only the disease outcome, but also the side effects of Batatrim when it was used. Thus, whenever the medicine was given, the picture of the patient showed a skin rash, and the statement also included the words “…and has severe side effects.” Likewise, whenever the medicine was not given, the words “…and has no side effects” were added to the message.
At the end the subjects were asked to rate the perceived effectiveness of Batatrim. When there were no side effects reported subjects were much more likely to give the medication AND were much more likely to rate the drug as effective.
In this study, we have shown that knowing a medicine produces side effects prevented the overestimation of its effectiveness that is typically observed when the percentage of spontaneous remissions is high. We demonstrated that the mechanism by which this effect works rests on the lower frequency of the treatment usage exhibited by those participants who were aware of the medicine’s side effects.
To my mind, saying a therapy has no side effects is equal to saying that it has no effects. But in the world of pseudo-medicines, having no effects and no side effects work together to fool the patient, by way of the causal illusion, into believing there is efficacy.
Having no side effects promotes use of the medication and, for a process that has a high spontaneous resolution rate, reinforces the causal illusion. Since the therapy is harmless, you use it more often and see it ‘work’ more often.
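That mechanism can be sketched numerically. In this hypothetical simulation (my own, not the paper's code, with the paper's 70% spontaneous improvement rate), improvement is random and independent of treatment; the only thing that changes is how often the therapy gets used, which is exactly what a lack of side effects encourages:

```python
# Illustrative Monte Carlo sketch (my own, not the paper's code): the
# patient improves 70% of the time regardless of treatment, but the more
# freely the therapy is used, the more "used it and got better"
# coincidences accumulate.
import random

def coincidences(p_use, n_patients=50, p_improve=0.7, seed=42):
    """Count trials where the therapy was used AND the patient improved."""
    rng = random.Random(seed)
    count = 0
    for _ in range(n_patients):
        used = rng.random() < p_use          # harmless therapy -> high p_use
        improved = rng.random() < p_improve  # independent of the therapy
        if used and improved:
            count += 1
    return count

# A harmless nothing-therapy, used freely, versus a costly one used sparingly:
print(coincidences(p_use=0.9), coincidences(p_use=0.2))
```

Same zero effect, but the side-effect-free version piles up far more apparent successes, each one feeding the causal illusion.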
The solution? Perhaps it is actually education and awareness of how to think, at least in adolescents. For many adults it is probably too late.
We found that training a group of adolescents in the rational manner of making inferences about cause-outcome relationships decreased their illusory perceptions of causality in a subsequent non-contingent situation. Moreover, including a control condition in the positive contingency scenario allowed us to conclude that the lower causal ratings observed in the experimental group could not be solely explained by a general increase in suspicion in this group. Rather, the group specifically made more realistic judgments in the null contingency condition while preserving an accurate view of the positive contingency condition.
Whether this kind of educational interaction will result in long-term changes after the initial interaction is unknown, and I would be skeptical. I suspect that rational thought is not the default cognitive mode. This is certainly the case for me. I always have to will myself to think rationally about topics. The causal illusion is powerful, especially when combined with all the other cognitive biases simultaneously in action and the apparently natural resistance to changing one’s mind even as reality changes around you.
A foolish consistency is the hobgoblin of little minds, adored by little statesmen and philosophers and divines. With consistency a great soul has simply nothing to do. He may as well concern himself with his shadow on the wall. Speak what you think now in hard words, and to-morrow speak what to-morrow thinks in hard words again, though it contradict every thing you said to-day.
~ Ralph Waldo Emerson, Self-Reliance
Still, it gives one hope that, despite my lobbying to make Sisyphus the logo for the SfSBM, the concept at the heart of the science-based medicine blog, accurate information, is fundamentally the correct one.