In my recent review of Peter Palmieri’s book Suffer the Children I said I would later try to cover some of the many other important issues he brings up. One of the themes in the book is the process of critical thinking and the various cognitive traps doctors fall into. I will address some of them here. This is not meant to be systematic or comprehensive, but rather a miscellany of things to think about. Some of these overlap.
Everything is attributed to a pet diagnosis. Palmieri gives the example of a colleague of his who thinks everything from septic shock to behavior disorders is due to low levels of HDL, which he treats with high doses of niacin. There is a tendency to widen the criteria so that any collection of symptoms can be seen as evidence of the condition. If the hole is big enough, pegs of any shape will fit through. Some doctors attribute everything to food allergies, depression, environmental sensitivities, hormone imbalances, or other favorite diagnoses. CAM is notorious for claiming to have found the one true cause of all disease (subluxations, an imbalance of qi, etc.).
When Dr. Novella recently wrote about plausibility in science-based medicine, one of our most assiduous commenters, Daedalus2u, added a very important point. The data are always right, but the explanations may be wrong. The idea of treating ulcers with antibiotics was not incompatible with any of the data about ulcers; it was only incompatible with the idea that ulcers were caused by too much acid. Even scientists tend to think on the level of the explanations rather than on the level of the data that led to those explanations.
A valuable new book elaborates on this concept: Diagnosis, Therapy and Evidence: Conundrums in Modern American Medicine, by medical historian Gerald N. Grob and sociologist Allan V. Horwitz. They point out that
many claims about the causes of disease, therapeutic practices, and even diagnoses are shaped by beliefs that are unscientific, unproven, or completely wrong.
Karl Popper said “Science must begin with myths and with the criticism of myths.” Popular psychology is a prolific source of myths. It has produced widely held beliefs that “everyone knows are true” but that are contradicted by psychological research. A new book does an excellent job of mythbusting: 50 Great Myths of Popular Psychology: Shattering Widespread Misconceptions about Human Behavior by Scott O. Lilienfeld, Steven Jay Lynn, John Ruscio, and the late, great skeptic Barry L. Beyerstein.
I read a lot of psychology and skeptical literature, and I thought I knew a lot about false beliefs in psychology, but I wasn’t as savvy as I thought. Some of these myths I knew were myths, and the book reinforced my convictions with new evidence I hadn’t seen; some I had questioned, and I was glad to see my skepticism vindicated; but some myths I had swallowed whole, and the book’s carefully presented evidence made me change my mind.
I recently wrote an article for a community newspaper attempting to explain to scientifically naive readers why testimonial “evidence” is unreliable; unfortunately, they decided not to print it. I considered using it here but thought it was too elementary for this audience. I have changed my mind and am offering it below (with apologies to the majority of our readers), because it seems a few of our readers still don’t “get” why we have to use rigorous science to evaluate claims. People can be fooled, folks. All people. That includes me, and it includes you. Richard Feynman said
The first principle is that you must not fool yourself–and you are the easiest person to fool.
Science is the only way to correct for our errors of perception and of attribution. It is the only way to make sure we are not fooling ourselves. Either Science-Based Medicine has not done a good job of explaining these vital facts, or some of our readers are unable or unwilling to understand our explanations.
Our commenters still frequently offer testimonials about how some CAM method “really worked for me.” They fail to understand that they have no basis for claiming that it “worked.” All they can really claim is that they observed an improvement following the treatment. That could indicate a real effect, or it could indicate an inaccurate observation, or it could indicate a post hoc ergo propter hoc error, the false assumption that temporal correlation implies causation. Such observations are only a starting point: we need to do science to find out what the observations mean.