What does it take to become a doctor? Endurance and perseverance help. It is a long haul from college to practice. But the skill that is most beneficial is the ability to consume prodigious amounts of information, remember it, and recall it as needed, although I often relied on ‘B’ to get me through some of the exams.
Thinking, specifically critical thinking, is not high on the list of abilities that are needed to become or be a doctor. Day to day, doctors need to think clinically, not critically. Clinical thinking consists of synthesizing the history, the physical, and the diagnostic studies, then deciding upon a diagnosis and a treatment plan. It is not as simple as you might think. When medical students start their clinical rotations and you read their notes, you realize they have what amounts to an advanced degree from Google U. They know a huge amount of information, but have no idea how the information interrelates or how to apply that information to a specific clinical scenario. With time and experience, and it takes at least a decade, students become clinicians and master how to think clinically, but rarely how to think critically.
The volume of data combined with time constraints ensures that we need to rely on the medical hierarchy to help manage the information overload required to apply science and evidence-based therapy. There is just too much data for one tiny brain to consume. Other doctors rely on me for the diagnosis and treatment of odd infections. In turn, I rely on the published knowledge and experience of my colleagues who have devoted a career to one aspect of infectious diseases. There is little time for most doctors to read all the medical literature carefully, and usually little need. We have people and institutions we use as surrogates.
Not only is critical thinking usually not required to be a good physician, but medical practice can conspire to give physicians a false sense of their own abilities. Really. Some doctors have an inflated sense of self-worth. Who would have thought it? Spend time with some doctors and listen to them pontificate on politics or economics with the same (false) assurance they have in their true field of expertise, and you will run screaming from the room.
Mea culpa to the max. I completely forgot that today is my day to post on SBM, so I’m going to have to cheat a little. Here is a link to a recent article by yours truly that appeared on Virtual Mentor, an online ethics journal published by the AMA with major input from medical students. Note that I didn’t write the initial scenario; that was provided to me for my comments. The contents for the entire issue, titled “Complementary and Alternative Therapies—Medicine’s Response,” are here. Check out some of the other contributors (I was unaware of who they would be when I agreed to write my piece).
First Oz, now this. Too bad his appearance was so short:
At least they got Banachek to do a quick and dirty trial that helped to demonstrate that these bracelets do not work.
I saw a patient recently for parasites.
I get a sinking feeling when I see that diagnosis on the schedule, as it rarely means a real parasite. The great Pacific NW is mostly parasite-free, so either it is a traveler or someone with delusions of parasitism.
The latter comes in two forms: the classic form and Morgellons. Neither is likely to lead to a meaningful patient-doctor interaction, since it usually means conflict between my assessment of the problem and the patient’s assessment of the problem. There is rarely a middle ground upon which to meet. The most memorable case of delusions of parasitism I have seen was a patient who, while we talked in clinic, ate a raw garlic clove about every minute.
“Why the garlic?” I asked.
“To keep the parasites at bay,” he told me.
I asked him to describe the parasites. He told me they floated in the air, fell on his skin, and then burrowed in. Later he plucked them out of his nose.
At this point he took out a large bottle that rattled as he shook it.
“I keep them in here,” he said as he screwed off the lid and dumped about 3 cups’ worth of dried boogers on the exam table.
To my credit I neither screamed nor vomited, although for a year I could not eat garlic. It was during this time I was attacked by a vampire, and joined the ranks of the undead.
This essay is the latest in the series indexed at the bottom.* It follows several (nos. 10-14) that responded to a critique by statistician Stephen Simon, who had taken issue with our asserting an important distinction between Science-Based Medicine (SBM) and Evidence-Based Medicine (EBM). (Dr. Gorski also posted a response to Dr. Simon’s critique.) A quick-if-incomplete review can be found here.
One of Dr. Simon’s points was this:
I am as harshly critical of the hierarchy of evidence as anyone. I see this as something that will self-correct over time, and I see people within EBM working both formally and informally to replace the rigid hierarchy with something that places each research study in context. I’m staying with EBM because I believe that people who practice EBM thoughtfully do consider mechanisms carefully. That includes the Cochrane Collaboration.
To which I responded:
We don’t see much evidence that people at the highest levels of EBM, eg, Sackett’s Center for EBM or Cochrane, are “working both formally and informally to replace the rigid hierarchy with something that places each research study in context.”
Well, perhaps I shouldn’t have been so quick to quip—or perhaps that was exactly what the doctor ordered, as will become clear—because on March 5th, nearly four months after writing those words, I received this email from Karianne Hammerstrøm, the Trials Search Coordinator and Managing Editor for The Campbell Collaboration, which lists Cochrane as one of its partners and which, together with the Norwegian Knowledge Centre for the Health Services, is a source of systematic reviews:
Not only were his name and his titles of nobility forged, but parts of the teachings of the man who introduced acupuncture to Europe were also invented. Even today, treatments are provided based on his fantasies.
– Hanjo Lehmann [1]
Decades before President Nixon’s visit to communist China, and before the articles in the Western popular press on the use of acupuncture in surgery, a Frenchman by the name of George Soulié de Morant (1878-1955) published a series of colorful accounts of the use of acupuncture in early 20th-century China. His work led to the creation of a school of thought known as “French energetics,” which has become the theoretical foundation for many proponents of acupuncture in the West, including Joseph Helms, MD, the founder and former director of the American Academy of Medical Acupuncture (AAMA), and the founder of the acupuncture certification course for physicians.
But just as the medical community gradually learned that the reports of the use of acupuncture in surgery in communist China were inaccurate, exaggerated, or even fraudulent, we are now learning that the reports on the use and efficacy of acupuncture by Soulié de Morant were also fabricated.
According to a 2010 article published in Germany by Hanjo Lehmann in the Deutsches Ärzteblatt (a short version was published in Süddeutsche Zeitung), there is no real evidence that the Frenchman who is considered the father of Western acupuncture ever stuck a needle in anyone in China, and he probably never witnessed a needling.
The word “frequency” ranks right up there with “quantum” and “energy” as a pseudoscientific buzzword. It is increasingly prevalent in product advertisements and in CAM claims about human biofields and energy medicine. It doesn’t mean what they think it means.
I have written about Power Balance products, the wristbands and cards that allegedly improve sports performance through frequencies embedded in a hologram. They amount to nothing but a new version of the old rabbit’s foot carried for superstition, and their sales demonstrations fool people with simple musculoskeletal tricks. I addressed their ridiculous claims (including “We are a frequency”). I pointed out that
The definition of frequency is “the number of repetitions of a periodic process in a unit of time.” A frequency can’t exist in isolation. There has to be a periodic process, like a sound wave, a radio wave, a clock pendulum, or a train passing by at the rate of x boxcars per minute. The phrase “33⅓ per minute” is meaningless: you can’t have an rpm without an r. A periodic process can have a frequency, but an armadillo and a tomato can’t. Neither a periodic process nor a person can “be” a frequency.
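The point above can be made concrete: a frequency is never a free-standing number but always a count of some repeating event divided by a time interval. A minimal sketch (the function name and figures are illustrative, not from any product claim):

```python
def frequency(repetitions: float, interval: float) -> float:
    """Return repetitions per unit time for a periodic process.

    A frequency only exists relative to a periodic process: you must
    say what repeats (the 'r' in rpm) and over what time span.
    """
    if interval <= 0:
        raise ValueError("interval must be positive")
    return repetitions / interval

# A turntable completing 100 revolutions in 3 minutes:
rpm = frequency(100, 3)
print(round(rpm, 2))  # 33.33 revolutions per minute
```

Remove the "revolutions" and the "minutes" and the number 33.33 tells you nothing, which is exactly why "we are a frequency" is meaningless.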
Steven Salzberg, a friend of this blog and Director of the Center for Bioinformatics and Computational Biology at the University of Maryland, is on the editorial boards of three of the many journals published by BioMed Central (BMC), an important source of open-access, peer-reviewed biomedical reports. He is disturbed by the presence of two other journals under the BMC umbrella: Chinese Medicine and BMC Complementary and Alternative Medicine. A couple of days ago, on his Forbes science blog, Dr. Salzberg explained why. Here are some excerpts:
The Chinese Medicine journal promotes, according to its own mission statement, studies of “acupuncture, Tui-na, Qi-qong, Tai Chi Quan, energy research,” and other nonsense. Tui na, for example, supposedly “affects the flow of energy by holding and pressing the body at acupressure points.”
Right. What is this doing in a scientific journal?… I support BMC…But their corporate leaders seem to care more about expanding their stable than about maintaining the integrity of science. Chinese Medicine simply does not belong in the company of respectable scientific journals.
Forming a scientific journal whose goal is to validate antiquated, unproven superstitions is simply not science, whatever the editors of Chinese Medicine claim.
BMC should be embarrassed to be publishing journals that promote anti-scientific theories and otherwise muddy the literature. By supporting these journals, they undermine the credibility of many excellent BMC journals. They should cut these journals loose.
NB: This is a partial posting; I was up all night ‘on-call’ and too tired to continue. I’ll post the rest of the essay later…
This is the fourth and final part of a series-within-a-series* inspired by statistician Steve Simon. Professor Simon had challenged the view, held by several bloggers here at SBM, that Evidence-Based Medicine (EBM) has been mostly inadequate to the task of reaching definitive conclusions about highly implausible medical claims. In Part I, I reiterated a fundamental problem with EBM, reflected in its Levels of Evidence scheme: although it correctly recognizes basic science and other pre-clinical evidence as insufficient bases for introducing novel treatments into practice, it fails to acknowledge that they are necessary bases. I explained the difference between “plausibility” and “knowing the mechanism.”
I showed, with several examples, that in the EBM lexicon the word “evidence” refers almost exclusively to the results of clinical trials: thus, when faced with equivocal or no clinical trials of some highly implausible claim, EBM practitioners typically declare that there is “not enough evidence” to either accept or reject the claim, and call for more trials—although in many cases there is abundant evidence, other than clinical trials, that conclusively refutes the claim. I rejected Prof. Simon’s assertion that we at SBM want to “give (EBM) a new label,” making the point that we only want it to live up to its current label by considering all the evidence. I doubted Prof. Simon’s contention that “people within EBM (are) working both formally and informally to replace the rigid hierarchy with something that places each research study in context.”
In Part II I responded to the widely held assertion, also held by Prof. Simon, that there is “societal value in testing (highly implausible) therapies that are in wide use.” I made it clear that I don’t oppose simple tests of basic claims, such as the Emily Rosa experiment, but I noted that EBM reviewers, including those employed by the Cochrane Collaboration, typically ignore such tests. I wrote that I oppose large efficacy trials and public funding of such trials. I argued that the popularity gambit has resulted in human subjects being exposed to dangerous and unethical trials, and I quoted language from ethics treatises specifically contradicting the assertion that popularity justifies such trials. Finally, I showed that the alleged popularity of most “CAM” methods—as irrelevant as it may be to the question of human studies ethics—has been greatly exaggerated.
This is the third post in this series*; please see Part II for a review. Part II offered several arguments against the assertion that it is a good idea to perform efficacy trials of medical claims that have been refuted by basic science or by other, pre-trial evidence. This post will add to those arguments, continuing to identify the inadequacies of the tools of Evidence-Based Medicine (EBM) as applied to such claims.
Prof. Simon Replies
Prior to the posting of Part II, statistician Steve Simon, whose views had been the impetus for this series, posted another article on his blog, responding to Part I of this series. He agreed with some of what both Dr. Gorski and I had written:
The blog post by Dr. Atwood points out a critical distinction between “biologically implausible” and “no known mechanism of action” and I must concede this point. There are certain therapies in CAM that take the claim of biological plausibility to an extreme. It’s not as if those therapies are just implausible. It is that those therapies must posit a mechanism that “would necessarily violate scientific principles that rest on far more solid ground than any number of equivocal, bias-and-error-prone clinical trials could hope to overturn.” Examples of such therapies are homeopathy, energy medicine, chiropractic subluxations, craniosacral rhythms, and coffee enemas.
The Science Based Medicine site would argue that randomized trials for these therapies are never justified. And it bothers Dr. Atwood when a systematic review from the Cochrane Collaboration states that no conclusions can be drawn about homeopathy as a treatment for asthma because of a lack of evidence from well conducted clinical trials. There’s plenty of evidence from basic physics and chemistry that can allow you to draw strong conclusions about whether homeopathy is an effective treatment for asthma. So the Cochrane Collaboration is ignoring this evidence, and worse still, is implicitly (and sometimes explicitly) calling for more research in this area.
On the other hand:
There are a host of issues worth discussing here, but let me limit myself for now to one very basic issue. Is any research justified for a therapy like homeopathy when basic physics and chemistry will provide more than enough evidence by itself to suggest that such research is futile? Worse still, the randomized trial is subject to numerous biases that can lead to erroneous conclusions.
I disagree for a variety of reasons.