Deadly Indeed

There are sources of information I am inclined to accept with minimal questioning. I do not have time to examine everything in excruciating detail, and like most people, I use intellectual shortcuts to get through the day. If it comes from Clinical Infectious Diseases or the NEJM, I am inclined to accept the conclusions without a great deal of analysis, especially for non-infectious disease articles. Infectious disease publications I have to read more closely; it's part of passing as an expert.

Outside of medicine, I am predisposed to accepting at face value many of the articles in Skeptic and Skeptical Inquirer. They are trusted sources. Some topics, like haunted house or Bigfoot investigations, I barely skim. After all these years, I doubt there will be any new insights into the subject. Other topics, depending on my interest, I may read more carefully.

I  often read longer articles  many times.  First a quick skim to see if it offers anything of interest.  If it does, then I may read it carefully.

This month's Skeptical Inquirer had an article called "Seven Deadly Medical Hypotheses" by Reynold Spector. Just seeing the title and knowing the magazine, I was primed to accept the content at face value. I enjoy a well-reasoned, thoughtful rant. I relish a clever diatribe, even if I do not agree with the topic. So I gave it a quick skim. I was discomfited. My first gut check was ick. But I was uncertain why. So I read it slowly and carefully, and still ick. But why?

There is a degree of self-absorption in being a blogger. I can write about what I want any way I want (I remain amazed at how much I can get away with). The process of writing about a topic helps me clarify in my own mind issues with articles.

The author of 7, as I shall refer to the article,  has over 200 published articles, is a former executive vice president in charge of drug development at Merck and oversaw the development of 15 drugs and vaccines.  I am nobody from nowhere who just takes care of infected patients for a living.  He wins the argument from authority; I am the E. coli evaluating the human.  Oh well,  this is more an exercise for me to enlighten myself; you are the innocent bystander.

Overall, the tone of 7? It reminded me of the Health Ranger. Really. Lots of dramatic statements, no qualifiers, no buts, no subtlety, no nuance. To me, what marks good medical writing is an understanding that there is far more grey than black and white, and that generally people are doing the best they can within numerous limitations. One of the many characteristics of the Health Ranger is hyperbole without nuance. The Health Ranger has a belief system and sees the medical-industrial complex through that lens; information is used to support a predetermined conclusion. Health Ranger is a bombastic style that is both self-assured and self-referential.

Let us see what 7 has to say. It begins

A chronic scandal plagues the medical and nutritional literature: much of what is published is erroneous, pseudoscientific, or worse.

I'll grant the first. I am an Ioannidis convert. The second seems hyperbole and exaggeration. Pseudoscientific? Like homeopathy, psi, and astrology? Sorry. The author is 17 words in and he has lost me. I already question his veracity and judgement. I read the literature. Hundreds of papers a month. I know the literature, and Sir, it is not pseudoscientific. Suboptimal, often, but not pseudoscientific. The third? What could be worse than pseudoscientific? Oh yeah. Wakefield's Lancet article. But fraud is a very rare exception in the over 20 million references on PubMed. The author's opening salvo strikes me as someone more interested in polemic than truth. If done with verve and panache, and above all wit, I like a good polemic. Pomposity with hyperbole, not so much, and calling the medical literature erroneous, pseudoscientific, or worse leans towards the latter.

Two major factors account for a large proportion of this problem. First, many medical and nutritional hypotheses are ill-conceived.

Are they? Over 20 million references in PubMed. A few, perhaps, were ill conceived before they were tested. Say, measles vaccine induced gastroenteritis causing autism? Not even that. If approached honestly and competently, it would be a long shot, but you never know unless you look. That is what a great deal of medical research is about: looking around to see if an etiology or intervention or medication will be effective. Most ideas, I would guess, go nowhere.

Second, the methods used are often epistemologically unsound.

Got me there. What is epistemologically unsound? Even after looking "epistemologically" up on the interwebs, I am uncertain what it means. I expect the comments will school me on the meaning of epistemologically unsound. I guess that is why I am a lowly clinician.

Moreover, the same unsound methods are often repeated multiple times on the same tired hypotheses with the same incorrect results.

Isn't that three major factors? Or is that the epistemological unsoundness I cannot understand? I shouldn't quibble about counting, but I feel a rising tide of ridicule and scorn, and I am not one to hold it back.

I am not even done with the first paragraph, and the author has epistemologically lost me. Maybe there is good reason to be unsettled by the article. And in the first five sentences, there are four references, all to works by the author, to justify the position. I tend to prefer external references in my literature; the hyperbolic self-validation is what I expect from the Health Ranger and his ilk. But again, who am I to question (1)?

… there is an epidemic of published studies that do not follow the principles of sound medical science- the principles demanded by the US Food and Drug Administration for the licensure and sale of medications.

Well, most studies are preliminary and exploratory.  The rigor demanded by the FDA is the final step in a long process starting with basic principles and, perhaps, epidemiology.  I can’t imagine we should jump to huge randomized, placebo controlled trials for every therapy and to answer every question.  Seems a wee bit excessive to me. Start small and build.  The downside is that there will be dead ends and false conclusions.  The upside is that in the end, a close approximation of Truth will be determined.

The resulting “findings” of such misleading or erroneous studies are often hyped by the news media on the day they are reported or published without any additional, careful analysis.

Hyped "findings"? Nothing like that in the first two paragraphs of this essay. Nope. Nothing to see there but a well-reasoned, careful, nuanced prologue for the body of the essay. "I" am always "mistrustful" of people who use "quotes" as a form of "sarcasm" when sarcasm is not used for good "effect" like "humor", because it otherwise comes across as "supercilious".

Now I am starting to understand my discomfiture. Still, that's just the first two paragraphs. The body will be better, right?

The author then proceeds to the background of how to do a good study: generate a plausible, testable hypothesis and test it.  He uses the Scandinavian Simvastatin Survival Study as an example of medicine done right (a Merck product if you care) and bemoans that not every study meets this high standard.

Too many published studies fail to adhere to these high scientific standards and lead to faulty, and even dangerous, conclusions.

Which is true and, to my mind, understandable, since there are not the resources to do perfect studies of every hypothesis. Not every car is a Lexus; not every restaurant has a Michelin three-star rating. You can't always get what you want (2). The issue to my mind is not that there are suboptimal studies; they are often used to search for hypotheses that can be tested in better trials. A large part of research is flailing about looking for something interesting to investigate in further detail. Not everyone has the resources to test everything using the "hypothetical/deductive method" to answer all our questions, as the FDA demands. And it is not always the preferred method of generating ideas to test. I don't need quotes to cast aspersions on the validity of information or generate guilt by association. I have learned a thing or two from reading the Health Ranger.

I wonder how many suboptimal studies it required to get to the point of the Scandinavian Simvastatin Survival Study?  The concepts to be tested did not appear from the void, fully formed.  The author does not, as will be seen, pay attention to the history and context of the evolution of medical ideas.

The author then proceeds to his 7 deadly hypotheses. Well, one deadly, six not so much. But guilt by association is a game played by the author of 7 as well.

1) the investigator does not need a specific hypothesis and/or can use an inadequate method to test hypothesis.

He uses the example of epidemiology generated by case-control and cohort studies (the kind of studies that led to the simvastatin study) and the effects of hormone replacement therapy. He points out that these epidemiologic studies, for a variety of reasons, can lead to erroneous conclusions. Fine. The other option? With no preliminary studies, jump straight to a huge trial? And sometimes epidemiology can lead to important results: that a certain water pump is the epicenter of cholera, or that chimney sweeps have more scrotal cancer. Or that lowering cholesterol is associated with a decrease in vascular deaths.

Epidemiology is part of a continuum of understanding and evolution of medical knowledge.  But strawmen are easier to burn than recognizing the stuttering, somewhat chaotic progress of medical knowledge.  If proving a point is more important than understanding complexity, this is how you argue.

He then proceeds to genome-wide association studies (GWAS), which have been a disappointment for elucidating genetic causes of heart disease and Alzheimer's. The author considers GWAS a failure. I suppose if you have a narrow perspective, yes, it has been a failure. So far. Huge amounts of information about the genome have been generated, and I am always a fan of knowledge for knowledge's sake. In the world of infectious diseases, there are single gene polymorphisms in the immune system that can increase or decrease a patient's risk for a variety of infections. Is it of clinical relevance yet? No. Is it interesting? Oh, yes. Will it lead to new treatments and diagnostic interventions in the future? Who knows. New ideas may fail but still lead to insights that may lead to better interventions. I would wonder what secondary advances in technology and understanding were accomplished as a result of the GWAS studies.

It is like complaining that the Apollo program only put 12 people on the moon so the program is a bust since we are not going to the moon for vacation.  Here is a dirty little secret from a mere clinician.  I learn far more from failure than I ever have from success.  “The most exciting phrase to hear in science, the one that heralds new discoveries, is not ‘Eureka!’ (I’ve found it!), but ‘That’s funny…’ -Isaac Asimov.”  If you are a clinician, it is not ‘That’s funny,’ but ‘Oh shit’ that really drives change and knowledge.

2) If women replace these missing hormones post menopausally with HRT, they will remain “youthful” and not suffer from heart disease, dementia, vaginal dryness, hot flashes, and fractured bones.

I remember the late '80s, the heyday of HRT, when I was in my internal medicine residency training and discussing the issues at length not only in clinic, but with my mother. I remember discussing the epidemiologic data and the worries of cancer. The author states that

…based on these (biased) studies, false claims were made that HRT protected against cardiovascular disease and dementia.

As if we knew it was false at the time. It was the best guess based on the data, and epidemiology can give insights that can be later confirmed  by better studies.  He also says

“the proponents…ignored the well-documented fact that estrogen is a carcinogen that causes breast cancer that can kill women” and that “HRT caused a 25% increase in breast cancer.”

I do not know where the author was practicing, but I remember talking with patients (I know, flawed memory) and my mother about the relative risks of cancer and fracture from HRT.  And 25%. Increase.  That’s bad.

What was the study? In "16 608 patients, there were more invasive breast cancers compared with placebo (385 cases [0.42% per year] vs 293 cases [0.34% per year])…and the estrogen group had higher mortality (25 deaths [0.03% per year] vs 12 deaths [0.01% per year])."

That is bad. Equally bad was the way the author presented the data, the same author who complains in the opening paragraphs about complex data being presented as looking "superficially adequate to the unsophisticated reader." But I know when someone is presenting information in a manipulative manner designed to blow smoke out a usually inaccessible area.
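The rhetorical distance between "a 25% increase" and the underlying rates is simple arithmetic. A minimal sketch (my calculation, not the author's), using the per-year rates quoted above:

```python
# Relative vs. absolute risk, using the per-year rates quoted above.
# Rates are from the quoted study; the arithmetic here is illustrative.
hrt_rate = 0.0042      # invasive breast cancers: 0.42% per year on HRT
placebo_rate = 0.0034  # 0.34% per year on placebo

relative_increase = (hrt_rate - placebo_rate) / placebo_rate
absolute_increase = hrt_rate - placebo_rate    # risk difference per year
per_extra_case = 1 / absolute_increase         # women per extra case per year

print(f"Relative increase: {relative_increase:.0%}")           # ~24%
print(f"Absolute increase: {absolute_increase:.2%} per year")  # 0.08% per year
print(f"About 1 extra case per {per_extra_case:.0f} women per year")
```

Both numbers describe the same data: a 24% relative increase sounds dire; one extra case per roughly 1,250 women per year sounds manageable. Which figure gets the headline is a presentation choice, which is rather the point.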

In a section worthy of the Vaccine Council or Dr. Mercola, it sounds like people deliberately ignored cancer risk to push estrogen to kill women.  Someone mention hype?  I know it is important to make a point, but those who were investigating HRT and prescribing it, as I did once upon a time, were doing it carefully and with knowledge that there could be risks.

Information does not exist in a vacuum.  When talking with my patients and Mom in the late 1980’s, I basically said, based on the odds, how do you want to live your life?

Lifetime risk is a useful way to estimate and compare the risk of various conditions. Hip fractures, Colles’ fractures, and coronary heart disease, and breast and endometrial cancers are important conditions in postmenopausal women that might be influenced by the use of hormone replacement therapy. We used population-based data to estimate a woman’s lifetime risk of suffering a hip, Colles’, or vertebral fracture and her risk of dying of coronary heart disease. A 50-year-old white woman has a 16% risk of suffering a hip fracture, a 15% risk of suffering a Colles’ fracture, and a 32% risk of suffering a vertebral fracture during her remaining lifetime. These risks exceed her risk of developing breast or endometrial cancer. She has a 31% risk of dying of coronary heart disease, which is about 10 times greater than her risk of dying of hip fractures or breast cancer. These lifetime risks provide a useful description of the comparative risks of conditions that might be influenced by postmenopausal hormone therapy.

That was the kind of information and conversations about HRT I was having with patients in my clinic as I completed my residency, the years the author was at Merck developing drugs.  Many patients were far more worried about the disability and pain of fractures than they were of breast cancer.

In continued hyperbole that is totally disconnected from what I remember, he calls HRT a “flagrant example of the harm done by straying from the principles of hypothetical/deductive approach and sound clinical science.”

Really? Did this guy ever take care of patients? Has he ever had to make decisions based on incomplete information? We are only into number two of seven and he has lost me with the hysteria. I wonder how he would suggest exploring the effects of waning estrogen on the health of women? Jump straight to a large trial? Do no preliminary work? Ignore any potential leads? What is the alternative to the incremental, and sometimes erroneous, results of medical understanding? How about fluoride and tooth decay? So many insights start with a guess and a little epidemiology. Sometimes it pans out, sometimes it doesn't. But you do not know unless you try.

3)  if small dosages of vitamins are good for humans, very large doses would be better for everyone.

He then notes the studies that show the hypothesis was wrong. But this was only known after the fact, after the studies; perhaps using vitamins like drugs would have had beneficial effects.

Then the odd summary: "megavitamin therapy tested in properly controlled trials either does nothing or is harmful (except in a few well defined exceptions)."

So it does nothing except when it does.  And how would we know the well defined exceptions unless we did the trials?

He goes from complaining about the science to complaining about the regulatory and commercial issues of megavitamins, changing arguments in midstream.  Is it the science or how the science is used?  Two different issues.

This is getting tedious, even for me. I will soldier on, although the re-re-re-reading of 7 is increasingly painful. The closer I read it, the greater the errors and manipulations; a Mandelbrot set of manipulative medical writing. Soon I will find the indefinite articles and pronouns suspect. I try to skim the Health Ranger for a few chuckles; that is not why I read SI. And when is their swimsuit issue? Oh. Wrong SI.

4) Screening tests beyond the standard medical examination are necessary for identifying disease and the risk of disease in apparently healthy, asymptomatic adults.

I will leave this issue to the more knowledgeable hands of Dr. Gorski. His argument seems to be based on the 20/20 vision of hindsight, which is apparently the primary argument in all seven cases. We thought screening would be effective, studies showed it wasn't, so the hypothesis was flawed and we should not have suggested screening or done the studies.

The author does not show in this, or other examples, why the ideas were wrong in the context of the time the ideas were first offered. It is only viewed through the all-powerful retrospectoscope that the author finds his deadly hypothesis. It is ever so easy to predict the past.

He also seems to argue that since our understanding of the ramifications of screening are not perfect, they are suspect, referencing himself for issues with PSA and mammograms (1).  The author argues in part that since our understanding is imperfect, it is a deadly hypothesis. I have always been comfortable with making decisions based on incomplete information, as that is the only kind of clinical information we ever have, save for the results of the occasional autopsy.  The perfect always being the enemy of the good.

He also complains about genetic screening. He notes that few people with high risk genes will develop disease and they can't do anything about it, so why bother? I wonder if the author has had much direct patient care. What most patients dislike is uncertainty about the why of their disease, and most prefer as much understanding and certainty about their health as they can gather. That is why they bother. And today's why bother may be tomorrow's critical insight. I have discussed how the show Connections made an impact on my view of the serendipity underlying advances. It may not be cost effective or useful currently, though the author does note that for some patients (breast cancer) it may have utility. Again, it is a deadly hypothesis except when it isn't. So much sound and fury.

But how do you know until after you have done all the studies and see what works and what doesn't? His argument still seems to be that since genetic testing has been shown to be of no utility in some patients, in the past we should not have done the work to show it is not useful. Except where it is. Sort of like going back in time to kill Hitler as a child because he was found to be evil in the future, even though you could not tell that the babe in the crib was going to be the source of Godwin's law. And far worse.

Circular argument much?

I do not get the impression the author is one for thinking outside the box. Usually new ideas lead nowhere, but again, you never know unless you try. Nothing ventured, nothing gained vs. nothing ventured, nothing lost. It is often not the results of studies that are the issue, but how they are portrayed in the media, as noted by the author, and, probably not intentionally, his entire article is a superb example of just that concept. Maybe 7 is really meta.

5) Manipulating one’s nutrition can prevent cancer.

As he says,  “In retrospect,  this hypothesis does not seem plausible.”

The whole crux of almost every one of his arguments. Repeat after me. In retrospect. In retrospect. In retrospect.  In retrospect everything is clear.  I have had MD after my name for 27 years, and I remember the uncertainty and interest in all his 7 mostly not so deadly hypotheses.  In the beginning, it was not so clear as he makes it out to be.  The past is easy to predict.

6)  Personalized medicine will greatly advance medical care.

His argument is the same: it hasn’t worked except where it has.

“Personalized medicine has only been shown to be cost effective in a few well defined situations.”

How did we find these well defined situations?  Doing a ton of studies that show benefit in some cases and none in others.

I think the solution to this problem is being able to see the future and know in advance which research ideas will bear fruit and which will be a bust.  Precognition is apparently the only solution. Miss Cleo may be available to help review research proposals, I understand that her readin’ is free.

7) cancer chemotherapy has been a major medical advance.

Of course, in some cases it has been extremely effective, but the war on cancer has not been what was promised. Again his argument is the same hindsight argument: when cancer therapy has been effective, it is great, and when it is not so good, we should not have done the work to show it wasn't effective. Again, I leave the details to Dr. Gorski should he choose to cover the topic.

And of course the author doesn’t have a dog in the fight (and there are those quotes, so commonly used by the dispassionate):

“When one dispassionately weighs the minimal prolongation of ‘good’ life in patients with metastatic cancer versus the very distressing side effects of chemotherapy with ‘targeted’ drugs, the case is close.”

I'm convinced. He is dispassionate. And Jenny isn't anti-vax, just pro-safe vaccine. Here is my hypothesis to be tested: anyone who argues they are dispassionate isn't. They are fooling themselves and trying to fool others with their alleged practice of arei'mnu. Me? I am never dispassionate; although sometimes I do not care, but there is a difference.

Some of his conclusions are reasonable: we need to do our science as best as we can.

The author argues that all the errors and expenditures of his 7 mostly not so deadly hypotheses could have "been avoided if the hypothetical/deductive method had been applied rigorously." I am not convinced, since most of his arguments are made after the fact. I would be far more impressed if, using only the hypothetical/deductive approach (no epidemiology, no early studies, no preliminary clinical data, no basic science), he would predict 7 hypotheses that warrant jumping straight to the large, randomized, placebo controlled clinical trials so beloved by the FDA. The Randi prize awaits.

We all need that god like perfection and prescience, unlike those

“guilty of perpetuating worthless practices include “scientists” who repeatedly employ flawed methods and then publish them, government agencies who fund such practices, editors of journals that publish pseudoscience, the USDA and NCI bodies that perpetuate unscientific regimens…”

My. God.  The Health Ranger was right.  The conspiracy has incorporated itself into every aspect of the Medical-Industrial  complex.  A different conspiracy than the one we get from the woo world, but  everyone is involved.

Putting scientists in quotes. A very Health Ranger thing to do.  I don’t suppose he is referring to the “scientists” at Merck who repeatedly employed flawed methods and then published them.

“Approximately 250 documents were relevant to our review. For the publication of clinical trials, documents were found describing Merck employees working either independently or in collaboration with medical publishing companies to prepare manuscripts and subsequently recruiting external, academically affiliated investigators to be authors. Recruited authors were frequently placed in the first and second positions of the authorship list. For the publication of scientific review papers, documents were found describing Merck marketing employees developing plans for manuscripts, contracting with medical publishing companies to ghostwrite manuscripts, and recruiting external, academically affiliated investigators to be authors. Recruited authors were commonly the sole author on the manuscript and offered honoraria for their participation…

This case-study review of industry documents demonstrates that clinical trial manuscripts related to rofecoxib were authored by sponsor employees but often attributed first authorship to academically affiliated investigators who did not always disclose industry financial support. Review manuscripts were often prepared by unacknowledged authors and subsequently attributed authorship to academically affiliated investigators who often did not disclose industry financial support.”

I see people doing the best they can with the tools at hand.  Mostly honest people (I say mostly not knowing what their IRS forms show), working within many limitations, to advance medical understanding.  They do not deserve quotes applied to their work or the title of pseudoscience.  Not everyone is able to achieve the peerless, perfect knowledge bestowed on  a Professor of Medicine and Merck Vice President.

We need "honest" corporations. Ironic from a former Merck executive; casting the first stone and all that. I do not need quotes to show my snotty superiority. We need better regulation of "unsafe and unproven products." Like Merck's Vioxx? Ohhh, snap. The Merck shots are cheap shots, I know. But they made me laugh, and above all I like to make me laugh. It is all about me.

Like the Health Ranger, I see someone with a bee in his bonnet, selectively and histrionically arguing in circles, hoping that if the same cognitive errors and circular reasoning are repeated they will be believed as fact. I am not enthusiastic about the conclusions and the arguments used, which are significantly more flawed than the research he rails against. It is not far in style and content from Natural News. Science, at least, is ultimately self-correcting. This article, probably not so much.

Of course, I am nobody from nowhere. Not a professor or scientist or a vice president.  I am a clinician and citizen who has to trust his sources of information.  I was raised to judge a man by the company he keeps.  When the NEJM published garbage on acupuncture, my trust in the Journal fell a notch.  The Lancet has always had a reputation of being flaky, it is part of the British charm and I have never held it against them; I just factor it in when reading a paper.  The Annals of Internal Medicine has been untrustworthy for years. Clinical Infectious Diseases remains unsullied.  Now the Skeptical Enquirer (sic) has slipped a bit as well.  7 was primarily deadly for my confidence in its editors. Oh well, at least I can still trust the material published by DC.


(1) Crislip et al. I said it here before, so it must be right.

(2) And if you try sometime you find/You get what you need.
