
A couple of weeks ago, our resident skeptical medical student Tim Kreider wrote an excellent article about an op-ed in NEWSWEEK by science correspondent Sharon Begley, in which he pointed out many misconceptions she had regarding basic science versus translational research, journal impact factors, and how journals actually determine what they will publish. Basically, her thesis rested on little more than a few anecdotes by scientists who didn’t get funded or published in journals with as high an impact factor as they thought they deserved, with no data, science, or statistics to tell us whether the scientists featured in her article were in fact representative of the general situation. Begley’s article caught flak from others, including Mike the Mad Biologist and our very own Steve Novella. Naturally, as the resident cancer surgeon and researcher, I had thought of weighing in, but other issues interested me more at the time.

In retrospect, I rather regret not doing so, given that this issue crops up time and time again. In essence, it’s a variant of the lament that pops up in the press periodically, when science journalists look at survival rates for various cancers and ask why, after nearly 40 years, we haven’t yet won the war on cancer. Because of his youth, Tim probably hasn’t seen this issue crop up before, but, trust me, every couple of years or so it does. Begley’s article and the NYT article strike me as simply “Why are we losing the war on cancer?” 2009 edition.

Now the New York Times has given me an excuse both to revisit Begley’s article and to discuss yesterday’s front-page NYT article, Grant System Leads Cancer Researchers to Play It Safe. Basically, the two are variants of the same complaints I’ve heard time and time again. Now, don’t get me wrong. By no means am I saying that the current system the NIH uses to determine which scientists get funded is beyond criticism. Those who complain that the system is often too conservative have a point. The problem, all too often, however, is that the proposals for how to fix the problem are usually either never spelled out or themselves rest on dubious assumptions about the nature of cancer research.

Sharon Begley’s lament: Those nasty scientists hinder translational research

Earlier this month, Begley produced an article that, quite frankly, annoyed the crap out of me, called From Bench To Bedside: Academia slows the search for cures. It never ceases to amaze me how some pundits can take an enormously flawed idea as to why a problem exists and run right off the cliff with it.

Begley begins by pointing out that President Obama has not yet appointed a Director of the NIH. That’s a fair enough criticism. Personally, I’m not happy that there’s no permanent NIH Director yet. I’d like to think, as Begley hopes, that it’s because Obama realizes how important this pick is and wants to get it right. But that’s about all I agree with Begley on. After that introduction, she runs straight off the cliff:

NIH has its work cut out for it, for the forces within academic medicine that (inadvertently) conspire to impede research aimed at a clinical payoff show little sign of abating. One reason is the profit motive, which is supposed to induce pharma and biotech to invest in the decades-long process of discovering, developing and testing new compounds. It often does. But when a promising discovery has the profit potential of Pets.com, patients can lose out. A stark example is the work of Donald Stein, now at Emory University, who in the 1960s noticed that female rats recovered from head and brain injuries more quickly and completely than male rats. He hypothesized that the pregnancy hormone progesterone might be the reason. But progesterone is not easily patentable. Nature already owns the patent, as it were, so industry took a pass. “Pharma didn’t see a profit potential, so our only hope was to get NIH to fund the large-scale clinical trials,” says Stein. Unfortunately, he had little luck getting NIH support for his work (more on that later) until 2001, when he received $2.2 million for early human research, and in October a large trial testing progesterone on thousands of patients with brain injuries will be launched at 17 medical centers. For those of you keeping score at home, that would be 40 years after Stein made his serendipitous discovery.

Whenever I see a story like this, I always wonder exactly why it took so long to move an idea from concept to clinical trial to clinical use. Indeed, recently John Ioannidis (who is most famous for an article a couple of years ago entitled Why Most Published Research Findings Are False, which Steve blogged about at the time it was first published) published a study that showed that it takes between 14 and 44 years for an idea to make it “from bench to bedside.” In any case, when in doubt, do a PubMed search to see what the person describing his research has published. So I did just that for Dr. Stein. He has a healthy publication record (162 publications), as well as a number of publications from the late 1960s on brain injury in rodent models. Clearly, Dr. Stein has been a successful and well-funded researcher. However, when I searched his name and “progesterone,” I didn’t find a single publication until 2006. So I dug a little deeper, and the first paper I could find by him postulating a sex difference in healing after head injuries was published in 1987. In 1986, he coauthored a review in Nature on the pharmacological attenuation of brain injury after trauma and didn’t once mention progesterone. The point here is not to cast doubt on Dr. Stein’s contention that he first noticed this finding in the 1960s, but rather to point out that it’s hard for me not to wonder whether this particular line of research was a high priority in his career, because he doesn’t appear to have published on it for 20 years and didn’t really start doing a lot of work on it until the last few years, with a flurry of interesting publications since 2006.
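As an aside, for anyone who wants to replicate this sort of search programmatically rather than through the PubMed web interface, NCBI’s public E-utilities API makes it easy. Here’s a minimal sketch: the esearch endpoint and its db/term/retmode parameters are NCBI’s documented interface, but the exact query strings are just my guesses at reproducing the searches described above.

```python
# Minimal sketch: counting PubMed hits via NCBI's E-utilities "esearch" endpoint.
# The author-field query strings below are illustrative guesses, not the exact
# searches performed in the text.
import json
import urllib.parse
import urllib.request

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_count(term: str) -> int:
    """Return the number of PubMed records matching a search term."""
    params = urllib.parse.urlencode({"db": "pubmed", "term": term, "retmode": "json"})
    with urllib.request.urlopen(f"{ESEARCH}?{params}") as resp:
        result = json.load(resp)
    return int(result["esearchresult"]["count"])

print(pubmed_count("Stein DG[Author]"))                   # total publications
print(pubmed_count("Stein DG[Author] AND progesterone"))  # the intersection of interest
```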

The other point is, as I have said time and time again, that a scientist can’t just jump straight to human studies (unless, of course, one believes animal rights activists who deny that animal research contributes anything to medical advancements). There has to be solid preclinical evidence. In other words, there has to be a lot of cell culture, biochemical, and animal work that all support your hypothesis, and it can take a minimum of several years to develop that evidence. Medical ethics and the Declaration of Helsinki demand it. Moreover, the sort of preclinical work that would have been needed to lay the groundwork for clinical trials of progesterone as a neuroprotective agent in trauma is exactly the sort of research that the NIH has funded all these years. One wonders why Dr. Stein, who clearly has a well-funded lab, didn’t divert a bit of that funding earlier to do some pilot experiments he could then use to pursue NIH funding. Maybe he didn’t have enough extra funds lying around, or maybe he couldn’t relate the project to one of his existing projects sufficiently to justify doing so. In any case, at the risk of sounding too harsh, I will say that the whole big pharma angle struck me as very self-serving. Whatever the case was, I strongly suspect that the full story is far more complicated than the “big pharma won’t fund it because it can’t patent it” hyperbole that Begley is laying down (which sounds very much like the same sorts of excuses purveyors of “natural” therapies use to justify why they don’t do any research to show that their “cures” work).

But that’s not what irritated me the most about Begley’s article. This is:

The desire for academic advancement, perversely, can also impede bench-to-bedside research. “In order to get promoted, a scientist must publish in prestigious journals,” notes Bruce Bloom, president of Partnership for Cures, a philanthropy that supports research. “The incentive is to publish and secure grants instead of to create better treatments and cures.” And what do top journals want? “Fascinating new scientific knowledge, [not] mundane treatment discoveries,” he says. Case in point: in research supported by Partnership for Cures, scientists led by David Teachey of Children’s Hospital of Philadelphia discovered that rapamycin, an immune-suppressing drug, can vanquish the symptoms of a rare and sometimes fatal children’s disease called ALPS, which causes the body to attack its own blood cells. When Teachey developed a mouse model to test the treatment, he published it in the top hematology journal, Blood, in 2006.

A brief aside: Wow. Surgeon that I am, I didn’t know that Blood was such a top tier journal. The reason I’m amazed is that I published in Blood last year. If Blood will take one of my manuscripts, it can’t be that awesome, can it? (Cue false modesty.) Now, back to Begley:

But the 2009 discovery that rapamycin can cure kids with ALPS? In the 13th-ranked journal. The hard-core science was already known, so top journals weren’t interested in something as trivial as curing kids. “It would be nice if this sort of work were more valued in academia and top journals,” Teachey says. Berish Rubin of Fordham University couldn’t agree more. He discovered a treatment for a rare, often fatal genetic disease, familial dysautonomia. Given the choice of publishing in a top journal, which would have taken months, or in a lesser one immediately, he went with the latter. “Do I regret it?” Rubin asks. “Part of me does, because I’m used to publishing in more highly ranked journals, and it’s hurt me in getting NIH grants. But we had to weigh that against getting the information out and saving children’s lives.”

Let’s boil down Begley’s thesis here. The cool basic science stuff appeared in the top hematology journal, but the first report of the application of that basic science to treat patients appeared only in the 13th-ranked journal. Obviously journals value basic science over clinical science! Those bastards! They don’t care about curing children! To them curing kids is “trivial.”

Begley seems blissfully ignorant of two things: how journal rankings work and the fact that different scientific journals fill different niches. Rankings of scientific and medical journals are in general based on something called the “impact factor” (IF). The IF is often used as a proxy for the importance of a journal in its field, with higher numbers considered better. Although the citation database behind the IF is proprietary, the calculation itself is straightforward: it is the average number of citations received in a given year by the papers a journal published during the two preceding years. In general, higher-IF journals are viewed as more desirable to publish in. Thus, what makes the IF a proxy for a journal’s importance is the presumption that more citations of its articles equates to more interesting science and novel findings that more scientists cite. This may or may not be a valid assumption. Finally, one aspect of the IF is that journals designed for a more general readership tend to have higher IFs than subspecialty journals. In other words, Cell, Nature, and Science have high IFs. Within a field, Cancer Research or Clinical Cancer Research has a higher IF than Breast Cancer Research and Treatment.
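To make that calculation concrete, the standard two-year impact factor for a given year (2009 is just an example) is:

$$
\mathrm{IF}_{2009} \;=\; \frac{\text{citations received in 2009 by articles the journal published in 2007 and 2008}}{\text{citable items the journal published in 2007 and 2008}}
$$

So a journal whose 2007–2008 papers were cited 10,000 times in 2009, and which published 1,000 citable items over those two years, would have a 2009 IF of 10.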

Here’s where niches come in. Different journals have different niches. For example, the journal mentioned by Begley, Blood, is not primarily a clinical journal. True, it does publish some clinical trial results, but its main emphasis is clearly on basic and translational research. It’s simply silly to get all worked up because Blood didn’t publish a small pilot study with six patients and to conclude that journals don’t value clinical research. They do, just not the journals that are primarily basic and translational science journals. Publishing clinical trials is not their raison d’être. However, I think I know why Teachey’s second study was not viewed as being as interesting as his first study. A mouse model that provided proof of principle that rapamycin can treat a rare blood condition, complete with scientific mechanism, is indeed interesting to a wide range of researchers: basic, translational, and clinical. A small pilot study tends to be less so.

Let’s look at Teachey’s BJH article. It’s a nice study, but clearly a very preliminary pilot study. Such pilot studies do not generally make it into the top-tier journals, no matter how interesting the science is, because, well, they’re so preliminary and small (and thus could be wrong). Begley seems to think that not considering such studies to be top tier is akin to considering curing children of deadly diseases to be “trivial.” She also seems to think that not placing such a study in a top-tier journal will fatally delay the application of such cures. However, no treatment is going to be approved on the basis of such a small pilot study; at a minimum, a larger phase II study would still have to be done, and that is the study that would be likely to show up in the higher-tier journals, particularly if it were well designed and included some cool correlative science studies confirming the mechanism in humans. In any case, Begley doesn’t make a good case that Teachey’s study’s not being published in Blood has somehow delayed the fruits of his research from reaching sick children. Much work still needs to be done before Teachey’s discovery becomes common practice.

Begley is closer to the mark (albeit still exaggerating) when she discusses how the importance of IFs can distort how and where scientists decide to publish. In brief, scientists tend to want to publish in the highest-impact journals because articles in such journals are viewed as more meaningful than those in lesser journals. Where she goes off the mark is in her assumption that it is those horrible basic scientists, with their insistence on knowing molecular mechanisms, who keep clinical research in the ghetto of lower-tier journals and are somehow keeping teh curez from teh sick babiez!!!! (Sorry. I’ll try to restrain myself from using LOL Cat-speak.) For instance, after lionizing Berish Rubin for having chosen to publish in the lesser journal rather than keep teh curez from teh babiez (oops, I did it again), she castigates an unnamed scientist:

Not all scientists put career second. One researcher recently discovered a genetic mutation common in European Jews. He has enough to publish in a lower-tier journal but is holding out for a top one, which means identifying the physiological pathway by which the mutation leads to disease. Result: at least two more years before genetic counselors know about the mutation and can test would-be parents and fetuses for it.

This is so vague as to be useless. “A genetic mutation common in European Jews”? What mutation? What is the significance of this mutation in carriers? To what disease or defect does it predispose? Begley doesn’t say. I realize she’s probably doing so in order not to give a huge clue as to who this evil careerist scientist who doesn’t care about patients may be, but without that information I have no idea whether this discovery is so potentially important to patients that delaying its publication until he figures out how this mutation does its dirty work is unconscionable. Clearly Begley seems to think so, but there’s nowhere near enough information in her column for me to hazard even a wild guess. Validating a new genetic mutation as a risk factor to the point of developing a reliable screening test for it is an incredibly difficult task, requiring epidemiology and clinical trials to confirm the basic science. The process of FDA approval for a new genetic test is not trivial. In any case, all we’re left with is a bunch of self-serving anecdotes to support her dislike of basic science.

There’s a deeper problem, though, with Begley’s essay. Having both an MD and a PhD and doing translational research myself, I think I have some perspective on this. The problem is that Begley seems to buy into the Magic Bullet model of scientific progress, a.k.a. the “big breakthrough.” While it’s true that big breakthroughs do sometimes occur (think Gleevec, for instance), the vast majority of science and scientific medicine is incremental, each new advance being built upon prior advances. It’s also very frequently full of false starts, dead ends, and research that looks promising at first and then peters out. If a big breakthrough could be conjured by willpower and risky research, we’d have the cure for cancer by now. These disease processes are incredibly complex, and sometimes the research needed to understand and treat them is even more complex.

But it’s more than that. Begley may have a point when she mentions that clinical researchers are often stymied when their grants are reviewed by basic scientists, but I can tell you that this goes both ways. If you’re a basic scientist and want to get funded by the NIH, your project had better have a practical application to human disease. Just studying an interesting biochemical reaction or a fascinating gene because it is fascinating science is not enough. If you can’t show how it will result in progress towards a treatment for a disease, it is incredibly unlikely that your grant will be funded by the NIH.

Discoveries can’t be mandated or dictated, no matter how much Begley seems to think that just changing the emphasis of the NIH to more translational research or funding riskier projects would do it. Again, don’t get me wrong; there’s no doubt that the NIH has often been far too conservative in what grants it funds, and that risk aversion becomes worse the tighter its budget, and thus the paylines it can fund, become. However, the NIH is also the steward of taxpayer money. Fund too many risky projects, and it is likely that nothing will come of the vast majority of them. As in everything, there needs to be balance. Ideally there should be a portfolio of research that is balanced between the solid, but not radical, science that is likely to reliably lead to incremental progress and riskier projects with a higher potential payoff but a much higher risk of producing nothing.

This is a perfect point to segue to yesterday’s NYT article, which takes a different perspective on the same old complaint but still falls into many of the same pitfalls.

The New York Times: Are researchers “playing it too safe”?

The NYT article, written by Gina Kolata, comes at the question of how the NIH funds research, and of what research is done in our academic medical centers, from a different viewpoint than Begley’s. Kolata argues that the system is broken, but she sees the answer not in the sort of clinical research and comparative effectiveness research beloved by Begley, which in her ideal world would rapidly translate basic science findings into treatments and new tests. Rather, Kolata’s answer lies in another mantra that seems to be going around, namely “high impact” research, which, we’re told piously and solemnly, we are not doing nearly enough of because the NIH doesn’t encourage or fund it. Indeed, whenever I see an article entitled something like Grant System Leads Cancer Researchers to Play It Safe, I know exactly how it’s going to start, and this article follows the playbook almost exactly:

Among the recent research grants awarded by the National Cancer Institute is one for a study asking whether people who are especially responsive to good-tasting food have the most difficulty staying on a diet. Another study will assess a Web-based program that encourages families to choose more healthful foods.

Many other grants involve biological research unlikely to break new ground. For example, one project asks whether a laboratory discovery involving colon cancer also applies to breast cancer. But even if it does apply, there is no treatment yet that exploits it.

The cancer institute has spent $105 billion since President Richard M. Nixon declared war on the disease in 1971. The American Cancer Society, the largest private financer of cancer research, has spent about $3.4 billion on research grants since 1946.

Yet the fight against cancer is going slower than most had hoped, with only small changes in the death rate in the almost 40 years since it began.

Again, I’m not saying that it’s ridiculous to question why all that money and all these breakthroughs have not made a greater impact on cancer mortality than they have. On the other hand, there is a false premise in the whole question. Specifically, there is an unspoken assumption that “riskier” research is inherently more likely to result in “breakthroughs” than the more incremental model of building on previous results. The expectation is that all of that money should have produced “blockbusters” that make huge dents in cancer mortality. Unfortunately, science and biology are hard. They conspire to frustrate even the most ambitious wishes for cures, whether they come from scientists, politicians, or journalists reworking a tired old script yet another time. It also disturbs me to see the clearly derogatory tone directed at the study of diet and health. That tone seems at odds with the common complaint that the NIH doesn’t spend enough on research into diet, exercise, and prevention, research that by its very nature tends to consist of exactly these sorts of studies, which are highly unlikely to produce anything other than incremental improvements in our knowledge of how to use diet as a tool to prevent disease. Quite frankly, research of this nature isn’t viewed as being as “sexy” as the sort of research Kolata thinks we should be doing more of. After all, it doesn’t involve new genes, new proteins, or novel science, but rather deals with the application of what we have known for decades.

Let’s get to the crux of the article:

“These grants are not silly, but they are only likely to produce incremental progress,” said Dr. Robert C. Young, chancellor at Fox Chase Cancer Center in Philadelphia and chairman of the Board of Scientific Advisors, an independent group that makes recommendations to the cancer institute.

Again, note the contempt for such projects. “These grants are not silly”? Talk about damning with faint praise! But I digress. Let’s get back on track:

The institute’s reviewers choose such projects because, with too little money to finance most proposals, they are timid about taking chances on ones that might not succeed. The problem, Dr. Young and others say, is that projects that could make a major difference in cancer prevention and treatment are all too often crowded out because they are too uncertain. In fact, it has become lore among cancer researchers that some game-changing discoveries involved projects deemed too unlikely to succeed and were therefore denied federal grants, forcing researchers to struggle mightily to continue.

Here we go again. Everything old is new again. This is the very same complaint that pops up periodically. Again, I’m not saying that it doesn’t have merit, only that it tends to be made in the absence of any hard evidence that (1) innovative ideas don’t eventually get funded and (2) that funding “riskier” ideas will inevitably lead to more home runs (more on this later). Also, very conveniently, this sort of complaint always seems to pop up the most in lean times. Indeed, I remember back when I was a graduate student in the early 1990s (when Tim was in grade school). That is the last time the funding situation got as bad as it has been at the NIH for the last few years, and I remember reading articles very similar to this one. Inevitably, tight fiscal times appear to lead to a sort of funding conservatism for exactly the reason above: The NIH doesn’t want to risk precious grant funds on projects that are too risky.

As evidence of this problem, does Kolata dig into the grants system and try to put together data showing that conservative funding policies are shutting out game-changing research? What do you think? Of course not! Instead she relies, just as Begley did, on anecdotes from scientists who produced great work but were not funded initially. First up in the anecdote parade is Dennis Slamon. Personally, I admire Dr. Slamon’s work greatly. Through science and determination, he truly did change breast cancer therapy, and for the better. Specifically, he is the person who developed trastuzumab (trade name: Herceptin), a humanized monoclonal antibody against the HER-2/neu oncoprotein, which was discovered by Robert Weinberg’s group in 1984. HER-2 amplification in general portends a poorer prognosis and is generally a marker for more aggressive cancers with lower survival rates with conventional therapy. The reason is that HER-2 encodes a cell surface receptor that is a member of the epidermal growth factor receptor family. When it is amplified, as it is in approximately 22% of human breast cancers, its activity can result in increased cell proliferation, cell motility, tumor invasiveness, a higher likelihood of regional and distant metastases, accelerated angiogenesis, and reduced apoptosis. All of these are bad things. Consistent with these biological effects, the presence of HER-2 amplification correlates with nastier-appearing tumors on histology, decreased disease-free survival, increased metastasis, and decreased overall survival. Indeed, for overall survival and disease-free survival, the relative risk of death for women with HER-2-positive cancer versus HER-2-negative cancer is in the 1.8 to 2.7 range. In other words, women with HER-2-positive breast cancer are roughly twice as likely to recur and die of their disease.

It is also true that the addition of Herceptin, which blocks the HER-2 receptor, to standard chemotherapy for HER-2-positive cancers improves the prognosis of women with such cancers, although Kolata overstates how much it does so. For all its usefulness, Herceptin is not a magic bullet; it is not a cure. Indeed, the original phase III trial reported in the New England Journal of Medicine in 2001 found that in patients with HER-2-positive metastatic breast cancer, the addition of trastuzumab to conventional chemotherapy resulted in a longer time to disease progression (median, 7.4 versus 4.6 months; p < 0.001), a higher response rate (50% versus 32%; p < 0.001), a longer duration of response (median, 9.1 versus 6.1 months; p < 0.001), and an increase in median survival from 20.3 months to 25.1 months (p = 0.008). Consistent with this, the addition of Herceptin resulted in a 20% lower risk of death. These are all very good things, but not a cure.

In Kolata’s article, Slamon complains that he had difficulty getting the NIH to fund his studies and that it took a grant from Revlon for him to continue his research. His story is a bit too convenient, as a perusal of the Internet using my mad Google skillz found that the story is more complex than the one Kolata tells. Indeed, a made-for-TV movie about Slamon and Herceptin called Living Proof, based on Robert Bazell’s book HER-2: The Making of Herceptin, a Revolutionary Treatment for Breast Cancer, aired on Lifetime in 2008, and the book tells a more complex story than just the NIH’s not funding his research. For example, this passage from a review of Bazell’s book is illuminating:

Then, another stroke of dumb luck occurred in 1986. Ullrich accidentally met Dennis Slamon in a Denver airport. Slamon, a practicing oncologist at UCLA’s Jonsson Cancer Center, is a dogged and devoted cancer researcher. Throughout the next 2 years, Slamon and Ullrich pursued the hypothesis that HER-2 neu played a role in the growth of breast and ovarian cancers. By 1987, they published their results, which suggested that cancers overexpressing HER-2 are more likely to recur and spread more quickly.

But could their work be reproduced and their conclusion be confirmed? More delays ensued as their colleagues failed to reproduce their experiment. These delays were dumb bad luck. Two long years later, Slamon and Ullrich proved that the failure to reproduce their work was due to the “use of contaminated chemicals, faulty techniques, and idiotic mistakes by the laboratories conducting the experiments.”

By 1988, Slamon and Ullrich looked to Genentech for support to take their promising experiment to the next level of development. Support was not forthcoming. Genentech was no longer focused on cancer drug development. Their oncology staff had been disbanded following the unsuccessful interferon-alfa trials.

Herceptin’s development languished until more dumb luck occurred. In late 1989, the mother of a senior Genentech vice president was diagnosed with breast cancer. “Just like that, one man flipped the switch on HER-2” and convinced his colleagues that HER-2 was worth Genentech’s investment. Again, and not for the last time, an unexpected player would rescue Herceptin.

It’s just sloppy reporting not to mention this background and simply buy into the image of the “brave maverick doctor” whose work wasn’t appreciated by the NIH. Why did the NIH decide not to fund Slamon’s work? Without seeing the pink sheets (the summary statement that all applicants for a grant receive, containing reviewers’ evaluations of the application), it’s impossible to know for sure. Not having read Bazell’s book, however, I can still speculate a bit, at the risk of looking foolish. If, for example, Slamon and Ullrich (the latter of whom was a Genentech scientist) had submitted their HER-2 grants around the time other researchers were having difficulty duplicating their results, the reviewers on the NIH study section responsible for reviewing Slamon’s grants would almost certainly have known about it. Moreover, if a pharmaceutical company had been developing Herceptin, it is possible that reviewers would have been less enthusiastic about it, wondering quite reasonably why the NIH should fund the development of this drug if Genentech would no longer do so. As much as I love a story of a scientific maverick overcoming the hidebound system (why else would Judah Folkman be one of my scientific heroes?), I’m not sure that Slamon’s case represents as pure an example of such a scientist as this article would have you believe. (Maybe I should get a copy of Bazell’s book and read it.)

It’s also rather odd that Kolata would focus on Slamon first. It is true that Herceptin represents the first successful example of a molecularly targeted therapy for breast cancer. (Well, not quite. One could equally well argue that the anti-estrogen drug tamoxifen, which blocks the proliferative effect of estrogen by binding to its receptor, was the first.) However, only about 20% of women with breast cancer can be expected to benefit from Herceptin, and it is not without its toxicities, the worst of which is cardiac toxicity, which precludes its use in many women with heart disease. It is also an incredibly expensive drug, costing several thousand dollars a month, with the usual duration of treatment for breast cancer being one year. Indeed, when research suggested a benefit for extending the use of Herceptin to adjuvant therapy against early-stage tumors, in contrast to its initial use in more advanced tumors, some European governments balked at the cost.

Two more anecdotes follow, which are more interesting but less convincing, mainly because we don’t have the benefit of history to tell us that the investigator was definitely on to something, as we do with Dennis Slamon. One of the researchers whose anecdote is told didn’t even bother to apply to the NIH for funding because he figured his idea wouldn’t be fundable; whether he’s right or wrong, who can tell? Instead, I’ll concentrate on the other anecdote, that of Eileen K. Jaffe of Fox Chase Cancer Center:

Dr. Jaffe stumbled upon results that went against textbook explanations, suggesting that it might be possible to find an entirely new class of drugs that could disable proteins that fuel cancer cells. Now she wants to find chemicals that might be developed into such drugs.

But her grant proposal was rejected out of hand by the institutes of health, not even discussed by a review panel. She had no preliminary data showing that the idea was likely to work, something reviewers always want to see, and the idea was just too unprecedented.

And this is not the least bit unreasonable, if indeed her idea was that novel. Indeed, even Dr. Jaffe acknowledges this:

Dr. Jaffe is just conceiving her project; it is much too soon to know whether it will result in a revolutionary drug. And even if she does find potential new drugs, it is not clear that they will be effective. Most new ideas are difficult to prove, and most potential new drugs fail.

So Dr. Jaffe was not entirely surprised when her grant application to look for such cancer drugs was summarily rejected.

“They said I don’t have preliminary results,” she said. “Of course I don’t. I need the grant money to get them.”

Actually, that’s not quite true. It is, however, close to true for the gold standard of NIH grants, the R01. These grants generally provide around $150,000 to $250,000 a year for three to five years to fund a project, and it is usually about these grants that most of the complaints are made. That’s because reviewers do tend to be pretty conservative about such grants. The reason is that these grants provide a lot of money for a lot of years and can be renewed at the end of each grant period through a process known as competitive renewal, in which the investigator reports on the progress made during the previous grant period and proposes where he wants to go over the next five years. In other words, an R01 is a huge commitment, and, again not unreasonably, reviewers want to see a lot of preliminary data suggesting that the project is feasible and likely to produce results that improve our understanding of a disease and lead to strategies for therapy. It’s exactly the sort of grant mechanism designed to look at a question like the one described in the article:

In the study asking whether a molecular pathway that spurs the growth of colon cancer cells also encourages the growth of breast cancer cells, the principal investigator ultimately wants to find a safe drug to prevent breast cancer. She received a typical-size grant of a little more than $1 million for the five-year study.

The plan, said the investigator, Louise R. Howe, an associate research professor at Weill Cornell Medical College, is first to confirm her hypothesis about the pathway in breast cancer cells. But even if it is correct, the much harder research would lie ahead because no drugs exist to block the pathway, and even if they did, there are no assurances that they would be safe.
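As a sanity check on Kolata’s “typical-size grant of a little more than $1 million,” some quick arithmetic of my own using the R01 ranges quoted above:

$$
\$150{,}000/\text{yr} \times 3\ \text{yr} = \$450{,}000 \qquad\text{up to}\qquad \$250{,}000/\text{yr} \times 5\ \text{yr} = \$1{,}250{,}000
$$

A five-year award near the top of that range is, as advertised, a bit more than $1 million, which is exactly why reviewers treat an R01 as such a large commitment.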

I actually agree to some extent with Kolata’s thesis, namely that much of the grant funding at the NIH is too conservative and that the NIH should find a way to fund more innovative and “risky” grants. Indeed, I tend to approve of various initiatives to fund innovative research, such as “pioneer awards,” which are designed to fund research examining “ideas that have the potential for high impact but may be too novel, span too diverse a range of disciplines or be at a stage too early to fare well in the traditional peer review process,” and the so-called “transformative R01 grants” for “proposing exceptionally innovative, high risk, original and/or unconventional research with the potential to create or overturn fundamental paradigms.”

One difference between me and those calling for reform of the NIH grant process is that I openly admit that I don’t have any data to support my bias that the current system is too conservative and that a way needs to be found to fund more innovative (or at least different) research. What bothers me is that neither Begley, Kolata, nor any of the scientists interviewed by either of them seems able to present any hard data or science to support their bias either. They believe that funding more risky projects will result in better payoffs than sticking with the slow march of incremental science. They have anecdotes of scientists whose ideas were later found to be validated and potentially game-changing who couldn’t get NIH funding, but how often does this really happen? The vast majority of “wild” ideas are considered “wild” precisely because they are new and there is no good support for them. Once evidence accumulates for them, they are no longer considered quite so “wild.” More importantly, we are looking through what we doctors like to call the “retrospectoscope,” which, as we say, always provides 20-20 vision. We know today that the scientists who told these anecdotes of woe about the depredations of the NIH were indeed onto something. How many more scientists proposed ideas that seemed innovative at the time but ultimately went nowhere?

We don’t know.

We also don’t know exactly how the NIH would choose between so many “risky” projects. Sanjay Srivastava, writing about the NYT article, asks an excellent question:

The practical problem is that we would have to find some way to choose among high-risk studies. The problem everybody is pointing to is that in the current system, scientists have to present preliminary studies, stick to incremental variations on well-established paradigms, reassure grant panels that their proposal is going to pay off, etc. Suppose we move away from that… how would you choose amongst all the riskier proposals?

People like to point to historical breakthroughs that never would have been funded by a play-it-safe NCI. But it may be a mistake to believe those studies would have been funded by a take-a-risk NCI, because we have the benefit of hindsight and a great deal of forgetting. Before the research was carried out — i.e., at the time it would have been a grant proposal — every one of those would-be-breakthrough proposals would have looked just as promising as a dozen of their contemporaries that turned out to be dead-ends and are now lost to history. So it’s not at all clear that all of those breakthroughs would have been funded within a system that took bigger risks, because they would have been competing against an even larger pool of equally (un)promising high-risk ideas.

In other words, a lot of the current crop of criticisms of the way the NIH selects grants of the sort put forth by the NYT rest on a large measure of selective memory and confirmation bias. Science that is successful is remembered; proposals that go nowhere are lost to the mists of time. Scientists whose work was later validated after the NIH didn’t fund it are remembered and make good press copy. The many more scientists whose work wasn’t funded and went nowhere aren’t.

Moreover, it’s easy to make grand claims that all that nasty preliminary data isn’t necessary, that we should fund “plausible” studies that sound promising. However, it is the preliminary data supporting them that turn studies from speculative to plausible. Without that data, the possibilities are virtually endless, with little to distinguish truly plausible proposals from interesting but implausible ideas.

Of course, given that the conservatism of the NIH grant process always tends to be more of a complaint when funding is tight, I could envision a way of testing the hypothesis that funding more “risky” research results in more breakthroughs. It would be imperfect, but it might provide enough evidence to justify further exploration of this idea. Specifically, I’m referring to the period from fiscal year 1998 to 2003, during which the NIH budget, thanks to bipartisan support, doubled. The NIH could look at funding data from that period and ask some questions:

  1. Were more “risky” grants funded?
  2. What was the result of those grants?
  3. Were “riskier” grants more or less likely to result in new treatments that impacted the survival of cancer patients?

I realize that it would be difficult to come up with truly objective measurements to answer such questions, but even imperfect data on this score would be better than what we have now, which is in essence no data. Or, at least if there are such data, I couldn’t find them. Perhaps that’s because of the assumption that more “innovative” research must be needed because we haven’t impacted cancer survival rates as much as we think we should have during the last 37 years, which means we need more innovative research. Circular reasoning at its finest, at least without some hard data. Given that what happened in response to the doubling of the NIH budget was in essence a spending spree that attracted more applications than ever to the NIH and spurred more building of research facilities by universities, I rather suspect that such a study would not show what proponents of altering how the NIH decides what research to fund would want it to show.
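For concreteness, here is the rough shape such an analysis of the three questions above could take, sketched in Python. To be clear, no dataset like this exists ready-made; the file name, the columns (a reviewer-assigned innovation score, outcome flags), and the threshold for calling a grant “risky” are all invented for illustration.

```python
# Hypothetical sketch of the three questions above, run against an imagined
# extract of NCI funding data from the FY1998-2003 doubling period.
# Everything here (file, columns, thresholds) is invented for illustration.
import pandas as pd

grants = pd.read_csv("nci_grants_fy1998_2003.csv")  # hypothetical dataset
risky = grants["innovation_score"] >= 7             # assumed 1-9 reviewer score

# Q1: Were more "risky" grants funded as the budget doubled?
print(grants[risky].groupby("fiscal_year").size())

# Q2/Q3: How did funded "risky" vs. "safe" grants turn out, on average?
print(
    grants.assign(risky=risky)
    .groupby("risky")[["papers_produced", "led_to_new_treatment"]]
    .mean()
)
```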

The symptom or the disease?

Blogger Mike the Mad Biologist has asked on two occasions, first in response to the NEWSWEEK article and then in response to this NYT article: Are these critics mistaking the symptom for the disease? As Mike points out, the problem goes beyond funding levels. Rather, it is incentives. It is where the money goes and what is funded. On that score, the NIH is profoundly schizophrenic these days. The sorts of promising high-risk proposals being called for do not in general come from large, multi-institutional, collaborative groups. There are too many interests involved, and such groups tend to have too much at stake to take many risks. The very sort of researcher who will propose the risky projects all these reformers want is the small, independent researcher funded by an R01. Yet what has the NIH been shifting its funds to lately?

Big science. Large projects. Walter Boron put it well in a commentary in Physiology three years ago:

But there is still one more piece to the pay-line puzzle: the allocation of NIH dollars, sometimes mandated by Congress, and often following the advice of committees of independent investigators. The fraction of the NIH budget devoted to research by independent investigators (Table 1) steadily fell from 1998 to 2003. Conversely, spending for other programs including “big science”—the sequencing of genomes, clinical trials, and other costly and lengthy projects—steadily rose. Where does one draw the line between shifting funds to big science and yet maintaining a healthy portfolio of independent-investigator research? When the NIH is afloat in money (e.g., pay lines at or above the 25th percentile for research by independent investigators), such a shift may make sense. An example is the sequencing of the human genome, which has been invaluable. But what about sequencing the squid genome, which I personally would love to see? Before addressing this question, let us examine the value of investigator-initiated research and the dangers of interrupting it for even a couple of years.

The scientific engine that drives translational research—and that drives big science as well—is the independent investigator. It is the independent investigator who trains the next generation of researchers. Moreover, discoveries almost always come about when bright independent investigators stumble over unexpected findings and then sort them out. Such stumbling is unpredictable. The bigger the discovery, the more unpredictable. As unnerving as it may seem, the best way to invest in discovery is to fund the best independent investigators and turn them loose to stumble.

Of course, Dr. Boron doesn’t really have any data to support his thesis, either, but he does illustrate the conflict between “big science” and the innovation that NIH reformers are pushing for. Also, if anecdotal evidence counts, then let me place my own personal anecdote on the table. One of the two projects I’m working on now came about from a completely serendipitous discovery on the part of my collaborator, who pursued an unexpected observation. Based on that discovery, the NIH has funded two R01s, and the ASCO Foundation has funded me. Based on my anecdote, the NIH must fund risky and innovative research.

Yes, two can play at the anecdote game.

All snarkiness aside, though, one problem is that the public has a rather distorted view of how science works. Science generally does result in incremental progress. Sometimes, there are even periods of stagnation, during which, or so it seems, very little is discovered and few advances are made. Breakthroughs, such as the discovery of HER-2, are much less common than the gradual accumulation of knowledge and understanding that builds on what has been done before. Indeed, even a breakthrough like HER-2 was built on what came before, as Dennis Slamon’s work could not have occurred were it not for Robert Weinberg, who discovered HER-2 in the first place.

I think that the issue is better put by Wafik el-Deiry, the physician-scientist who discovered p21WAF1/CIP1, a very important cell cycle regulator whose expression p53 activates. What I consider to be cool about Dr. el-Deiry is that he’s on Twitter, where he Tweeted in response to the NYT article:

The major advances in cancer research have come from basic research without expectations of immediate impact on patients’ lives

And followed up with this Tweet:

Yes, I worry about change that may set us back by putting less value on basic science & more on hi risk pie in the sky

I agree and share Dr. el-Deiry’s concern. Oncogenes, tumor suppressor genes, HER-2, intracellular signaling molecules, all of these and more were discovered by basic scientists working because they simply wanted to know how cells work and what goes wrong to turn them cancerous. What I worry about is that, in the rush to fund more “innovative” and “translational” research, basic science will be left out. Why this worries me is that, without basic science, there can be no translational science. Translational research depends upon a constant flow of new observations and new discoveries in basic science.

More importantly, it can’t be predicted where those new discoveries will come from. Sometimes they come right out of left field, like the aforementioned project I’m working on now, which resulted from a serendipitous discovery by my collaborator and has the potential to result in a great new treatment for not just breast cancer but melanoma as well. It was not the sort of discovery that could have been foretold, and it may never have been noticed if it hadn’t been for a basic scientist following curiosity where it led. In any case, those who advocate for funding more “risky” research must have a lot of faith in current scientists who serve on NIH study sections to identify what proposals are truly innovative. I’m not sure I share that faith. In fact, I’m sure that I don’t. At the risk of belaboring the point, I will repeat that many breakthroughs could not have been identified in a research proposal beforehand. Moreover, it can’t be emphasized enough that translational research depends upon a steady stream of interesting science from the laboratories of basic scientists. Dry up that stream, and translational research will slow to a crawl.

That’s why, going back to the baseball-inspired title of this post, I would argue that no radical overhaul of how the NIH distributes grants is necessary. Rather, the portfolio of research funded by the NIH needs to be better balanced than it is now. Just like a good batting lineup, the NIH research portfolio needs the right mix of “hitters”: the high-average hitters who tend to hit mostly singles (“safer science” proposals); the clever utility hitters, who make things happen by putting the ball in play in various ways, such as bunts, bloopers, and line drives (the more eclectic proposals that can span multiple disciplines); and the sluggers who can hit the ball out of the park but tend to strike out a lot (high-risk, high-payoff proposals). Finding the right mix is difficult, and there will naturally be disagreement over what the “sweet spot” is. Indeed, what constitutes that “sweet spot” could very well change over time. Right now, it’s probably true that the NIH has a lineup that’s too full of bunters and singles hitters, although we shouldn’t forget that a lineup full of such hitters can still score a lot of runs. It’s just that a slugger or two who can hit the occasional grand slam can produce even more runs, if placed correctly in the lineup. The risk is that the NIH could end up with a lineup of home run hitters who strike out a lot without getting anyone on base.
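The lineup metaphor can even be made quantitative with a toy simulation. The sketch below models a 100-grant portfolio mixing “singles hitters” (high chance of a modest payoff) and “sluggers” (small chance of a large payoff); every probability and payoff is a number I made up purely to illustrate the tradeoff, not an estimate of real grant outcomes.

```python
# Toy Monte Carlo model of the funding "lineup": all probabilities and payoffs
# below are invented for illustration only.
import random

def mean_payoff(n_safe: int, n_risky: int, trials: int = 10_000) -> float:
    """Average total payoff (arbitrary units) of a portfolio over many cycles."""
    total = 0.0
    for _ in range(trials):
        # "Safe" projects: 60% chance each of a payoff of 1 unit.
        total += sum(1.0 for _ in range(n_safe) if random.random() < 0.60)
        # "Risky" projects: 5% chance each of a payoff of 10 units.
        total += sum(10.0 for _ in range(n_risky) if random.random() < 0.05)
    return total / trials

for n_risky in (0, 10, 25, 50):
    print(f"{100 - n_risky} safe / {n_risky} risky:",
          f"mean payoff {mean_payoff(100 - n_risky, n_risky):.1f}")
```

With these made-up numbers, the all-singles lineup actually wins on average (expected payoff 0.6 per safe grant versus 0.5 per risky one); nudge the slugger’s success rate from 5% to 7% and the ranking flips. That sensitivity to numbers nobody actually knows is exactly why finding the “sweet spot” is so hard.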

I fear that’s what we’ll end up with, though. I also have to wonder whether, if critics got their way and the NIH were to put a lot more money into funding “high risk studies,” there would be so many failures that the press would suddenly start saying, as Sanjay put it, “Grant System Funds Too Many Dead Ends.” And, no doubt, Sharon Begley would still be complaining that the NIH isn’t funding enough clinical research and is still devaluing it.

ADDENDUM:

An excellent discussion of the NYT article can be found here (and is well worth reading in its entirety). In it, Jim Hu did something I should have done, namely check the CRISP database in addition to PubMed. A couple of key points follow about the examples cited in the NYT article.

Regarding Dennis Slamon:

I hate to criticize Dennis Slamon, because the HER2 to Herceptin story is a great one. But the image one gets of his research program being saved by a friend from Revlon while the NCI ignored him isn’t consistent with what you get when you search for his grants in the CRISP database. Slamon got an NCI grant in 1984 to work on “oncogenes in physiologic and pathologic states”. Two NCI grants are cited in the 1987 Science paper showing HER-2 amplification in breast cancer (one was probably for the collaborator’s lab), and he’s been pretty continuously funded by NCI since then. So I’d love to know what this story applies to.

Me too. Regarding Eileen Jaffe:

Eileen Jaffe has studied the enzymology of porphobilinogen synthase under a 20-year multiply renewed grant from the National Institutes for Environmental and Health Sciences. Recently, she’s been working on an idea called morpheeins, which she’s patented as the basis for drug discovery. I have no idea what was in the grant, but what I see doesn’t scream “missed opportunity to cure cancer” at me.

Which was my thought, too, looking at her publication record. Finally, regarding Louise R. Howe’s studies on signaling and cancer:

The plan, said the investigator, Louise R. Howe, an associate research professor at Weill Cornell Medical College, is first to confirm her hypothesis about the pathway in breast cancer cells. But even if it is correct, the much harder research would lie ahead because no drugs exist to block the pathway, and even if they did, there are no assurances that they would be safe.

I have no idea what Kolata has against Dr. Howe’s project. The same could have been said about HER2 in 1987.

Or about any number of oncogenes and targeted therapies. Yikes! The same could be said about what I’m working on. Oh, no, that must mean I’m not sufficiently innovative for Kolata’s taste…


Posted by David Gorski