Sharon Begley, the Science Editor for Newsweek, wrote about translational research in the latest issue, and the tone of the essay reminded me of Begley’s previous piece on comparative-effectiveness research. As an MD/PhD student (just defended!), I am very interested in the process of communicating “from bench to bedside.” New to science as I may be, I found Begley’s arguments overly simplistic and short-sighted.

The essence of Begley’s argument is illustrated by her description of scientist David Teachey. Teachey was part of a group that reported a novel use of the immunosuppressive drug rapamycin in autoimmune lymphoproliferative syndrome (ALPS), which is characterized by a defect in the normal cell death (apoptosis) of immune cells. A 2006 article in the journal Blood described the ameliorative effects of rapamycin on the gross, cellular, and molecular pathology in a mouse model of the disease. In 2009, in the British Journal of Haematology (BJH), the group reported similar improvements in six human patients. Importantly, in Begley’s telling, Blood is “the top hematology journal” and BJH is “the 13th-ranked,” and publishing in a lower-ranked journal means the group will have more trouble getting NIH funding for future work. In Begley’s eyes, the translational research (“curing kids”) got shafted whereas the mechanistic studies (“the sexy science”) were honored.

When we say a journal is “highly ranked” we are generally referring to measures such as “impact factor,” a statistic reported by Thomson Scientific that equals the number of times a journal’s articles from the previous two years were cited in a given year, divided by the number of articles the journal published in those two years. In 2007, Blood had an impact factor of 10.9 and BJH had 4.5. Ideally, this statistic represents the journal’s contribution to the professional discourse that guides science. More citations of Blood articles than BJH articles presumably means that Blood articles are reporting more novel findings and provoking more further investigation than BJH articles. Therefore, it is reasonable for grant reviewers and tenure committees to consider a researcher’s record of publication in high-impact journals as a sign that she is contributing important work to the scientific community. Of course, ranking journals by such statistics is like ranking colleges by student SAT scores and faculty Nobel prizes; it is useful but should hardly be the only factor considered.
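
For concreteness, the standard two-year calculation, written out for 2007, works like this:

\[
\mathrm{IF}_{2007} = \frac{\text{citations received in 2007 by articles published in 2005 and 2006}}{\text{number of citable articles the journal published in 2005 and 2006}}
\]

So Blood’s 2007 figure of 10.9 means that, on average, each article it published in 2005–2006 was cited roughly eleven times during 2007.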

Begley is right in pointing out that the top priority of scientific journals is to publish exciting science—that is their reason for being—but she is wrong to suggest that clinical studies are ignored by the system. A series of experiments indicating a novel use for a drug and elucidating its possible mechanism (the first Teachey report) is exciting science. A small, uncontrolled trial (the second Teachey report) is less exciting, though certainly also important for progress towards a new standard of therapy. And, appropriately, these latter data were published in a peer-reviewed, well-regarded journal, albeit not quite as well-regarded as the journal that published the earlier results. Many journals exist, and they fill different niches. There are journals that strive to publish only dramatic or controversial findings, and there are other journals that will publish the incremental steps and independent replications that are less sexy but still crucial to the progress of science. Many of the so-called “top” journals (like Blood) publish primarily basic science as opposed to clinical trials, but there are plenty of good journals with a clinical focus (NEJM, impact factor 52.6; JAMA, 25.5). It is simplistic for Begley to suggest that a journal ranked a few steps lower in a proprietary, arbitrary ranking system is considered altogether inferior. This part of the system is not as broken as Begley implies with her sarcasm about “something as trivial as curing kids.” Kids would not be cured without the animal studies, either.

Begley is more on-target, in my opinion, when she worries that researchers’ jobs and funding may be too dependent on the ranks of the journals in which they publish. If translational research cannot get published in high-impact journals, and if NIH grants are awarded on the basis of publication records in high-impact journals, then the system indeed “slows the search for cures,” as Begley’s subtitle claims. (Although it is perhaps misleading to say “academia” is at fault, since academia will do whatever NIH says is required for grant money.) However, all I learned from Begley’s essay about this possible problem is that it may have affected the four researchers she describes. Some data to back up the anecdotes would be nice. Or Begley could have quoted some journal editors and grant reviewers on how they evaluate submissions, instead of relying on four one-sided stories to prove that they do not value translational research.

Most importantly to me, though, I worry that Begley’s essay promotes a misunderstanding of the process of science, which is slow and winding by necessity. Positive results, even dramatic ones, can be elusive. “In science,” my research mentor told me, “if it can’t be replicated, then it doesn’t exist.” A novel, unexpected result is a “finding” and should not be called a “discovery” until it is shown to be robust, preferably through verification by independent researchers. Exploring the mechanism of a proposed therapy is not idle intellectual play, and requiring grant applicants to justify research hypotheses is not meaningless red tape. Rigorous science is the duty of any investigator who would use taxpayer money to test a powerful drug on human subjects. Begley chafes at a slow pace that may be delaying the adoption of new cures for sick kids, but that slow pace may have saved many lives by weeding out the promising findings that didn’t hold up to closer scrutiny. Rushing novel therapies to patients based on interesting hypotheses and preliminary data may seem emotionally appealing (and policies exist to allow such experimental treatment for ill patients with no other hope), but think of the outcry if ALPS researchers accidentally killed children with a powerful immunosuppressant that had not been thoroughly studied. What a waste it would be if genetic counselors began advising couples right away about the new mutation Begley alludes to, only for rigorous studies to show later that it is a harmless allelic variant. There is always a tradeoff between avoiding false negatives and avoiding false positives. In the face of the humbling complexity of human biology and the grave consequences of medical interventions, scientists are right to be conservative about sending new therapies to the clinic.

Begley wants President Obama to pick a director for NIH who will move the institutes beyond “the same old incremental research that has produced too few cures,” but I would argue rather that we need a director who can expertly resist such short-sighted goading. Politicians and other non-scientists often want to shape the direction of research through funding or fiat; see Obama’s revival of the War on Cancer, the medical version of the War on Terror. The beauty and power of US science is that NIH grants are instead largely awarded to researchers who follow their data, wherever it leads. Basic scientists are allowed to be creative and curious even when their work has no immediately obvious practical application, because discovery is more like art than engineering. Translational research is valuable and should be supported, but we must not sacrifice the goose of basic science for the golden eggs of new cures.

Posted by Tim Kreider, a med student blogging about integrative medicine on campus.