Jul 23 2012
“Targeted therapy.” It’s the holy grail of cancer research these days. If you listen to its most vocal proponents, it’s the path towards “personalized medicine” that improves survival with much lower toxicity. With the advent of the revolution in genomics that has transformed cancer research over the last decade, including the petabytes of sequence and gene expression data that pour out of universities and research institutes, the promise of one day being able to sequence a patient’s tumor, determine the specific derangements in genome and gene expression that drive its uncontrolled proliferation, and find drugs to target these abnormalities seems more tantalizingly close than ever. Indeed, it seems so close that even dubious practitioners, such as Stanislaw Burzynski, have jumped on the bandwagon, co-opting the terms used by real oncologists and real cancer researchers to sell “personalized gene-targeted cancer therapy,” which in their hands is really no more than a parody of efforts to synthesize the enormous quantity of genomic data each patient’s tumor possesses and figure out how best to take advantage of it, a “personalized genomic therapy for dummies,” if you will.
That’s not to say that there aren’t roadblocks to realizing this vision. The problems to be overcome are substantial, and I’ve discussed them multiple times before. For example, just a couple of weeks ago I discussed an example of just what it takes to apply these new genomic techniques to an individual patient. The resources required are staggering, and, more problematic, there often isn’t any single “magic bullet” molecular pathway identified that can be targeted with existing drugs. The case I discussed was a fortunate man indeed in that such a pathway was identified, but most tumors are driven by many derangements in growth control, metabolism, migration, and the other hallmarks of malignancy described by Robert Weinberg. Worse, in many cases we don’t even have drugs that can attack many of the abnormalities that drive cancer progression. Then there’s the issue of tumor heterogeneity, which comes about because cancer is as good an example as I can think of of a disease in which evolution due to natural selection results in incredible differences between the cancer cells in one part of the tumor and those in other parts of the tumor or in the tumor metastases. A “targeted” therapy that targets the genetic abnormalities in one part of the cancer might well fail to target the genetic abnormalities driving another part of the tumor.
These, and many other reasons, are why we haven’t “cured cancer” yet.
Of course, the main promise of the new targeted therapies, besides more precisely targeting the actual genetic abnormalities that drive the growth of specific tumors, is decreased toxicity. Whereas cytotoxic chemotherapy is like a blunderbuss that pummels and poisons cells, both normal and cancerous (it just poisons cancer cells more, which is why it works), “targeted” agents are frequently portrayed as “smart bombs” that attack only the cancer cells and produce little, if any, collateral damage. But is that true? If a new study just published online in the Journal of Clinical Oncology by a group at the Princess Margaret Hospital and University of Toronto is any indication, the answer is “probably not.” The study’s title pretty much sums up the findings: The Price We Pay for Progress: A Meta-Analysis of Harms of Newly Approved Anticancer Drugs. However, as you will see, the way the results of the study (Niraula et al) are being reported is not exactly in line with what the study found. In other words, this study finds a mixture of results, part of which is a bit unexpected, most of which is not.
A bit of background is in order, and the introduction of the paper is a good place to start:
Although virtually all new drugs are tested in early-phase clinical trials designed to evaluate toxicity and tolerance, these trials are small and can detect only common toxicities. Most large RCTs are primarily not designed to detect differences in quality of life or toxicity of treatment between the study arms and are usually inadequately powered to do so. Furthermore, some anticancer drugs are approved on the basis of smaller, nonrandomized, unblinded clinical trials in which detection of toxicity can be severely impaired. Rare but potentially serious toxicities may not be detected in RCTs owing to a selection of patients with high performance status and minimal comorbidity or because of relatively short follow-up. The quality of reporting toxicity in RCTs has been criticized following evidence that some toxicities may be detected but inadequately reported. Increased treatment-related mortality associated with bevacizumab, increased cardiovascular morbidity of aromatase inhibitors, and increased risk of cardiopulmonary arrest with cetuximab were all either undetected or unreported at the time of drug approval but were described postmarketing. Therefore, efforts to monitor for toxicities after completing pivotal phase III RCTs are essential. The US Food and Drug Administration has encouraged a systematic approach to the collection and reporting of toxicity. Until such measures are implemented, there will continue to be concerns about failure of detection and reporting of serious and fatal toxicities.
In essence, the approval process for cancer drugs suffers from the same limitations as the approval process for all drugs does. The phase III studies used to support approval, although large, are not large enough to identify uncommon complications and toxicities or to do a fine analysis of differences in quality of life. Unfortunately, these sorts of issues are often not identified until approved drugs go into wide usage. Postmarketing surveillance studies are useful for this purpose, but it’s a bit hit or miss whether such studies are done. Because anticancer agents, even targeted therapies, are directed at critical signaling pathways that control cell replication, death, mobility, and other key processes common to all cells, even targeted therapies are likely to produce at least some toxicity. Basically, all drug approval is a delicate balancing act between doing enough studies with enough patients to ensure that the drug tested is safe and effective versus the cost and time it takes to do the necessary studies.
What the authors hypothesized is that the experimental arms of studies used to support the approval of new agents would be associated with a higher rate of significant toxicity. The specific toxicities studied included treatment-related deaths, any toxicity severe enough that treatment had to be halted, and grade 3 and 4 adverse events (AEs). Grade 3 AEs include “symptoms causing inability to perform usual social and functional activities,” while grade 4 AEs are potentially life-threatening and result in “symptoms causing inability to perform basic self-care functions or medical or operative intervention indicated to prevent permanent impairment, persistent disability, or death.” To test their hypothesis, the authors identified 38 phase III randomized controlled trials (RCTs) used to support FDA approval of drugs for cancer indications between January 2000 and December 2010. A meta-analysis was performed, and correlations were made between the odds ratios for the three primary endpoints (treatment-related deaths, AEs resulting in treatment discontinuation, and grade 3/4 AEs) associated with the new drug and hazard ratios for overall survival (OS) and progression-free survival (PFS) associated with these drugs.
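To give a flavor of the arithmetic behind this kind of meta-analysis, here is a minimal sketch of fixed-effect inverse-variance pooling of per-trial odds ratios on the log scale. The trial counts below are invented purely for illustration; they are not taken from Niraula et al, and the actual paper may have used different pooling methods.

```python
import math

# Hypothetical per-trial 2x2 counts: (events_exp, n_exp, events_ctrl, n_ctrl).
# These numbers are made up for illustration only.
trials = [
    (12, 400, 7, 395),
    (30, 820, 18, 810),
    (5, 150, 4, 148),
]

weight_sum = 0.0
weighted_log_or_sum = 0.0
for a, n1, c, n2 in trials:
    b, d = n1 - a, n2 - c           # non-events in each arm
    log_or = math.log((a * d) / (b * c))
    var = 1/a + 1/b + 1/c + 1/d     # Woolf variance of the log odds ratio
    w = 1 / var                     # inverse-variance weight
    weight_sum += w
    weighted_log_or_sum += w * log_or

pooled_log_or = weighted_log_or_sum / weight_sum
se = math.sqrt(1 / weight_sum)      # standard error of the pooled log OR
pooled_or = math.exp(pooled_log_or)
ci_low = math.exp(pooled_log_or - 1.96 * se)
ci_high = math.exp(pooled_log_or + 1.96 * se)
print(f"pooled OR = {pooled_or:.2f} (95% CI {ci_low:.2f} to {ci_high:.2f})")
```

The key point is that larger trials (smaller variance) dominate the pooled estimate, which is why a meta-analysis can detect harms that no single approval trial was powered to find.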
What the authors found can be boiled down fairly easily. Compared with control groups, the odds ratio (OR) for treatment-associated toxic deaths was elevated (OR, 1.40; 95% CI, 1.15 to 1.70; P < .001). So were the odds of treatment discontinuation due to toxicity (OR, 1.33; 95% CI, 1.22 to 1.45, P < .001) and of grade 3 or 4 AEs (OR, 1.52; 95% CI, 1.35 to 1.71; P < .001). The most common toxicities were nonhematologic; i.e., they weren't the typical complications of chemotherapy, such as decreased white blood cell counts, anemia, and the like. Rather, they tended to be diarrhea, rashes and other skin problems, and neuropathy. Of course, peripheral neuropathy is a common complication of taxanes, so much so that many women I’ve operated on who underwent such therapy complain about numbness in their fingers, and one of the main drugs included in the meta-analysis was docetaxel (trade name Taxotere); so I’m not terribly surprised that this toxicity would come to the fore.
There are a number of other issues that were brought up with respect to this trial as well. The results I’ve just described were the overall results. The authors had several pre-planned subgroup analyses, and the results were a mixture of the aforementioned surprising and unsurprising results. The first surprising result came when the authors compared “targeted” drugs directed against a specific molecular target (specifically, drugs for which the population to be treated was selected by the presence of a biomarker) versus drugs that are less specific (an unselected target population, not based on the presence of a biomarker). In this classification, docetaxel would be less specific because it is a general microtubule inhibitor that interferes with cell division. Its use is thus not chosen on the basis of an existing biomarker. In contrast, drugs like tamoxifen or aromatase inhibitors would be considered “targeted” because they target estrogen signaling and are therefore only used in women whose cancers make the estrogen receptor. Similarly, Herceptin would be considered a targeted agent because it targets HER2 and is only used in women whose cancers make this particular oncogene protein product.
The findings were a bit unexpected in that targeted therapies had a higher OR for the three endpoints than the “less targeted” agents. However, I have a bit of a problem with this. Here’s a table showing the 95% confidence intervals (CIs) for each endpoint in the two groups:
| | Toxic deaths | Treatment discontinuation | Grade 3/4 AEs |
|---|---|---|---|
| Specific targeted agents | (0.93 to 2.55) | (0.76 to 1.32) | (1.69 to 2.85) |
| Less specific agents | (0.96 to 1.60) | (1.22 to 1.60) | (1.18 to 1.66) |
Notice that, for the specific targeted agents, the ORs are not statistically significantly different from 1.0 for toxic deaths and treatment discontinuation, but for the less specific agents, the ORs are statistically significantly greater than 1.0 for two out of the three endpoints. So, while it’s true that the risk of grade 3/4 AEs is higher for the targeted agents, the ORs for the other endpoints are not. Personally, I’d call this a wash, or even potentially an indication that the less targeted therapies produce more AEs, as one would expect.
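This reading of the table can be checked mechanically: at the conventional significance level, an OR differs significantly from 1.0 exactly when its 95% CI excludes 1.0. A quick sketch using the CIs from the table above:

```python
# 95% CIs from the subgroup table above.  An OR is statistically
# significant (roughly p < .05) when its 95% CI excludes 1.0.
cis = {
    ("specific", "toxic death"): (0.93, 2.55),
    ("specific", "discontinuation"): (0.76, 1.32),
    ("specific", "grade 3/4 AEs"): (1.69, 2.85),
    ("less specific", "toxic death"): (0.96, 1.60),
    ("less specific", "discontinuation"): (1.22, 1.60),
    ("less specific", "grade 3/4 AEs"): (1.18, 1.66),
}

for (group, endpoint), (low, high) in cis.items():
    significant = low > 1.0 or high < 1.0   # CI entirely above or below 1.0
    label = "significant" if significant else "not significant"
    print(f"{group:14s} {endpoint:18s} -> {label}")
```

Running this flags only one of three endpoints as significant for the specific targeted agents, versus two of three for the less specific agents, which is the “wash” described above.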
The next thing the authors did seemed to get lost in the news reports discussing this study. For instance:
The study showed the newer cancer drugs caused significantly more side effects, and more treatment-related deaths, than their older counterparts. “You’ve got to consider both efficacy and toxicity in the picture,” said Susan Ellenberg, who has studied drug side effects at the University of Pennsylvania’s Perelman School of Medicine in Philadelphia.
She said the analysis wasn’t so surprising, and isn’t “a big alarm.”
While it is correct that this isn’t a “big alarm,” reporting this study in a blanket fashion as showing these newer drugs as causing significantly more side effects is simplifying the results of this study way too much. The reason, I would posit, comes from another pre-planned subgroup analysis that the authors did:
Agents were classified into four subgroups: targeted agent alone versus placebo or best supportive care (subgroup A); targeted agent alone versus nontargeted systemic anticancer therapy (subgroup B); targeted agent combined with nontargeted systemic anticancer therapy versus nontargeted systemic anticancer therapy alone (subgroup C); and chemotherapeutic agents (subgroup D). We compared the rates of treatment-related death, treatment discontinuation, and grade 3/4 AEs in each of these subgroups. In a separate analysis, we pooled the odds of hematologic and nonhematologic toxicities in each of the subgroups.
Overall, the finding of this subgroup analysis for the three endpoints of the study was described in the discussion thusly by the authors:
Our analysis shows that most newly approved anticancer drugs are associated with increased odds of toxic death, treatment discontinuation, and severe AEs compared with the standard treatment received by controls. There was an overall increase in the odds of toxic death, treatment discontinuation, and grade 3/4 AEs with the use of new agents versus the control groups except when targeted agents alone were compared with chemotherapy in the control arm. When individual toxicities were evaluated independently, the odds were increased to a greater extent for nonhematologic than for hematologic toxicities with the use of experimental agents compared with the controls.
In other words, when these new agents go head-to-head with existing cancer therapies, the OR for serious AEs is not elevated in the experimental group, while in trials in which the new agent was either tested against placebo or best supportive care or combined with nontargeted anticancer therapies or cancer chemotherapy, the addition of the new drug increased the chance of significant toxicity. This is about as expected a result as I can think of. Of course a new agent is likely to produce more toxicity than placebo; that is, if it’s an active drug. Of course adding a new agent to existing agents, be they traditional chemotherapy or other nontargeted agents, will increase toxicity! It’s a general principle that an active drug will be more toxic than placebo and that adding another drug to an existing mix of drugs is likely to produce more toxicity. In fact, I’d be surprised if the investigators hadn’t found that.
That’s the message that’s being lost in how this particular study is being reported. Now don’t get me wrong here. This study, admittedly, is disappointing. However, it’s disappointing not because it shows that the newer drugs are so much “more toxic,” but rather because it suggests that, contrary to the hope and the hype, the newer drugs are probably not less toxic than existing drugs. Actually, the study doesn’t exactly show that. The number of studies looking at head-to-head comparisons of new drugs with established treatment regimens was too small. It does, however, strongly suggest that newer agents at the very least are not that much less toxic than existing drugs and might be more. It also confirms the long-known dictum in pharmacology that the more drugs you give at the same time, the higher the chance of interactions and toxicity. Newer, “targeted” drugs are not immune to that basic principle of pharmacology.
Another finding that seems surprising on the surface but really isn’t is that the observed toxicity of newly approved drugs did not correlate with how much they were able to increase OS or PFS. There is a misconception common even among doctors that the price we pay for more effective anticancer drugs is more toxicity, but that’s not necessarily true, and this study is simply more evidence to that effect. Another implication of this observation is simply to reinforce the point that we have to weigh toxicities very carefully against the incremental improvements in OS and PFS that result from adding new drugs to our cancer treatment regimens. There often comes a point where patients judge that the additional toxicity is not worth it for the amount of potential benefit that such drugs provide. In any case, it should not be surprising that toxicity and efficacy are largely independent properties of such drugs. As the authors themselves point out, efficacy is determined by the sensitivity of tumor cells and their microenvironment to the drug, while toxicity is more dependent on drug metabolism and clearance, as well as comorbidities, organ function, and concurrent medications, among other things.
What needs to be remembered about the new generation of more targeted therapies is that, unless they are used in patients whose tumors have the intended molecular target and are thus likely to be sensitive to the medication, the chances of benefit decline while the chances of toxicity increase. That is why, for instance, we do not give Herceptin to patients whose tumors do not make HER2 and do not use tamoxifen or aromatase inhibitors in patients whose breast cancer is ER(-). For us to realize the maximum benefit at the lowest cost in terms of toxicity from these new medications, we need not only biomarkers that predict response (such as the aforementioned ER and HER2 markers) but also biomarkers that predict toxicity.
Finally, it’s important to note that, as incredible and copious as it is, all this genomic data doesn’t help us that much—yet. The reason is that we don’t yet know what to do with this “firehose” of a data stream, how to apply the data contained therein to guiding patient care in order to cure more patients and prolong the survival of more of those who cannot be cured. I actually believe that one day we will have the tools and knowledge to do that, such that I envision a day when we will use tumor biopsies and blood to construct comprehensive genomic, transcriptomic, and proteomic profiles of individual patient tumors and use them to target the molecular derangements that drive them. However, I am under no illusion that reaching that day will be easy, cheap, or without other cost. Studies like Niraula et al keep us humble.