
The wrong way to “open up” clinical trials

Science-based medicine rests on twin pillars that are essential to the development of treatments that are safe and efficacious. Both of these pillars depend on science, but in different ways. The first is, of course, the basic science that provides the hypotheses to test about the mechanisms behind the diseases and malfunctions that plague the human body. This basic science suggests ways of correcting or mitigating these malfunctions in order to relieve symptoms, prevent morbidity and mortality, and improve both the quality and quantity of life. Another critical function of basic science is that it provides scientists with an estimate of the plausibility of proposed interventions, treatments, and cures. For example, if a proposed remedy relies upon ideas that do not jibe with some of the most well-established laws in science, as is the case with homeopathy, whose underlying concepts violate multiple laws of physics and chemistry, it’s a very safe bet that that particular treatment will not work and that we should test something else. Of course, the raison d’être of this blog derives from the unfortunate fact that in today’s medicine this is too often not the case, and we are wasting incredible amounts of time, money, and opportunity pursuing the scientific equivalent of fairy dust as though it represented a promising breakthrough that will save medicine, even though much of it is based on prescientific thinking and mysticism. Examples include homeopathy, reiki, therapeutic touch, acupuncture, and much of traditional Chinese medicine and Ayurveda, all of which have managed to attach themselves to medical academia like kudzu.

Of course, basic science alone is not enough. Humans are incredibly complex organisms, and even what we consider an adequate understanding of a disease won’t always yield an efficacious treatment, no matter how good the science is. Note that this is not the same thing as saying that utter scientific implausibility (as is the case with homeopathy) tells us nothing about whether a treatment will work. When a proposed treatment relies on claiming a “memory” for water that doesn’t exist, or postulates a “life energy” that no scientific instrument can detect and an ability to manipulate that life energy that no one can demonstrate, it’s a pretty safe bet that that treatment is a pair of fetid dingo’s kidneys. Outside of such cases, though, clinical trials and epidemiological studies form the second pillar of science-based medicine, clinical trials in particular, which are where the “rubber hits the road,” so to speak. In clinical trials, we take observations from the laboratory that have led to treatments and test those treatments in humans. The idea is to test for both safety and efficacy and then to begin to figure out which patients are most likely to benefit from the new treatment.

Over the last 50 or 60 years, for all its flaws (and what system devised by humans doesn’t have flaws?), it’s been a highly effective system. When it works well, physicians take observations from the clinic to the laboratory, where basic scientists and physicians try to figure out what’s going on to explain a particular observation and then develop an intervention, after which that intervention, be it a drug, procedure, or other treatment, is taken back to the clinic to test. In practice, this process can be very messy, as publication bias, selection bias, and other confounding factors can at times mislead. Money can corrupt the process as well, given that clinical trials are the final common pathway to the approval of new drugs by the Food and Drug Administration, and that it costs pharmaceutical companies hundreds of millions of dollars to bring a single drug from bench to final phase III clinical trials in the hopes of recouping that investment and making large profits besides. Despite all that, no one has as yet been able to propose a better process.

That’s not to say that periodically there aren’t proposals to radically reinvent the clinical trials process. Certainly, I can sympathize to a point; being involved in clinical trials myself, I understand how even a relatively small clinical trial involves an enormous amount of time, money, and regulatory hurdles to jump over. I’ve never personally run a large phase III trial (although I hope to some day), so I can only know what that would be like from my interactions with colleagues who have. In any case, it’s the onerous nature of the current clinical trial system that has led to a recent editorial published in Science by Andrew Grove, former Chief Executive Officer of Intel Corporation and a patient advocate at the University of California, San Francisco, entitled, appropriately enough, Rethinking Clinical Trials. From the article, it’s obvious that Grove is not a scientist, but that doesn’t mean he isn’t worth listening to—to a point. Unfortunately, his proposed solution is unlikely to work, even though he does have a grasp of the problem:

The biomedical industry spends over $50 billion per year on research and development and produces some 20 new drugs. One reason for this disappointing output is the byzantine U.S. clinical trial system that requires large numbers of patients. Half of all trials are delayed, 80 to 90% of them because of a shortage of trial participants. Patient limitations also cause large and unpredicted expenses to pharmaceutical and biotech companies as they are forced to tread water. As the industry moves toward biologics and personalized medicine, this limitation will become even greater. A breakthrough in regulation is needed to create a system that does more with fewer patients.

Grove does have a point in that the clinical trial system in this country has indeed become quite expensive and unwieldy. He’s also correct that the evolution towards “personalized medicine” will exacerbate the problem. The reason is that, as we check more and more biomarkers or genetic markers to guide therapy, we will decrease the number of patients falling into each category requiring a certain drug, in essence slicing and dicing the patient population into ever smaller slivers, each of which will be treated differently. Sorting all this out will be quite difficult. Unfortunately, Grove approaches the problem from the wrong perspective in that it’s clear he has little feeling for how science should be applied to medicine, as will become clear from his analogy:

The current clinical trial system in the United States is more than 50 years old. Its architecture was conceived when electronic manipulation of data was limited, slow, and expensive. Since then, network and connectivity costs have declined ten thousand–fold, data storage costs over a million-fold, and computation costs by an even larger factor. Today, complex and powerful applications like electronic commerce are deployed on a large scale. Amazon.com is a good example. A large database of customers and products form the kernel of its operation. A customer’s characteristics (like buying history and preferences) are observed and stored. Customers can be grouped and the buying behavior of any individual or group can be compared with corresponding behavior of others. Amazon can also track how a group or an individual responds to an outside action (such as advertising).

Yes, you heard that right. Grove thinks that doing science is enough like cataloging customer orders, preferences, and history the way Amazon.com does that the same methods can be applied to medicine. So what’s his suggestion? In essence, Grove is proposing what is commonly known as a “pragmatic trial,” but on megadoses of steroids, all using computers to figure out what’s going on:

We might conceptualize an “e-trial” system along similar lines. Drug safety would continue to be ensured by the U.S. Food and Drug Administration. While safety-focused Phase I trials would continue under their jurisdiction, establishing efficacy would no longer be under their purview. Once safety is proven, patients could access the medicine in question through qualified physicians. Patients’ responses to a drug would be stored in a database, along with their medical histories. Patient identity would be protected by biometric identifiers, and the database would be open to qualified medical researchers as a “commons.” The response of any patient or group of patients to a drug or treatment would be tracked and compared to those of others in the database who were treated in a different manner or not at all. These comparisons would provide insights into the factors that determine real-life efficacy: how individuals or subgroups respond to the drug. This would liberate drugs from the tyranny of the averages that characterize trial information today. The technology would facilitate such comparisons at incredible speeds and could quickly highlight negative results. As the patient population in the database grows and time passes, analysis of the data would also provide the information needed to conduct postmarketing studies and comparative effectiveness research.

I found out about Andy Grove’s article from Derek Lowe, who didn’t think that much of it but didn’t dismiss it altogether. I tend to agree, although I suspect I’ll end up being a little bit harder on it than Derek is. And it’s not an altogether crazy idea. It’s not even necessarily that bad an idea, except that Grove clearly doesn’t understand clinical trials, and you have to understand clinical trials before you can apply technology to them. For example, Grove seems to labor under the delusion that phase I trials prove the safety of a new medication. That is a gross misunderstanding of the purpose of the phase I trial. Yes, checking for safety is part of what a phase I trial does, but a phase I trial doesn’t “prove safety.” What a phase I trial does is rule out any really major side effects or toxicities that are common (remember, phase I trials usually only have around 20 to 100 participants, too small a number to catch uncommon adverse events), study pharmacokinetics (how the drug level varies with dose and how the drug is metabolized), and establish both a maximal tolerated dose and a dosing interval. This last purpose is usually achieved using a technique known as dose escalation. Often phase I trials are performed using healthy volunteers, although in my specialty (cancer) that’s rarely the case. In any case, a better way of describing the purpose of a phase I trial was summed up by Freedman: “[T]he reason for conducting the trial is to discover the point at which a compound is too poisonous to administer.” That’s exactly what I meant by “maximal tolerated dose.”
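For readers unfamiliar with dose escalation, here is a toy sketch of one classic scheme, the “3+3” design. To be clear, this is my simplified illustration, not something from Grove’s editorial, and the toxicity probabilities are invented for the example:

```python
import random

random.seed(7)

# Hypothetical true probability of a dose-limiting toxicity (DLT) at each
# dose level, rising with dose (made-up numbers for illustration only).
DLT_PROB = [0.05, 0.10, 0.20, 0.35, 0.55]

def three_plus_three(dlt_prob):
    """Simulate a classic 3+3 dose escalation.

    Treat cohorts of 3 patients; escalate on 0/3 DLTs, expand the cohort
    to 6 on 1/3, and stop once 2 or more patients at a level show a DLT.
    Returns the index of the declared maximal tolerated dose (MTD), or -1
    if even the lowest dose proves too toxic.
    """
    for level, p in enumerate(dlt_prob):
        dlts = sum(random.random() < p for _ in range(3))
        if dlts == 1:
            # Expand the cohort to 6 before deciding whether to escalate
            dlts += sum(random.random() < p for _ in range(3))
        if dlts >= 2:
            return level - 1  # the previous level is declared the MTD
    return len(dlt_prob) - 1  # escalated through every planned level

mtd = three_plus_three(DLT_PROB)
print("declared MTD at dose level", mtd)
```

Note what this little procedure does and doesn’t do: it homes in on the dose that is tolerable, with a few dozen patients at most, and says nothing whatsoever about whether the drug works.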

Yes, that is the purpose of a phase I “first in humans” clinical trial. It’s absolutely necessary, too.

Here’s the problem with Grove’s idea. What he is basically proposing is, in essence, a whole bunch of “N of 1” trials, each patient being a clinical trial in and of himself or herself. Then, through the magic of computer technology, he seems to be suggesting that we take all these “N of 1” trials and try to do a meta-analysis of them. Here we have a case where more does not necessarily mean better. What would result are data that are ridiculously heterogeneous, possibly unanalyzably so. As Derek Lowe points out, one of the most difficult aspects of clinical trial design is standardizing the treatment, making sure that patients across multiple clinical trial sites are actually being treated and followed in the same way. Under Grove’s concept, heterogeneity is a feature, not a bug. However, it is not this aspect that bothers me so much about the proposal. Rather, it’s Grove’s dismissive comment about “liberating” clinical trials from the “tyranny of averages.” As if averages were a bad thing! That “tyranny of averages” is what makes sure that the patients being enrolled in a clinical trial are comparable to each other. Without the relatively strict inclusion criteria of early phase II trials, the most likely result of adopting Grove’s proposal is that any signal would be drowned out by the noise due to the heterogeneity of the patients and of the data derived from each “N of 1” trial.
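The signal-versus-noise point can be made concrete with a toy simulation (again, mine, with invented numbers; real trials are messier). The same modest true drug effect is measured in a tightly selected population and in a pooled grab-bag of heterogeneous patients:

```python
import random
import statistics

random.seed(42)

TRUE_EFFECT = 0.5  # hypothetical average benefit, in units of within-patient noise
N = 100            # patients per arm

def standardized_effect(between_patient_sd, n_sims=200):
    """Average observed effect size (mean difference / observed spread).

    between_patient_sd models heterogeneity: differences in disease stage,
    comorbidities, and concomitant treatments that strict inclusion
    criteria would otherwise hold roughly constant.
    """
    results = []
    for _ in range(n_sims):
        control = [random.gauss(0, 1) + random.gauss(0, between_patient_sd)
                   for _ in range(N)]
        treated = [TRUE_EFFECT + random.gauss(0, 1) + random.gauss(0, between_patient_sd)
                   for _ in range(N)]
        diff = statistics.mean(treated) - statistics.mean(control)
        results.append(diff / statistics.pstdev(control + treated))
    return statistics.mean(results)

tight = standardized_effect(0.2)  # strict inclusion criteria
loose = standardized_effect(3.0)  # "anyone goes" pooled N-of-1 data
print(f"tight trial: {tight:.2f}, pooled N-of-1: {loose:.2f}")
```

The true effect is identical in both cases; only the patient-to-patient variability changes. As that variability grows, the observed effect shrinks toward invisibility relative to the spread of the data, which is precisely what the despised “tyranny of averages” exists to prevent.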

Perhaps the biggest practical problem with Grove’s idea is how patients would be selected for therapies. Notice how Grove says that “once safety is proven, patients could access the medicine in question through qualified physicians.” Beyond the failure to recognize that phase I trials don’t “prove safety,” there’s the issue of who decides which patients will take the drug, and basically it appears to me that what Grove is proposing is that any physician could take any drug that has passed phase I testing and offer it to any patient. As much as Grove prattles on about “real world” efficacy, this is a real world recipe for disaster. Phase I trials do not demonstrate efficacy; they only evaluate safety and toxicity. Consequently, it is difficult (for me, at least) to imagine how physicians could ethically administer drugs whose efficacy has not been demonstrated or, more importantly, how they could know for which patients these new drugs would be appropriate. (Short answer: they can’t.) It’s difficult enough to maintain clinical equipoise as it is.

Indeed, one huge unspoken (and unsupported) assumption is that allowing unfettered access to experimental drugs that have passed phase I trials would help more people than it would hurt. In actuality, because phase I trials only identify acute toxicities and do not identify adverse reactions that emerge with longer use, physicians administering these drugs would be flying almost blind. The potential for harm is enormous, particularly when powerful chemotherapeutic agents are being given. It is far more likely that widespread use of unapproved drugs would harm far more patients than it would help. Indeed, at the level of the individual patient, trying such drugs is more likely to harm than to help. If there’s one thing worse than dying of cancer, it’s making one’s last days shorter and more miserable with toxicities from unapproved drugs or, worse still, paying big bucks to do so.

Yet somehow Grove seems not to have considered this possibility.

Perhaps the most problematic aspect of Grove’s entire proposal, though, concerns the very reason we do clinical trials, namely to answer the question, “Does the drug work?” In a system such as the one Grove proposes, how, exactly, would we figure out whether a drug works or not? What would be the endpoint? What result would tell us that the drug is doing what it is intended to do? For example, in the case of cancer chemotherapy drugs, the purpose of the drug is to prolong survival. Figuring out whether a new drug does that is difficult enough in the current system of clinical trials. Indeed, we already know from the example of Avastin in breast cancer how hard it is to tease out whether an improvement in progression-free survival translates into an improvement in overall survival. Under Grove’s proposal, it would be well nigh impossible. Grove seems to be arguing that, if we just keep track of enough variables and possible confounding factors, everything will shake out thanks to the wonder of modern computerized “e-commerce”-style tracking applied to patients. Maybe that’s possible. Maybe (and more likely) such a system will result in an uninterpretable mass of data from which extracting meaningful correlations will be at best problematic, at worst impossible. Even if it does work, then what is the endpoint of a clinical trial? When can investigators declare that they’ve accrued enough patients?

Remember how I referred to Grove’s proposal as, in essence, replacing the current clinical trial system with “pragmatic trials”? We’ve been very critical here at SBM of the use and abuse of pragmatic trials by proponents of quackademic medicine. In fact, more than anything else, what Grove is proposing comes across to me as a high tech version of the very same pragmatic trials that acupuncturists are agitating for. There are no controls, which means that placebo responses will go uncorrected. There is a plethora of variables and potential confounding factors, which would likewise go unaccounted for.

Don’t get me wrong. I’m not dismissing Grove’s idea; I’m merely pointing out that he has an incredibly simplistic view of how clinical trials operate and of what evidence must be obtained before it’s reasonable to conclude that a new treatment “works” for a particular illness. Basically, spurred on by his own personal battles with prostate cancer and Parkinson’s disease, he has had a late-life conversion to patient advocacy. There’s nothing wrong with that, and much to be admired, but unfortunately Grove seems to think that his knowledge of the computer and semiconductor industry is easily transferable to the pharmaceutical industry. It’s not for nothing that four years ago Derek Lowe referred to Grove as Rich, Famous, Smart, and Wrong. Grove expresses frustration at the slow pace of research into Parkinson’s disease and other diseases. Fair enough. If I had a relentless degenerative neurological condition that would slowly rob me of my ability to function (and, in particular, to do surgery and write), I’d be frustrated too. Unfortunately, he doesn’t seem to understand that medicine is not the semiconductor industry. There’s a reason we haven’t cured cancer yet, for example: it’s damned hard, and biomedical research does not lend itself to the sort of deadline-driven mentality that Grove had as CEO of Intel.

Derek Lowe put it well:

Mr. Grove, here’s the short form: medical research is different than semiconductor research. It’s harder. Ever seen one of those huge blow-ups of a chip’s architecture? It’s awe-inspiring, the amount of detail that’s crammed into such a small space. And guess what — it’s nothing, it’s the instructions on the back of a shampoo bottle compared to the complexity of a living system.

That’s partly because we didn’t build them. Making the things from the ground up is a real advantage when it comes to understanding them, but we started studying life after it had a few billion years head start. What’s more, Intel chips are (presumably) actively designed to be comprehensible and efficient, whereas living systems — sorry, Intelligent Design people — have been glued together by relentless random tinkering. Mr. Grove, you can print out the technical specs for your chips. We don’t have them for cells.

And believe me, there are a lot more different types of cells than there are chips. Think of the untold number of different bacteria, all mutating and evolving while you look at them. Move on to all the so-called simple organisms, your roundworms and fruit flies, which have occupied generations of scientists and still not given up their biggest and most important mysteries. Keep on until you hit the lower mammals, the rats and mice that we run our efficacy and tox models in. Notice how many different kinds there are, and reflect on how much we really know about how they differ from each other and from us. Now you’re ready for human patients, in all their huge, insane variety. Genetically we’re a mighty hodgepodge, and when you add environment to that it’s a wonder that any drug works at all.

It is, indeed.

None of this is to imply that we can’t improve our clinical trials system. As has been pointed out, it’s hugely expensive and inefficient, and these problems are getting worse with the evolution of drug treatment towards “personalized medicine.” We are going to have to figure out ways to make clinical trials smaller and more targeted. We are also going to have to find ways to extract every last bit of information and benefit from every last clinical trial subject. An approach such as the one Grove proposes might well contribute to achieving that aim, particularly when coupled with new trial designs that emphasize the incorporation of biomarkers for drug response. Contrary to Andy Grove’s claims, however, there is no way his sort of approach will ever replace well-designed clinical trials.

Posted in: Clinical Trials, Science and Medicine


33 thoughts on “The wrong way to “open up” clinical trials”

  1. daedalus2u says:

    Good article. Not much to add. The idea that lots of data could be collected and then computers could just sort it out is just complete nonsense. If that idea will work for drugs, it should work for things like diet and exercise. It should work a lot better for things like diet because everyone has a diet.

    What something like this would do is open the flood gates to CAM. If the threshold is to show that something is not acutely and obviously harmful, most CAM could meet that standard. Things like the Gonzalez Protocol, the Lupron Protocol, and chelation would meet that standard. IV urine could meet that standard too.

    Maybe Andy Groves doesn’t appreciate how wacky some CAM treatments are.

  2. MT says:

    The ethical issues in Groves’ proposal are the most concerning ones to me. I just don’t see how these can be overcome. How does a physician prescribe an experimental drug of unknown efficacy and safety when the patient/subject may not even be included in any type of analysis?

    The other thing is how is any of the data entered into this database going to be validated? What would keep an unscrupulous outfit from essentially rigging any such “study”?

    Of course, this doesn’t even include concerns such as selection bias, data entry bias, and so forth that are not malicious, but can certainly detrimentally affect outcomes.

  3. David Gorski says:

    In all fairness, I don’t think that the thought that such a system could be easily abused as a means of seemingly legitimizing CAM ever entered Grove’s mind. Most likely, he has no clue about that aspect of his idea, just as he has no clue about clinical trials. It wouldn’t enter most physicians’ heads that this would be a possibility because most physicians don’t think about CAM the way we do.

  4. windriven says:

    “Perhaps the most problematic aspect of Groves’ entire proposal, though, is the very reason why we do clinical trials, namely to answer the question, ‘Does the drug work?’”

    I have not read Groves’ article, only the excerpts that Dr. Gorski selected. But it seems to me that the question Groves is trying to find a new way to answer is not, “does the drug work” but “for whom does the drug work.” That seems to me a useful question to ask.

    Groves, however, seems to presume the data on safety and efficacy to be clearly bifurcated. In fact, many therapies demand a careful analysis of risk versus potential reward. This does not lend itself to widespread n=1 testing without unacceptable levels of risk. Further, how does one secure meaningful informed consent?

  5. windriven says:

    Don Lapre, alleged perpetrator of a $50 million vitamin scam (“The World’s Greatest Vitamin”), has been found dead in his jail cell in AZ, an apparent suicide.

  6. Ed Whitney says:

    “…what Groves is proposing comes across to me as a high tech version of the very same pragmatic trials that acupuncturists are agitating for. There are no controls, which means that placebo responses will go uncorrected for…”
    I want to make sure I am correctly construing this sentence. It seems to imply that pragmatic trials lack controls. But both pragmatic and explanatory trials involve randomization of controls; the main contrasts are with respect to participant eligibility and the flexibility of implementation of intervention and analysis of the primary outcome.
    The two approaches exist on a continuum, which is what motivated the development of the PRECIS (pragmatic-explanatory continuum indicator summary) published in CMAJ (free access):
    http://www.ncbi.nlm.nih.gov/pubmed/19372436
    It is not clear that Groves’ proposal really amounts to an advocacy of more pragmatic trials; the latter are not generally best characterized as a large number of N of 1 trials.

    As for the “tyranny of averages,” Groves has a point if he is referring to the tyranny of a comparison of mean responses on an outcome whose distribution violates the assumptions of the statistical test involved. Pain medicine (and, I assume, cancer therapy literature) have many precautions about this topic, given the role of variable cell-surface receptors on the action of the drug of interest.

    That said, I agree that Groves has overestimated the role of number-crunching in the advancement of the life sciences. It is true that randomized clinical trials were conceived during the stone age of modern computing, but the astronomical amount of variability in biological systems makes it problematic to apply methods of Amazon customer preferences to their analysis.

  7. Ed Whitney says:

    Make that “Grove.”

  8. windriven says:

    @ Ed Whitney

    Many thanks for the correction.

  9. passionlessDrone says:

    Hello friends –

    Nice article.

    Maybe what we could try would be to apply the bioinformatic approach to drugs that already get through (or squeak through) to phase IV. It wouldn’t help with getting new drugs approved, but we could still learn a lot about interactions which are plenty murky at the time that some drugs get past phase III. Learning more is better than learning less.

    This would get us past some of the ethical questions, but nobody except the government is going to want to pay for it.

    - pD

  10. WilliamLawrenceUtridge says:

    After Leo Szilard switched from physics to biology as a field of study, he never had a peaceful bath. With a physics problem he could hold all the parameters in his head while soaking, due to the essential neatness of the constraints and laws that govern them. Once he took up biology he was forever climbing out of the bath to look up facts (from The Great Influenza, sadly I don’t know the page number). Transistor and chip design are far closer to physics than they are to medicine. Transistors don’t have feedback loops, homeostasis, long-term adaptations or significant variability; once your materials are essentially standardized, you just keep combining them in the same way. Chips don’t eat. They don’t mature. They don’t self-regulate unpredictably. They don’t exhibit genetic diversity. You can manipulate single variables relatively easily. Chips aren’t people.

    Though massive computational power will doubtless be very useful for biology, it would seem at best a starting point rather than a conclusion. A massive database of facts that can be fished for correlations might be useful for hypothesis generation, but ultimately you have to test them with clinical trials.

    I’m guessing in 20 years or so we’ll be a lot closer to being able to use a blood and/or gene test to guess at a person’s reactions to a single drug, but you still have to test every single gene against every single drug. I shudder at the complexity, but once a database has been established, I’m guessing it’ll provide the same kind of jump in medical effectiveness that we saw with the switch to science- and laboratory-based rather than tradition-based medicine.

    The comments on Lowe’s posts are a variation on “doctors are so stupid” and “Big Pharma is so evil”. Not a one seemed to recognize Lowe’s central argument – biological complexity > chip design complexity.

  11. WilliamLawrenceUtridge says:

    Oops, my comments apply to Lowe’s 2007 post, I’m only just reading his 2011 one.

  12. ConspicuousCarl says:

    Grove’s suggestion reminds me of something once said by the wise poet Homer:

    With today’s modern cars you can’t get lost, what with all the silicon chips and such.

    Enormous processing power does not save you from getting lost in confounded observational data. Having all of that data would probably be a great thing, but it wouldn’t be a shortcut for avoiding structured trials. Let Grove never speak of this shortcut again.

  13. JPZ says:

    @daedalus2u

    “The idea that lots of data could be collected and then computers could just sort it out is just complete nonsense.”

    I disagree. Mr. Grove may be on to something with this suggestion. The techniques for analyzing and predicting the behaviors of an internet user are advanced and supported by considerable computing power. Marketers are sponges for ANY data about millions of users and their online habits yet somehow they keep the data organized and use it effectively. Clinical trial data collection and management are in their infancy by comparison, and study designs are increasingly limiting data collection to minimal essential endpoints (e.g. I was working at a pharmaceutical company, and we were told to stop collecting birthdates because we only needed the year). Better data management would allow nearly unlimited endpoint collection and analysis (subject to costs) as well as improve safety monitoring. AE and SAE monitoring could become dynamic with predictive algorithms updating exam questions based on emerging AE trends in real time and customized to the individual patient. These are all techniques used in internet marketing.

  14. Sorry if this is redundant, didn’t get a chance to read the comment.

    Seems to me you shouldn’t have a software guy come up with a new clinical trial system on his own, AND you shouldn’t have a medical person come up with one alone either. You need to get some excellent software folks who can think of ways to use current technology, talk with some excellent clinical trial folks, and come up with technology ideas TOGETHER to advance the system.

    It might be worth considering that medical folks can be a bit of a bunch of naysayers, this is one reason medical records systems are only now becoming current, when the technology was available fifteen or so years ago.

  15. daedalus2u says:

    JPZ, physiology is probably at least a few hundred orders of magnitude more complicated than the simple marketing stuff that Amazon does.

    How many degrees of freedom are modeled in marketing? A few hundred? In physiology there are at least hundreds of thousands of degrees of freedom per cell.

    How do the marketers model these things? Via linear combinations that are independent? In physiology they are all non-linear and coupled with hysteresis, time dependence and feedback and virtually all of it remains completely unknown.

    You simply don’t appreciate the degree of complexity there is in physiology.

    WLU, cells don’t have homeostasis either. That is one of the simplistic myths that biologists adopted because they couldn’t handle the complexity of the truth.

    http://daedalus2u.blogspot.com/2008/01/myth-of-homeostasis-implications-for.html

  16. DU2 “physiology is probably at least a few hundred orders of magnitude more complicated than the simple marketing stuff that Amazon does.”

    Ummm, so? We aren’t talking about reverse engineering human life, we are talking about tracking the data that is collected from a trial. I can see how choosing subjects and consistent treatment across the trial would be required, but I think this emphasis on how much more complex physiology is a distraction from discussing specific needs and concerns.

  17. rork says:

    Nice article.

    I’m with passionlessDrone, in that after approval of a drug, we could use more data on the folks getting it to come up with new ideas on who is benefiting more or less. I was hoping universal health care, with ever-more-standard electronic records, would help.

    Grove seems to think company math folks are better than those in academia. In my experience that is rarely true. Apologies to Eric Shadt and a few others.

  18. rork says:

    Schadt. More apologies.

  19. WilliamLawrenceUtridge says:

    D2U, you’re either brilliant to a degree I can’t even comprehend, or really, really love a good nitpick. Your homeostasis post was interesting, much of it was above my head, I learned a bunch of neat stuff, but I can’t say I’m convinced. Seems like homeostasis is a valid concept if approached as a body-wide (really, blood-based, but blood perfuses the whole body) range of floor and ceiling setpoints that can be adjusted up or down depending on circumstances. The fight or flight example doesn’t really apply to homeostasis in general simply because it’s an acute response that must stop – or the person fighting or fleeing will die. Once the F/F response ends, the normal setpoints resume – blood is cooled by sweating, blood sugar is normalized, muscles cease cannibalizing protein into ATP, fluids and macronutrients are replenished through food and drink, and cellular constituents are regenerated (sometimes to supercompensatory levels – the training effect caused by exercise).

    But let me be the first to concede that you’ve thought about it WAAAAAAAYYYYY more than I have, and your grasp of biology and physiology is as far above my own as my understanding is above that of a marsupial (any marsupial, take your pick; I’ll even expand it to mammalian rodents). I may re-read your post at a future point, I’m always a fan of food for thought. Here’s hoping you win a Nobel prize for it one day :)

  20. JPZ says:

    @daedalus2u

    I think micheleinmichigan pointed out your misread of the main theme. It is a bit of “ignoratio elenchi” in that you are right that modeling complex biological processes is likely beyond the data management techniques of internet marketing companies, but my point was that clinical trial data management is overly simplistic compared to internet user data management.

    “You simply don’t appreciate the degree of complexity there is in physiology.”

    I am not sure how this ad hominem helped your position – especially when you were arguing the wrong topic.

  21. daedalus2u says:

    WLU, the fight-or-flight state is just as “normal” as any other state.

    The fight-or-flight state must be controlled just as much as any other state must be controlled. Perhaps even more because the margins are much smaller (the margin to damage and even fatal damage is much smaller).

    JPZ, my understanding of the post was that the idea was to replace clinical trials with open label prescribing by physicians of any drug that has passed “safety tests” and then use the magic of large numbers and high computing power to find out if the drug works or not in these open label uncontrolled “trials”. I have no doubt that clinical trials could be done better, but doing away with them and substituting open-label marketing-style data collection isn’t going to be an improvement.

    What is important in any detection process (and a clinical trial is an attempt to detect the signal of whether a drug is good or bad) is the signal-to-noise ratio. Whatever isn’t “signal” is noise. Increasing the signal by a factor of 10 doesn’t help if the noise is increased by a factor of 10,000. The “signal” of both good and bad effects gets lost in the noise.
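    The signal-to-noise point can be made concrete with a minimal sketch (all numbers below are invented for illustration, not drawn from any real trial): the expected z-statistic for detecting a treatment effect is the signal divided by its standard error, so inflating the noise far faster than the signal drives the effect below detectability.

```python
def expected_z(effect, noise_sd, n_per_group):
    """Expected two-sample z-statistic: the treatment effect (signal)
    divided by the standard error of the group difference (noise)."""
    standard_error = noise_sd * (2 / n_per_group) ** 0.5
    return effect / standard_error

# A tightly controlled trial: modest effect, modest noise.
z_controlled = expected_z(effect=1.0, noise_sd=2.0, n_per_group=200)

# Signal up 10x, but uncontrolled heterogeneity inflates the noise 10,000x:
# the larger effect is now statistically invisible.
z_uncontrolled = expected_z(effect=10.0, noise_sd=20_000.0, n_per_group=200)

print(z_controlled)    # 5.0: easily detected
print(z_uncontrolled)  # 0.005: lost in the noise
```

    This is only the textbook two-sample approximation, but it captures why open-label, heterogeneous data collection can bury a real effect that a controlled trial would find.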

    The goal of a marketing program (how many people will buy product XYZ given exposure to ad ABC) doesn’t need high-fidelity individual results. Knowing that X% will buy given a particular ad is enough. The mechanism doesn’t matter in an ad campaign, and interactions don’t matter. A “placebo” effect that causes people to buy stuff is just as good as a “therapeutic” effect that causes people to buy stuff. In marketing you are looking for a big and very easily quantifiable signal: how many items were purchased. In clinical trials you are looking for small and hard-to-quantify signals: how many people had what kind of recovery, and how many people had what kind of adverse effects of what seriousness.

    In a clinical trial, you need to know how many people are helped and how many people are hurt and in what ways, and to find that out you need high fidelity individual results. How do you deal with placebo effects in an open label and very heterogeneous uncontrolled trial?

    There have been instances where double blinded trials have been stopped because of an adverse reaction and when the blinding codes are broken, the adverse reaction was in the placebo leg. If there is an adverse reaction in an unblinded trial, how does one figure out if it is a real adverse effect or if it is just a coincidence? An example would be neck artery dissection as a consequence of neck manipulation. Many chiropractors swear that all the examples of someone having a neck artery dissection following a chiropractic manipulation are coincidence. How do computers figure out if those artery dissections are the signal of an adverse effect or the noise of a beneficial effect?

  22. JPZ says:

    @daedalus2u

    Yes, Dr. Gorski pointed out that Mr. Grove does not understand how to conduct clinical trials and that some of Mr. Grove’s suggestions are not medically sound. Your attack:

    “The idea that lots of data could be collected and then computers could just sort it out is just complete nonsense.”

    Was directed at one of the more sound ideas proposed by Mr. Grove, but perhaps it would help to take that one idea out of Mr. Grove’s context and put it into an appropriate one. Perhaps I should have said, “Better data management would allow nearly unlimited endpoint collection and analysis (subject to costs) as well as improve safety monitoring [when used in the context of a well-designed, randomized, placebo-controlled trial with appropriate endpoints and statistical design]”. I certainly don’t support the medically unsound portions of Mr. Grove’s comments.

    It is a bit of a straw-man argument to tar me with Mr. Grove’s statements. I did say at the start “…with this suggestion” to distance myself from his other statements, but perhaps that wasn’t clear. I hope I have clarified that I support using advanced data analysis techniques in clinical trial data management, not clinical trial design.

    And, in terms of data quality, I would believe that there are databases listing every website our CPUs have ever visited and that they are incredibly accurate – unless we take extraordinary security measures.

  23. It seems to me, from the perspective of someone who knows very little about clinical trials AND software engineering, that there could be some marketing applications that might help clinical trials.

    For instance, patient recruitment might be one area where online targeted marketing might conceivably be used. Say someone is googling for weight-loss aids or another health concern. They could receive a question asking if they are interested in clinical trials, be directed to a questionnaire to determine what requirements they meet, and then be given contact information for applicable trials. Of course, you have the questions of controlling the content to limit it to legal trials and determining applicable patients by location or treating remotely, but it’s an interesting problem. And it’s only useful if there is an issue recruiting enough patients.

  24. thatguybil81 says:

    Maybe we could try applying the bioinformatic approach to drugs that have already gotten through (or squeaked through) to phase IV.

    Andy Grove’s approach would work very well for phase IV post-marketing studies.

    Indeed, as more and more patient data gets put into databases with “meaningful use” electronic medical records, it will be easier to do retrospective studies on large groups of people.

    The medical field in general is terribly behind in using IT to manage data.

  25. Ed Whitney says:

    Very interesting suggestion, Michele. Recruitment of patients is always a problem. Recruitment by newspaper announcement has been done in the past, but is probably as obsolete as the rotary telephone fixed to the kitchen wall. Screening for eligibility will still be labor-intensive, but facilitating recruitment should be endorsed by most investigators.

  26. Vera Montanum says:

    Retrospective data-mining studies are being done today on huge repositories of e-information (e.g., Medicare/Medicaid recipients in the U.S.), at least in the pain management field, and there are significant dangers. Depending on the biases of the researchers, and given plenty of computing power, outcomes can easily be manipulated to “prove” their hypotheses. With such huge numbers of subjects, statistically significant results can be demonstrated for clinically trivial effect sizes; yet the journals and news media only pick up on the dramatic “new” findings. This is happening today, and it does not bode well for the future of serious science, in my opinion.
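    The huge-N problem can be shown with a quick back-of-the-envelope sketch (the effect size and sample sizes here are invented for illustration): a standardized mean difference far too small to matter clinically still yields a vanishing p-value once the sample reaches registry scale.

```python
import math

def two_sided_p(effect_size, n_per_group):
    """Two-sided p-value for a standardized mean difference (Cohen's d)
    between two equal-sized groups, using the normal approximation."""
    z = effect_size * math.sqrt(n_per_group / 2)
    return math.erfc(abs(z) / math.sqrt(2))

d = 0.02  # a clinically trivial effect

p_trial = two_sided_p(d, 500)         # ordinary trial scale
p_registry = two_sided_p(d, 500_000)  # data-mining scale

print(p_trial)     # ~0.75: unremarkable
print(p_registry)  # far below 0.001: "highly significant", same trivial effect
```

    The effect did not change between the two calls; only the sample size did, which is why effect sizes, not p-values alone, have to be reported for these mega-database studies.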

  27. daedalus2u says:

    One of the important aspects of clinical trials is maintaining patient confidentiality. If you have the birthday and zip code of someone and just a few other identifying bits of information it is pretty easy to uniquely identify that person.

    I suspect that is why they didn’t want birthday in a patient database. If there is identifying information, the database has to be kept under much more strict security than if there is not. If potentially identifying information is not needed, it is better to leave it out completely. The “cost” isn’t in hardware or coding, it is from potential liability from violating privacy regulations.
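    The re-identification risk is easy to quantify roughly (the figures below are ballpark approximations, not census data): full birth date plus ZIP code partitions the population into far more cells than there are people, so most cells contain at most one person.

```python
# Ballpark re-identification arithmetic (illustrative approximations,
# not census figures).
ZIP_CODES = 40_000           # roughly the number of U.S. ZIP codes
BIRTH_DATES = 365 * 100      # day of year x plausible birth years
SEXES = 2
POPULATION = 320_000_000     # roughly the U.S. population

# Distinct (ZIP, birth date, sex) cells the population is split into.
cells = ZIP_CODES * BIRTH_DATES * SEXES

# With roughly 0.1 people per cell on average, a record carrying these
# three fields usually points to exactly one individual.
people_per_cell = POPULATION / cells
print(round(people_per_cell, 2))  # ~0.11
```

    Dropping the birth date down to birth year alone multiplies the average cell occupancy by 365, which is why databases often keep only the year.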

    I appreciate that marketers want unique identifying information so they can target sales pitches to individual customers and customize pricing for individual customers to optimize profit per sale. These are not things that customers usually want. Of course health and life insurance companies would want individualized health information so they can customize pricing to optimize profit. How do individuals benefit from this? Of course CAM and supplement marketers would want individual health information so they can market their stuff directly or indirectly to individual potential customers.

    I think it would make doing research more difficult if people’s individual health information could not remain secure. Who would volunteer if individualized information was going to become public? Once there is any kind of DNA sequence testing included, then that sequence all by itself would be a unique identifier.

    The use of retrospective analysis of HMO databases is interesting. I have read that results have shown the utility of flu vaccine for elderly patients based on seven hundred thousand person-seasons of exposure. That isn’t something you could do prospectively, because flu vaccine has a known benefit and it would cost too much.

    http://www.ncbi.nlm.nih.gov/pubmed/17914038

    But even for something as simple as vaccination and as simple as all cause mortality, the causal relationship between treatment and outcome is not so simple even with excellent record keeping and a pretty homogeneous population of very large size.

    http://www.ncbi.nlm.nih.gov/pubmed/19625341

    There is also potential for misuse of such data. There are reports that the Geiers (quacks of Lupron infamy) tried to copy patient identifying information from an HMO database. My presumption is that it was to identify potential clients to scam.

    http://www.casewatch.org/fdawarning/rsch/geier.shtml

    Very large health care databases with identifying patient information would be a tremendous marketing resource for CAM, supplement sellers, malpractice lawyers, health insurance companies, and others. The cost to individual health care users could be very high. Without a social contract that benefits of inclusion of individual data in large databases would be equitably shared, why should anyone agree to it?

    If Intel opened up their design databases to crowd-sourced commenting, just think how much faster progress could be made. I wonder why Intel doesn’t do it?

  28. JPZ says:

    Clinical trials are “opt in” situations. The subjects/patients give permission for data collection and are told what data will be collected. HIPAA and other privacy laws allow the patient to control who has access to their data. The person or group running the trial already has permission to use the data. If the person or group willingly or unwillingly releases the medical data, they are open to lawsuits and penalties. How much information you gather on a subject/patient is not limited by these parameters.

    You’ll have to resort to cod if you keep pitching red herrings. ;) Did you just jump from improving clinical trial data management to giving away patient identifying information to homeopaths?? HIPAA created penalties for unauthorized sharing of patient information, so that is one red herring. Databases of patient information can be shared IF they are stripped of patient-identifying information (it is an extensive process and the subject of a lot of debate in clinical research). The Geiers’ criminal activity does not de-legitimize the lawful and productive use of patient data, which is another red herring.

    BTW, my company wanted to collect only the birth year on infants.

  29. I am late to this discussion!
    Where is it specified in science that science ignores implausible theories?

    Bradford-Hill includes “analogy” in his suggested guide for evaluating epidemiological claims: a finding is more credible if it fits with processes already known.

    However, I cannot think of anywhere in epistemology where the implausible should be ignored.

    1. Unusual observations should spur investigation.

    2. It is normal across time and cultures that ritualistic activities are practiced for curative value.

    3. Cause-effect relations have been observed, and even used clinically, without a physiological understanding of the operation. Lister’s efforts at hand hygiene, at the same time that germ theory was developing, come to mind. Lister found a clinical process that reduced infection rates, but he did not have much of a causal explanation, and his colleagues were quite skeptical since hand-washing did not fit with the prevailing wisdom.

    This makes me conclude that a perceived cause-effect relation (reiki, etc.) is worthy of being scientifically evaluated.

    For all of the woo out there, proper scientific process quickly indicates it is a dead-end.

    For reiki, homeopathy, etc., to carry on, they can only be sustained by discounting science: either “reiki” or “science” is wrong, and woo-meisters favor reiki, so they explain why “science” does not apply.

    This is if and only if a proper set of trials have been conducted.

    So: where did Descartes or Spinoza say we should not investigate the implausible?
    This adds up to the conclusion, for me, that it is reasonable to investigate even implausible claims.

  30. JPZ says:

    You’ve misread the intent and use of the Bradford-Hill criteria.

    http://en.wikipedia.org/wiki/Epidemiology

    notably

    [Hill himself said "None of my nine viewpoints can bring indisputable evidence for or against the cause-and-effect hypothesis and none can be required sine qua non"]

  31. JPZ “You’ll have to resort to cod if you keep pitching red herrings. Did you just jump from improving clinical trial data management to giving away patient identifying information to homeopaths”

    Yup, and selling personal medical information without permission to marketing companies and insurance companies as well. I’m very, very confused about how any of those companies would get any of that personal information from the clinical trials; websites and apps have privacy policies, most of which restrict selling personally identifying information to third parties. Usually users are tracked by anonymous IDs, not personal information.

    Is the assumption that the minute you start using data mining or marketing software you suddenly lose all ability to maintain privacy protocol? I don’t get it.

  32. Quackonomics says:

    Very Insightful as usual from SBM

  33. windriven says:

    FDA has withdrawn approval for Avastin to treat breast cancer.
