Bench to bedside — and back to bench again

Mar 20, 2015

Moving the needle from intuitive medicine (the treatment of symptoms) to precision medicine (the treatment of causes) is the critical innovation that will make treating disease more effective and more affordable. To develop the technologies that will enable precision medicine, we need innovations on both the basic and applied research fronts. While we're making unprecedented strides in molecular biology and basic science, translating these ideas into targeted therapeutics has largely underperformed expectations. We need to find new ways to narrow this gap.

“Bench-to-bedside” research (B2B) is the process of translating basic science discoveries into clinical applications. The B2B model is linear: it begins with observations in the lab or clinic and ends with success or failure in clinical trials. This development methodology coalesced in an era of empirical observation, when almost all diseases were understood only through the symptoms they produced in a population. This holds truest for infectious diseases, where the same invasive microbe attacks each host via the same mechanism, producing similar symptoms. It is therefore physiologically possible to develop a treatment, like an antibiotic, that works for virtually everyone with the same symptoms.

But for many other diseases, this approach runs into problems rooted in human physiology. Scientists have known for centuries that what we commonly label a disease is more likely a syndrome: a collection of diseases with similar observable symptoms but different root causes. This is why, during phase II or III clinical trials, most participants in a study will not show a significant response to the new therapeutic. In fact, a 30% success rate versus placebo can often be deemed sufficient. When the drug reaches the market, a similar (or lower) efficacy is observed in the disease's general patient population. This means that, for any given therapeutic prescribed based on symptoms, there is a good chance the patient is paying top dollar for a drug that won't really work.

For any given clinical trial, deviations from expected outcomes are treated as statistical noise rather than as separate disease pathways, often rooted in genetic or epigenetic variation. In today's B2B model, these anomalies amount to a footnote in the FDA-mandated risks declaration, included for legal protection. That footnote does little more than offload medical risk onto the patient, and it does nothing to help the drug company develop a better drug. This is the core problem with the B2B research model. We need fast, low-cost ways to plug what doesn't work – poor clinical results – back into the research supply chain. This would help refine disease categorizations, guide targeted therapies, and potentially lead to higher success rates for new drugs.
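To make the idea concrete, here is a minimal sketch of what "plugging deviations back in" could look like computationally: instead of averaging all trial outcomes into one efficacy number, responses are stratified by each participant's genetic variant. The variant names and response scores below are entirely hypothetical, invented for illustration.

```python
# Hypothetical sketch: stratify trial outcomes by genetic variant
# instead of treating subgroup differences as statistical noise.
from statistics import mean

def response_by_variant(outcomes):
    """Group response scores (0.0-1.0) by each participant's variant
    and return the mean response per variant group."""
    groups = {}
    for variant, response in outcomes:
        groups.setdefault(variant, []).append(response)
    return {v: round(mean(r), 2) for v, r in groups.items()}

# Invented data: pooled together, this looks like a mediocre ~45% mean
# response, but stratifying reveals two distinct subpopulations.
outcomes = [
    ("variant-A", 0.9), ("variant-A", 0.8),
    ("variant-B", 0.1), ("variant-B", 0.0),
    ("variant-B", 0.05), ("variant-A", 0.85),
]
print(response_by_variant(outcomes))
```

Stratified this way, variant-A carriers emerge as strong responders and variant-B carriers as non-responders: exactly the kind of signal that, fed back to the bench, could refine a disease categorization rather than disappear into a risks footnote.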

Unfortunately, the way clinical trials and post-clinical evaluation are currently organized makes such iterative processes difficult to implement. A more effective approach would be a “bench-to-bedside-to-bench” (B2B2B) industry-scale model that facilitates iterative information flow between currently siloed system players, maximizing the impact of each player's lessons learned for the entire system.

The Institute of Medicine recently released a report calling for more transparent sharing of clinical trials data. Although this will only be truly effective within an integrated system architecture, we see it as an early movement toward the B2B2B model. Still, even if a concerted effort were made to get system players to adhere to such a strategy, achieving this on a large scale will be a monumental task. Much of the difficulty is rooted in the technical and informational challenges that a new system architecture introduces.

The technical challenges are, once again, rooted in the complexity of human diseases. For example, cystic fibrosis (CF) is a genetic disease in which a single mutation in the CFTR gene, ΔF508, accounts for two-thirds of CF cases worldwide – but about 1,500 other CFTR mutations can cause the disease as well. As our scientific knowledge of the human body deepens, untangling these complex pathways may pose too great a logistical challenge for the human mind, or even for collective human minds. We need a technological enabler that helps us spot disease motifs within the coming onslaught of genomic and biomarker data.

Modern computing could be the answer. As disruptive innovation theory would suggest, computing could provide a cost-effective solution for simple but costly tasks that are too time-consuming for humans.

The Washington Post recently reported that Mayo Clinic is using Watson – a question-answering (QA) supercomputer developed by IBM – to match patients with appropriate clinical trials. Initially, Watson will be tasked with a complex logistical problem that people don't want to tackle: how to pair people across the globe who share similar symptoms with drugs that might save their lives.

We expect that if Watson or similar supercomputers are successful, they will assume even more complex roles in the clinical trials process. As genomic sequencing gets cheaper, it is feasible that the genetic, epigenetic, proteomic and metabolic profiles of patients with various diseases will be determined and then cross-referenced using supercomputing capabilities. In fact, Watson is already being used at Memorial Sloan-Kettering Cancer Center for utilization management and even patient treatment decisions. Applying a similar approach could lead to a fundamental shift in both how clinical trials are designed and how their results are used.
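At its simplest, this kind of cross-referencing is a matching problem. The sketch below is not Watson's actual algorithm – it is a toy illustration, with entirely hypothetical patient IDs, trial names, and mutation labels – of pairing patients with trials whose target mutation appears in their genomic profile.

```python
# Illustrative sketch (not Watson's real method): match patients to
# clinical trials by cross-referencing genetic profiles.
# All IDs, trial names, and mutations below are hypothetical.

def match_patients_to_trials(patients, trials):
    """For each patient, list trials whose target mutation
    appears in that patient's set of mutations."""
    matches = {}
    for patient in patients:
        eligible = [
            trial["name"]
            for trial in trials
            if trial["target_mutation"] in patient["mutations"]
        ]
        matches[patient["id"]] = eligible
    return matches

# Hypothetical data: two patients, two mutation-targeted trials.
patients = [
    {"id": "P001", "mutations": {"CFTR:F508del"}},
    {"id": "P002", "mutations": {"CFTR:G551D", "CFTR:F508del"}},
]
trials = [
    {"name": "Trial-A", "target_mutation": "CFTR:F508del"},
    {"name": "Trial-B", "target_mutation": "CFTR:G551D"},
]

print(match_patients_to_trials(patients, trials))
```

The real task is vastly harder – fuzzy eligibility criteria, multi-omic profiles, millions of records – which is precisely why it calls for supercomputing rather than a simple lookup like this one.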

Clinical computing is already showing early promise. As “intelligent” Watson-like supercomputers enable pattern recognition across genetic data from very large patient subpopulations, they will be able to predict the effects of genetic and epigenetic variants on an unprecedented level, accurately interpret the true outputs of clinical trials, and make failure useful again at the “bench” level. Clinical trials may then become an integral part of the iterative research process and a key catalyst in shifting our understanding of many diseases from the intuitive to the precise.