With the emergence of new technology, the time and cost involved in dissecting reams of data have been significantly reduced.
Drug researchers, however, still face a significant challenge in fully analyzing the data available to them. Without a real understanding of what the data means, precision medicine will struggle to reach its full potential.
A good example of how studies can fail to deliver when experimental findings cannot be translated into useful data is the world's largest 'human sequencing operation.' In an exploratory study titled "Clinical Interpretation and Implications of Whole-Genome Sequencing," 12 adults from Stanford University Medical Center had their entire genomes sequenced, with the aim of detecting clinically meaningful genetic variations. The results showed incomplete coverage of inherited disease genes, low reproducibility of detection of clinically relevant genes, and disagreement among experts about which findings were most significant. For the most part, despite the scale of the project, the findings were not actionable.
In 2015, precision medicine gained a renewed focus after President Barack Obama proposed a $215 million investment "to broadly support research, development, and innovation." His Precision Medicine Initiative highlighted how successful research could revolutionize the entire health care system and drastically change disease outcomes. But what does this mean in practice, and how can the increased collection of data lead to improved and more effective life sciences R&D?
All the gear, but still no idea
According to recent research from Forrester, the big data market is set to grow by 13% over the next five years. The continuous growth of big data has allowed current and emerging technologies to provide complex data specific to each individual. This technology, however, far outpaces our ability to mine the data effectively, draw conclusions, and develop clinically relevant products. To achieve this, scientists must be able to search and analyze various sources of evidence, including experimental, clinical, and published data, to find out what has already been discovered.
Scientists must also process multiple types of datasets and use sophisticated entity recognition and pattern matching software to identify meaningful associations between targets and molecules. The difficulty is not the data alone (an estimated 85 percent of medical data is unstructured, yet still clinically relevant) but knowing what the data means and how it can be applied to different research projects.
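To make the idea concrete, here is a minimal sketch of dictionary-based entity recognition over unstructured text. The gene and drug names, and the sample abstract, are invented for illustration; production systems use trained NER models and curated ontologies rather than hand-written lists and regular expressions.

```python
import re
from itertools import product

# Toy dictionaries; real pipelines draw on curated ontologies.
GENES = {"EGFR", "BRCA1", "KRAS"}
DRUGS = {"gefitinib", "olaparib"}

def find_associations(text):
    """Return (gene, drug) pairs co-mentioned in the same sentence."""
    pairs = set()
    # Naive sentence split; production systems use trained tokenizers.
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        genes = {g for g in GENES if re.search(rf"\b{g}\b", sentence)}
        drugs = {d for d in DRUGS if re.search(rf"\b{d}\b", sentence, re.I)}
        pairs.update(product(genes, drugs))
    return pairs

# Hypothetical abstract text, purely for demonstration.
abstract = ("Gefitinib improved survival in EGFR-mutant tumours. "
            "BRCA1 status was not assessed.")
print(find_associations(abstract))  # {('EGFR', 'gefitinib')}
```

Even this toy version shows why the hard part is interpretation rather than matching: the co-mention it finds says nothing about whether the association is causal, clinically relevant, or already known.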
Lifting boundaries
Precision medicine R&D can only progress with specific tools that can handle the complex challenges precision medicine brings. For biopharmaceutical products to reliably deliver, researchers need a better understanding not only of genomics, but also of molecular pathways, proteomics and the impact of epigenetics (i.e., changes in gene activity driven by environmental factors rather than by DNA sequence variants alone) on disease susceptibility, development and progression. To overcome issues with data quality, tools are required to decipher this information, as well as to identify the factors that contribute to treatment resistance.
A lack of consistent standards can also hold precision medicine back. When such standards are in place, clinicians are likely to have more confidence in their findings and be able to validate their predictions. This can be achieved with computational modeling and simulation techniques, provided the algorithms are reliable and reproducible. In the long run, ensuring consistent standards will save labs both time and money.
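One small but concrete ingredient of reproducible modeling is controlling randomness. The sketch below is a hypothetical stochastic simulation (the function name, effect size, and seed are invented for illustration): fixing the random seed means a second lab re-running the same protocol gets the identical result, which is the kind of reproducibility the standards above are meant to guarantee.

```python
import random

def simulate_response(n_patients, effect=0.3, seed=42):
    """Toy simulation of a treatment response rate.

    An explicit, recorded seed makes the stochastic run repeatable,
    so independent re-runs of the same protocol agree exactly.
    """
    rng = random.Random(seed)  # local generator; no hidden global state
    responders = sum(rng.random() < effect for _ in range(n_patients))
    return responders / n_patients

# Two independent runs with the same seed produce the same number.
print(simulate_response(1000) == simulate_response(1000))  # True
```

Using a local `random.Random` instance rather than the module-level generator keeps the simulation isolated from any other code that touches the global random state.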
The industry is currently trying to establish uniform analytical protocols that will enable R&D to progress more effectively from data generation to analysis. The standardization of data will allow scientists to work with, and combine information from, different data sets with ease, providing more accurate answers to R&D questions. Collaboration will also be far more fruitful through simpler information exchange and utilization.
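A minimal sketch of what such standardization enables in practice: two hypothetical labs report the same measurement under different field names, and a shared schema map lets their records be combined. All names here (`lab_a`, `expr_fpkm`, and so on) are invented for illustration, not an actual standard.

```python
# Map each source's field names onto one shared schema.
SCHEMA_MAP = {
    "lab_a": {"gene_symbol": "gene", "expr_fpkm": "expression"},
    "lab_b": {"Gene": "gene", "Expression": "expression"},
}

def standardize(record, source):
    """Rename source-specific fields to the shared schema."""
    mapping = SCHEMA_MAP[source]
    return {mapping[k]: v for k, v in record.items() if k in mapping}

# After standardization, records from both labs can be pooled directly.
merged = [
    standardize({"gene_symbol": "TP53", "expr_fpkm": 8.1}, "lab_a"),
    standardize({"Gene": "TP53", "Expression": 7.9}, "lab_b"),
]
print(merged)  # both records now share the keys 'gene' and 'expression'
```

Real standardization efforts also have to reconcile units, assay conditions, and provenance, which is far harder than renaming fields, but the principle is the same: agree on a schema once, and every downstream analysis can treat the pooled data uniformly.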
Precision medicine and cancer
The study of cancer is often the first field associated with precision medicine, which is often described as a potential "cure" for the disease. However, a one-size-fits-all approach to treating cancer rarely has a good clinical outcome. This is because there are more than 200 types of cancer; treatment depends on the type of cancer and how the patient responds to therapy. Given the difficulty of treating cancer, then-Vice President Joe Biden launched the Cancer Moonshot to unleash new breakthroughs, improve care, increase access to treatment and, most importantly, help find a cure for cancer.
Cancer medicine is applying precision therapies using a wealth of data from both new and old studies. By combining disease and patient-specific data, clinicians can make more targeted clinical decisions that improve outcomes and cancer patient care. A potential cure for a particular cancer will most likely occur by utilizing the correct drug combinations that fit the molecular/genomic profile of an individual patient.
Mastering the microbiome
Alongside the push toward precision medicine, there is growing interest in the study of microbiomes and their role in disease. The study of microbiomes is complex and will likewise demand specialist tools and standardized data to make sense of the field. Microbiomes have the potential to put big data modeling to the test, but with this complexity comes the possibility of a large margin of error. While there is already a lot of published data on microbiomes, some of it may not be useful at all, depending on the context and the organism used in a study. It is already very clear that human microbiomes are incredibly complex and variable. Regulatory paradigms will also need to change as the field widens, with complex challenges to overcome, such as how to regulate donor fecal microbiota transplantation for Clostridium difficile infection.
With the sheer number of microbes in a single human, and the complexity that brings, it's crucial for this field to be explored through more comprehensive studies. Microbiomes could ultimately show more promise than existing therapeutics that try to target changes in human cells. Filtering out the 'noise' and learning which changes have the best chance of therapeutic success will be essential in making microbiome analysis usable in R&D.
Microbiome manipulation can certainly be seen as a disruptive trend. What we may find, for example, is that previous therapies were less effective in some patients because of microbiome variability. We will also see the emergence of many new therapeutic opportunities, even for rare diseases, as a result of microbiome analysis. For now, it's crucial that we continue to work both on microbiomes and on unraveling the human genome.
The potential of understanding microbiomes in the precision treatment of complex diseases is huge, but ultimately, to bring this promise to fruition we also need to know more about the epigenetics of the human genome at the same time. Both microbiome research and epigenetics add a layer of complexity to the overall disease management and drug development process which cannot be ignored.
There is no denying that the challenges involved with precision medicine are complex. Despite this, precision medicine is the future, and it will inevitably become the expected method of treatment for many diseases. One of the main obstacles holding precision medicine back is institutional caution about embracing new technologies. This, however, is likely to change as the technology becomes more readily available, standardized and proven reliable. Issues with regulation and compliance exist but are being addressed. Momentum from the Obama and Biden initiatives has ensured that precision medicine remains one of the most sought-after quests in the life sciences today.