One of the hottest topics in laboratory informatics discussions today is the externalization of scientific research and development. Organizations across many industries (petrochemicals, food and beverage, fine chemicals, pharma/biotech, and more) have increasingly outsourced a variety of R&D activities along the business value chain. In recent years, outsourcing of R&D has grown especially rapidly in the pharma/biotech industry. Extending existing informatics systems [1] into an externalized world they were never designed to address presents a host of new problems. Stringent regulations in development and the pressures of early discovery have, in turn, brought focus to the lack of systems in place to adequately handle data generated and shared by partner organizations.
The execution of biological and chemical research outside an organization’s walls adds a new layer of complexity to data management. While globalization (R&D labs scattered across the world and teams collaborating across countries and time zones) makes data management challenging, leadership can still drive better management of this in-house data. The issues related to data generated by partners and contract research organizations (CROs) dispersed around the world, however, are not so readily addressed. Consider, for example, the impact of externalization specifically on analytical data management, a notorious source of disparate information.
Analytical data is frequently the ‘proof of identity’ for a sample, and the risk of losing this valuable information in the transfer of materials between contractor and client grows with outsourcing. To complicate matters further, analyses often span a multitude of techniques (including LC/MS, NMR, IR, Raman, thermal analysis, etc.), carried out on instruments from assorted vendors, each with its own proprietary data format.
Since outsourcing of analytical chemistry is not new, methods for sharing data have evolved over time. Problematically, these methods are non-standardized and range from the relatively rare sharing of huge raw data files by CD, Microsoft SharePoint, or FTP server, to the more common practice of communicating results via PDFs, documents or spreadsheets.
The advantage of sharing raw data is that all of the information is passed on to the client. The reality, however, is that scientists may not need every piece of analytical data generated by a contractor, and storing that data in non-standardized form, without the context of why the experiment was run, can make its retention pointless. Sharing results via PDF or similar documents means that viewing applications are nearly universal, but the significant loss of detail and of interactivity with the underlying data and information has led ACD/Labs to coin a term for it: ‘DEAD data.’
‘DEAD data’ has the scientifically rich information stripped away, reduced to text strings and static images. As a result, ‘DEAD’ data is difficult to search, and impossible to re-process, re-analyze or compare with newly acquired ‘LIVE’ data sets. Unable to interrogate the data, scientists are at a crippling disadvantage when trying to re-use it or make decisions based on it.
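To make the distinction concrete, the following minimal sketch (hypothetical class and field names, not ACD/Labs software) contrasts what is retained in a ‘LIVE’ analytical record, which can still be re-processed and compared, with what typically survives a PDF-style hand-off.

```python
# Illustrative sketch only: contrasting a re-processable ("LIVE") analytical
# record with a static ("DEAD") export. All names here are hypothetical.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class LiveSpectrum:
    """Re-processable record: raw signal plus the context of acquisition."""
    sample_id: str
    technique: str                      # e.g. "LC/MS", "NMR", "IR"
    instrument_vendor: str
    acquisition_params: Dict[str, str]  # solvent, gradient, pulse sequence, ...
    x: List[float]                      # retention time / chemical shift / wavenumber
    y: List[float]                      # intensity values

    def peaks_above(self, threshold: float) -> List[float]:
        # Because the raw trace is retained, peak picking can be re-run later
        # with different parameters, or compared against newly acquired data.
        return [xi for xi, yi in zip(self.x, self.y) if yi >= threshold]


@dataclass
class DeadResult:
    """What typically survives a PDF or spreadsheet hand-off."""
    sample_id: str
    summary_text: str       # e.g. "Purity 98.2% by HPLC"
    static_image: str = ""  # a picture of the spectrum; cannot be re-processed
    # No raw trace, no acquisition context: searching, re-integration and
    # comparison with new data sets are no longer possible.
```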
The current collaboration model is risky at best and, clearly, the systems in place for analytical data sharing today are partly or wholly inadequate [2]. Any time an organization undertakes activities in support of the research, development and manufacturing of a pharmaceutical drug, extra precautions must be in place to ensure those processes are carried out correctly and are backed by sound scientific decisions. As a director from a major pharmaceutical company recently attested, “at the end of the day, we need to ensure that what we are proposing to the market is indeed what we tested in the clinic. We need to also understand, monitor and control things like metabolites and impurities that are associated with the API.”
Analytical data ties these elements together and sits behind the scenes of these decisions. It is crucial, therefore, that organizations find an effective way to manage it. When this work is undertaken in the external network, there must be adequate systems for accessing the data even after the partnership comes to an end (Figure 1).
The changing landscape requires organizations to tackle emerging issues head-on from a technology perspective. ACD/Labs’ perspective is to pose the fundamental question: which of your data needs to be live, and which can be dead? Some live data and associated knowledge are surely essential. In some cases, however, not all data needs to be live: not all of it will be re-purposed in a useful way, data mined, or re-examined, so organizations must be wary of creating a data dumpster. Two questions help frame the decision:
- Can your organization automatically convert any vital analytical data to knowledge, managing and storing it so that it can be effectively accessed by scientists and other corporate decision makers?
- Which data do you need to have access to at a moment’s notice to drive key scientific and business decisions that help keep your organization educated, sustainable and innovative into the next several decades?
The key is to perform an internal audit, understand which types of data are truly crucial to the organization’s data-driven strategies, and follow through by keeping that knowledge as live data. Furthermore, investigations into new data-handling systems must take into account the heterogeneous informatics landscape that IT teams are already struggling to manage and administer.
Every major R&D organization has, or plans to have, informatics systems in place to manage parts of its data (commonly an ELN, LIMS, registry or archive), but none of these can handle the wide array of analytical data generated both internally and externally in a live environment. Unification of analytical data within an organization must come with an understanding that software solutions should integrate with the existing informatics landscape, adapt as it changes, and offer fingertip access via mobile and Web browser-friendly technology.
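As a rough illustration of that integration point, the sketch below (hypothetical names only, not a vendor API) shows how a unified analytical data store might carry links back to existing ELN and LIMS identifiers, so that internal and partner-generated records remain reachable from the systems scientists already use.

```python
# Illustrative sketch only: a unified analytical record that references
# existing ELN/LIMS identifiers. Names are hypothetical assumptions.
from dataclasses import dataclass
from typing import Dict, List, Optional


@dataclass
class AnalyticalRecord:
    record_id: str
    sample_id: str
    eln_experiment_id: Optional[str]  # link back to the ELN entry
    lims_sample_id: Optional[str]     # link back to the LIMS sample
    technique: str
    source: str                       # "internal" or the partner/CRO name


class UnifiedStore:
    """Minimal in-memory stand-in for a unified analytical data repository."""

    def __init__(self) -> None:
        self._by_sample: Dict[str, List[AnalyticalRecord]] = {}

    def add(self, record: AnalyticalRecord) -> None:
        self._by_sample.setdefault(record.sample_id, []).append(record)

    def records_for_sample(self, sample_id: str) -> List[AnalyticalRecord]:
        # A browser- or mobile-facing service could expose this lookup so that
        # internal and externally generated data are retrieved the same way.
        return self._by_sample.get(sample_id, [])
```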
1. Elliott, Michael H. (October 2011). Informatics Convergence Presents Opportunities and Challenges. Scientific Computing. Retrieved from http://www.scientificcomputing.com/articles/2011/11/informatics-convergence-presents-opportunities-and-challenges?cmpid=horizontalcontent
2. Shanler, Michael. (October 2013). Convergence of Lab Informatics Creates Opportunities for Manufacturer Innovation. Gartner. Retrieved from https://www.gartner.com/doc/2603916/convergence-lab-informatics-creates-opportunities?
Sanji Bhal is Manager of Marketing Communications at ACD/Labs.