SDMS excel at storing and managing large quantities of information from a variety of sources
Laboratory information management systems (LIMS) are the mainstay of the modern laboratory. Their functionality has continued to expand, so that they now monitor most of the information related to sampling, testing and the reporting of results. At the more free-form end of the testing continuum, this role is filled by electronic laboratory notebooks (ELN). However, the modern laboratory generates many other types of information, and additional tools are needed to enhance the value of that information and make your operation as productive as possible. Complementing the functionality of a LIMS is an application known as a scientific data management system (SDMS), sometimes also referred to as an enterprise content management (ECM) system. Some in the industry may argue that, with the expanding functionality incorporated into current LIMS, ELN and SDMS products, these terms have become obsolete. As a generalization, however, it is still safe to say that a LIMS is primarily sample-oriented, while an SDMS is more event-oriented, e.g. the issuing of a report.
SDMS provide an excellent way to store and manage large quantities of information from a variety of sources, as well as to comply with regulations such as the FDA’s 21 CFR Part 11 requirements for electronic signatures and data security. SDMS can capture data in a variety of ways; one frequently used approach is to set up specialized print drivers that allow applications to ‘print’ directly into the SDMS repository. However, most also provide a variety of interfaces that allow users to push data directly into the system and applications to check out files for processing.
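To make the push-interface idea concrete, here is a minimal Python sketch of what such an interface ultimately has to do behind the scenes: file the incoming data and record descriptive metadata alongside it. The repository layout, function name and metadata fields here are illustrative assumptions, not any vendor’s actual API.

```python
import json
import shutil
from datetime import datetime, timezone
from pathlib import Path

def push_to_repository(source: Path, repo: Path, metadata: dict) -> Path:
    """Copy a data file into the repository and write a metadata sidecar
    recording what was captured and when (illustrative sketch only)."""
    repo.mkdir(parents=True, exist_ok=True)
    dest = repo / source.name
    shutil.copy2(source, dest)  # copy2 preserves the file's timestamps
    record = dict(metadata)
    record["captured_utc"] = datetime.now(timezone.utc).isoformat()
    sidecar = dest.parent / (dest.name + ".meta.json")
    sidecar.write_text(json.dumps(record, indent=2))
    return dest
```

A real SDMS would, of course, do this inside a controlled database rather than a plain directory, but the pairing of content with capture metadata is the essential pattern.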
The SDMS maintains copies of all versions of a document, so that you can go back to see what it consisted of at any point in time and who made what changes to it. Of course, the value of the SDMS is maximized when it can extract metadata from the stored files, so that intelligent queries can be issued against them.
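As an illustration of metadata-driven querying, the sketch below assumes instrument files that begin with simple `key: value` header lines (an invented format), extracts those headers into an index, and answers queries against it. The function names and fields are assumptions for the example only.

```python
from pathlib import Path

def extract_metadata(path: Path) -> dict:
    """Read 'key: value' header lines from the top of a data file,
    stopping at the first line that is not a header (illustrative format)."""
    meta = {"file": path.name}
    for line in path.read_text().splitlines():
        if ":" not in line:
            break
        key, _, value = line.partition(":")
        meta[key.strip().lower()] = value.strip()
    return meta

def query(index: list, **criteria) -> list:
    """Return the names of files whose metadata matches every criterion."""
    return [m["file"] for m in index
            if all(m.get(k) == v for k, v in criteria.items())]
```

With an index built this way, a question such as “which files came from this instrument?” becomes a one-line query instead of a manual hunt through subdirectories.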
In my experience, most organizations that have an SDMS do not realize its full potential and, as a result, sorely under-utilize it. In some cases, they also may be using their SDMS in an inappropriate fashion because they haven’t looked at it in the context of the big picture. Examples are easy to come by. Let’s take a look at two of the more glaring ones:
Consider all of the laboratory operations performed in Microsoft Excel and basic file storage. Excel comes to mind not because it’s an intrinsically bad program, but because it fails the data integrity test required for Good Automated Laboratory Practices (GALP). Basic reasons for failing include no user control (it can’t confirm who made an entry), no change control for macros (who changed them) and no audit trail support (what changes were made). Most true SDMS provide an audit trail overlay for Excel, as well as access controls, to bring it into compliance with current laboratory informatics regulations and guidelines.
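The essence of such an audit trail overlay is an append-only log of who changed what, when, from which value to which. A minimal sketch of that record-keeping, with invented field names and a plain file standing in for the SDMS’s controlled store:

```python
import getpass
import json
from datetime import datetime, timezone
from pathlib import Path

def log_change(audit_file: Path, cell: str, old, new, user: str = ""):
    """Append one audit-trail entry: who changed which cell, when,
    and from what value to what value (sketch only)."""
    entry = {
        "when_utc": datetime.now(timezone.utc).isoformat(),
        "who": user or getpass.getuser(),
        "cell": cell,
        "old": old,
        "new": new,
    }
    # Append-only: entries are never rewritten, only added.
    with audit_file.open("a") as f:
        f.write(json.dumps(entry) + "\n")
```

A production system would also protect the log itself from editing, but even this simple record answers the three GALP questions above: who, what and when.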
Similarly, many labs simply store the raw and processed data files from instruments in a simple subdirectory. There is no provision to prevent a file from being edited, or a completely different version from being substituted. For that matter, there is generally nothing preventing the files from being deleted either, whether accidentally or deliberately. Even when the files are imported into a LIMS with full change control, it is frequently done by dropping the file into a polled subdirectory, eliminating any guarantee of the integrity of the data and violating multiple regulatory requirements and good practice recommendations.
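A checksum manifest is one simple way to restore that integrity guarantee, and an SDMS does this kind of bookkeeping internally. The sketch below, with an assumed `manifest.json` format, records a SHA-256 hash for every file at capture time and later reports anything altered, added or removed.

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash a file in chunks so large instrument files fit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def write_manifest(directory: Path) -> Path:
    """Record a checksum for every data file at capture time."""
    manifest = {p.name: sha256_of(p) for p in sorted(directory.iterdir())
                if p.is_file() and p.name != "manifest.json"}
    out = directory / "manifest.json"
    out.write_text(json.dumps(manifest, indent=2))
    return out

def verify(directory: Path) -> list:
    """Return names of files altered, added or removed since capture."""
    recorded = json.loads((directory / "manifest.json").read_text())
    current = {p.name: sha256_of(p) for p in sorted(directory.iterdir())
               if p.is_file() and p.name != "manifest.json"}
    return sorted(n for n in set(recorded) | set(current)
                  if recorded.get(n) != current.get(n))
```

An import process that verified against such a manifest before accepting files from a polled subdirectory would at least detect the silent edits and substitutions described above.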
Taking a holistic approach
Most well-implemented SDMS can eliminate all of the above issues — if they are used! It sometimes seems as if an organization brings in an SDMS to solve a specific problem and is so focused on it that they fail to consider the other problems it can solve. The critical piece here is to take the time to step back from the project and look at it holistically.
Make it a point to question all of your assumptions about the project and all of the manipulations that take place in it. A major mistake that many, if not most, people make when implementing a system is to try to bend it to mimic their current paper system, no matter how badly that system might be working. This should be viewed as an opportunity to question all of the things you’re currently doing, along with why you are doing them, and to fully reengineer the system. It is by doing this that you are going to maximize the benefit of the entire overhaul.
As systems advance and it becomes easier to interface them, more organizations are likely to take advantage of the synergy of linking their LIMS and SDMS together. This trend will only be accelerated by the growing need to maintain readily retrievable information on testing and interpretation, whether for patent protection, potential litigation or regulatory demands. However, it will always be the responsibility of the given lab to ensure that it is used wisely and in the most productive fashion.
John Joyce is the LIMS manager for Virginia’s State Division of Consolidated Laboratory Services. He may be contacted at editor@ScientificComputing.com