Instrument Interfacing: Some Observations and Reflections
Interfacing your laboratory’s instruments with your data systems is a multilayered problem
One of the topics that seemed to bubble up at Pittcon this year was the interfacing of laboratory instrumentation with laboratory informatics systems, be they laboratory information management systems (LIMS), clinical laboratory information systems (LIS) or electronic laboratory notebooks (ELN). While this upsurge of interest may be new, the topic itself is not: if you are working in laboratory informatics and have never been dragged into a discussion of interfacing your laboratory instruments with your data system, then it's probably safe to say that you're not really working in laboratory informatics. But don't worry, this discussion has been going on since before the term LIMS was coined, and it will likely be going on for some time.
It's not a hard game to play, since arguments for and against interfacing are extremely easy to come up with. The trick is determining which arguments are valid, along with their relative merit, and which are just hand waving to support whatever it is that the proponent wants to do, even if that something is nothing.1 Fortunately, many of these arguments are amenable to a cost-benefit analysis, which should make all of the bean counters happy; you just need to make sure you are including all of the costs and benefits in your analysis.
I’ve seen way too many automation projects implemented in a sub-section of an organization and declared a resounding success because they resulted in improved efficiency and reduced cost in that section. Unfortunately, this improvement was not due to an overall efficiency gain, but from simply restructuring the process to push some of their work out to the other sub-sections, increasing someone else’s work load. If you factored in these other changes, the overall efficiency either stayed the same or actually decreased. The point being, make sure your analysis includes all impacts of the changes. What might look like an increased up-front cost may actually quickly pay for itself. Also, keep in mind that all costs and benefits are not necessarily directly financial.
Interfacing your laboratory's instruments with your data systems is one such multilayered problem. In many analyses, you will see this argument put forward in the name of productivity. By having the analytical results directly transferred into the data system, you do gain a number of benefits. Among these are:
• Reduced manpower requirements, as you do not have to have people retyping analytical results into the data system, sometimes multiple times
• Faster sample turnaround, as results appear more rapidly in the data system and are available for reporting, since they do not need to be manually extracted from one system and typed into another
This might then be balanced against some of the obvious costs. Typical costs identified include:
• Cost of the personnel involved in interfacing the instruments
• Cost of additional hardware and/or software
• Sample load affected, e.g. are you looking at analyzing a few samples a year or thousands of samples per day? Put another way, over how many samples are you amortizing the interfacing cost?
Depending on the length of the period you specify for comparing the costs and benefits, it can be pretty easy to come to a conclusion that you really don’t get that big of a benefit from interfacing your instruments and data system. However, what I’ve described so far leaves out some of the biggest issues regarding laboratory operations.
While some labs may be working on such a thin margin that it is hard to step back and take a bigger view, I suspect that, if pressed, most labs would say that their biggest concern was the quality of their data and the analytical results they were reporting to their customers, whether those customers were internal or external to the company. After all, if the results you report are not reliable, it really doesn't matter how many samples you analyze. While I feel that this concern should extend to all samples, those of you working in regulated environments will definitely appreciate what happens if your analysts end up reporting incorrect results for their proficiency samples. Failing these can quickly result in a company having to close its doors; sending unreliable data out to customers can have the same effect, it may just take a little longer.
I’m sure that all of my quality assurance readers out there already suspect where I’m heading with this, but it’s really not all that arcane. Numerous scientific studies have shown that, when you place a manual operation, such as transcribing results from one system into another, into a process, the most likely failure point is that manual operation. This is not due to carelessness; it is a simple outcome of how the human brain works. Studies have shown that, when reviewing information, whether it is looking through a list of names or a table of numbers, there is a limit to how quickly the brain can process that information, resulting in what is known as an “attentional blink” as our focus of attention shifts.2
Studies looking at transcription errors find that it is not uncommon for a 3 percent error rate to exist in transcribed data, even after review, with the error rate occasionally being much higher.3,4 Some of this variation may result from differences in the type of data with which you are dealing; some may result from variations in the data review rate you are trying to maintain. From personal experience in correcting erroneous data, I can confirm that, even when you have multiple people reviewing the data entered, errors will creep through.
For highly critical information, such as clinical test results, attempts are sometimes made to reduce this error rate by requiring dual data entry. For those unfamiliar with this process, you basically have two people entering the same results into the data system. If the values don't match, the system flags them for review and correction. However, despite the additional manpower and time that this approach requires, because of the way the brain processes information, it does not eliminate the possibility of data entry errors; it simply reduces the rate of errors that slip through.
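The dual data entry check described above can be sketched in a few lines. This is an illustrative model only, not any particular LIS's implementation; the sample IDs and values are invented for the example.

```python
# Hypothetical sketch of dual data entry: two analysts independently key
# the same results, and any sample whose values disagree is flagged for
# review and correction rather than being silently accepted.

def flag_mismatches(entry_a, entry_b):
    """Compare two independently keyed result sets, keyed by sample ID.

    Returns the sorted list of sample IDs whose values disagree and
    therefore need review before the data can be released.
    """
    return sorted(
        sample_id
        for sample_id in entry_a.keys() & entry_b.keys()
        if entry_a[sample_id] != entry_b[sample_id]
    )

analyst_1 = {"S-001": 7.4, "S-002": 12.1, "S-003": 0.98}
analyst_2 = {"S-001": 7.4, "S-002": 12.7, "S-003": 0.98}  # typo in S-002

print(flag_mismatches(analyst_1, analyst_2))  # ['S-002']
```

Note the limitation the text describes: if both analysts happen to make the same keying error, the values match and the error slips through, which is why dual entry reduces, but cannot eliminate, transcription errors.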
The closest we can come to eliminating transcription errors is to avoid the transcription process altogether, specifically by interfacing the instruments with the data system to automate the whole process. Yes, doing this can result in potential errors due to bugs in the program or, even more rarely, due to noise introduced into the automated system, whether through cosmic rays, cross-talk in the cabling, or simply RF noise from poorly designed equipment. However, a solid validation of the system (You DO perform thorough system validations, carefully following recommended procedures, don't you?) will reduce these risks to far below those of transcription errors.
Oh, and there are other good reasons for interfacing your instruments as well. How critical these other reasons seem to you might depend on what it is you analyze and for what purpose.5 Most of these are covered under regulations and guidelines for Good Automated Laboratory Practices (GALP) and Good Automated Manufacturing Practices (GAMP), so you really should be following these practices anyway. However, for those needing more incentive and those who happen to be operating in one of the regulated industries, say pharmaceuticals, there are a variety of government regulations you must follow as well.
In the United States, one regulation that numerous companies have fallen afoul of has been the FDA’s 21 CFR Part 11 regulations on electronic signatures and data security, formally titled Electronic Records; Electronic Signatures; Final Rule. Given how few pages of actual regulations there are in this document, it is amazing the amount of discussion and effort that has gone into meeting them. What it boils down to in terms of what we have been discussing here is that systems must be designed in such a way to preserve the integrity of the data generated, and that the system must be able to identify all changes to the data and who made them.
Unfortunately, many systems fail miserably at this point. A surprising number of people and companies, including LIMS, LIS and SDMS providers, appear to think that they can interface an instrument with a system in any way they want as part of their attempt to meet this requirement. One of the most common approaches is having the instrument dump a text file, frequently in some type of comma-delimited format, into a common directory that the data management system polls periodically, importing whatever it finds. However, the way most systems are constructed, as soon as you drop that file into the directory, you've lost control of it and can no longer certify that the data imported into the polling system is the same as the data that was pushed to the directory. The fact that the file is commonly simple ASCII, which you can edit with any text editor, makes falsification of the data even easier.
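To make the gap concrete, here is a minimal sketch of one way a drop-directory transfer could at least detect alteration: the instrument side writes a SHA-256 digest alongside the data file, and the importer verifies the digest before accepting the file. This is an illustration of the principle, not any vendor's actual mechanism, and the file names are invented for the example.

```python
# Hypothetical sketch: pairing each dropped data file with a SHA-256
# digest so the importing system can detect whether the file was
# altered between the instrument writing it and the import reading it.
import hashlib
from pathlib import Path

def write_with_digest(path: Path, data: bytes) -> None:
    """Instrument side: write the data file plus a sidecar digest file."""
    path.write_bytes(data)
    digest = hashlib.sha256(data).hexdigest()
    path.with_suffix(path.suffix + ".sha256").write_text(digest)

def verify_before_import(path: Path) -> bool:
    """Importer side: recompute the digest and compare before importing."""
    expected = path.with_suffix(path.suffix + ".sha256").read_text().strip()
    actual = hashlib.sha256(path.read_bytes()).hexdigest()
    return actual == expected
```

Of course, anyone who can edit the ASCII data file can usually rewrite the sidecar digest as well, which is exactly why a passive drop directory can never fully substitute for the active handshake discussed next.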
In a properly designed system, there is an active handshake at every step to reduce the risk of data loss or corruption, whether accidental or deliberate. There is admittedly some leeway in this, as many companies still have older-generation equipment that does not directly support a handshake. In some cases, you actually have to capture data being directed to a printer port to allow the integration. Allowances are generally made for this, as long as you have SOPs in place that are designed to minimize the risk, and you can document that you are following them!
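The active handshake idea can be illustrated with a small in-memory model: the receiver acknowledges each record with a checksum of what it actually heard, and the sender only considers the transfer complete when that acknowledgment matches, retrying otherwise. This is a toy sketch of the principle, not a real instrument protocol; the channel, record format, and retry count are all invented for the example.

```python
# Illustrative sketch of an active handshake: the receiver echoes back a
# checksum of the record it received, and the sender commits the transfer
# only when that checksum matches what was sent, retrying otherwise.
import hashlib

def checksum(record: str) -> str:
    return hashlib.sha256(record.encode()).hexdigest()

class Receiver:
    """Holds a record as pending until the sender confirms the acknowledgment."""
    def __init__(self):
        self.pending = None
        self.accepted = []
    def receive(self, record: str) -> str:
        self.pending = record
        return checksum(record)  # acknowledge what was actually heard
    def commit(self) -> None:
        self.accepted.append(self.pending)
        self.pending = None

def send_with_handshake(receiver, record, channel, max_retries=3):
    """Send a record through a (possibly lossy) channel, retrying until
    the receiver's acknowledgment proves it holds an identical copy."""
    for _ in range(max_retries):
        ack = receiver.receive(channel(record))
        if ack == checksum(record):
            receiver.commit()
            return True
    return False
```

Contrast this with the drop-directory approach: here the sender learns, record by record, whether the receiver actually got an identical copy, so corruption is caught at transfer time rather than discovered (or not) later.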
Based on the above, I think you can see why there is an upsurge of interest in interfacing instruments and the umbrella laboratory informatics systems. If you haven’t been thinking about it or — even better — doing it, perhaps it’s time that you did!
John Joyce is the LIMS manager for Virginia’s State Division of Consolidated Laboratory Services. He may be contacted at editor@ScientificComputing.com.
1 Robert Pavlis, “Top 5 Myths about LIMS Interfacing,” Scientific Computing 21, no. 6 (2004): 18-19, http://www.scientificcomputing.com/top-5-myths-about-lims-interfacing.aspx.
2 Tom Stafford and Matt Webb, Mind Hacks: Tips & Tools for Using Your Brain, First Edition. (O’Reilly Media, Inc., 2004), 129-134, http://www.mindhacks.com.
3 Mounira Khoury, Leslie Burnett, and Mark A. Mackay, “Error rates in Australian chemical pathology laboratories,” The Medical Journal of Australia, no. 165 (1996): 128-130.
4 R. Black, P. Woolman, and J. Kinsella, “Variation in the transcription of laboratory data in an intensive care unit,” Anaesthesia 59, no. 8 (2004): 767-769.
5 Siri H. Segalstad, International IT Regulations and Compliance: Quality Standards in the Pharmaceutical and Regulated Industries, First Edition. (West Sussex, England: John Wiley and Sons, Ltd, 2008), http://wiley.com.