
Chromatography Insights within TetraScience’s Data & AI Workspace: a cloud dashboard that aggregates CDS outputs and flags performance drift. [Credit: TetraScience slide deck, 2025.]
Anthony (Tony) Edge, Ph.D., who has held a variety of leadership roles at the UK Chromatographic Society (he is currently its president), and George Van Den Driessche, Ph.D., a scientific-data specialist at the firm, have both seen these bottlenecks firsthand in and around pharma labs. At TetraScience, their approach involves extracting raw chromatograms from every major CDS and mapping them to a single, vendor-agnostic schema. Internal pilot data shared with R&D World indicates that these unified workflows reduced out-of-spec (OOS) deviations by as much as 75% and cut SOP violations by roughly 80% by automatically flagging repeat injections and excessive manual processing.

Anthony (Tony) Edge, Ph.D.
Such gains are possible because the core problem lies in the very architecture of traditional data systems: each vendor's CDS speaks its own dialect. Tackling that Tower of Babel is the first step.
Van Den Driessche explained that chromatography data systems store data in proprietary formats, so the data effectively cannot be extracted from the system. Because of this, he noted, scientists often have to curate data manually in separate tools such as an ELN or Excel; even where point-to-point connectors exist, each must be managed individually and breaks easily. The goal, Van Den Driessche emphasized, is to free these raw traces and reshape them into what he describes as a “highly engineered data table”: an analytics-optimized, “AI-ready data set.”
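TetraScience has not published that schema, but a minimal sketch gives a sense of what a vendor-agnostic chromatogram record might look like. Every field name below is an illustrative assumption, not the company's actual format:

```python
from dataclasses import dataclass, field

# Illustrative sketch only: field names are assumptions, not TetraScience's schema.
@dataclass
class Peak:
    analyte: str
    retention_time_min: float
    area: float
    tailing_factor: float

@dataclass
class Injection:
    cds_vendor: str            # e.g., "Empower" or "OpenLab"
    instrument_id: str
    column_serial: str
    injected_at: str           # ISO 8601 timestamp
    peaks: list[Peak] = field(default_factory=list)

# One normalized record, regardless of which CDS produced the raw file
inj = Injection(
    cds_vendor="Empower",
    instrument_id="HPLC-07",
    column_serial="C18-4431",
    injected_at="2025-03-14T09:12:00Z",
    peaks=[Peak("caffeine", 3.42, 118_204.0, 1.08)],
)
print(inj.peaks[0].retention_time_min)
```

Once every injection, whatever its origin, lands in one shape like this, the downstream analytics stop caring which vendor produced the raw file.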
Some labs treat columns as “throwaway technology,” Edge said, noting they might be discarded after only 300–400 injections when they could last for 500, 600, or even 1,000. Because degradation trends often stay hidden, “teams often repeat work needlessly by swapping columns or rerunning assays instead of fixing root causes — an expensive habit.” Promising alternatives are emerging. For example, one TetraScience pilot program reportedly trimmed column swaps after dashboards flagged which cartridges were truly worn out, enabling a shift from scheduled replacements to condition-based maintenance. In addition, Edge highlighted the potential for AI models to predict column end-of-life, a capability that could significantly stretch cartridge use and reduce consumable costs by ensuring columns are used to their full potential.
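A condition-based rule need not be exotic. As a hedged illustration (the metrics and thresholds here are assumptions, not TetraScience's actual logic), a lab might retire a cartridge only when its measured health degrades, rather than at a fixed injection count:

```python
def column_needs_replacing(tailing_factor: float,
                           plate_count: int,
                           baseline_plate_count: int) -> bool:
    """Flag a column on measured condition, not on a fixed injection count.

    Thresholds are illustrative: USP tailing above ~2.0, or a plate count
    that has fallen more than 25% from the column's as-new baseline.
    """
    efficiency_loss = 1 - plate_count / baseline_plate_count
    return tailing_factor > 2.0 or efficiency_loss > 0.25

# A column at injection 450 that still separates well stays in service
print(column_needs_replacing(tailing_factor=1.3,
                             plate_count=9200,
                             baseline_plate_count=10500))  # False
```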
When spreadsheets stall production
While working with a TetraScience customer, Edge learned of an entire fermenter sitting idle for almost a week after the central QC lab sent back suspect chromatograms. To prove the readings were wrong, personnel at the customer site had to “laboriously pull out data” from numerous injections in Excel, ultimately showing that the lab was, as he put it, “dishing out duff data.” The stoppage, eventually traced to a minor settings error, could have been spotted much sooner, he noted, if retention-time and other core metrics had been monitored automatically across sites.
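The automated monitoring Edge describes can start as a simple control chart. A minimal sketch, with the window size and three-sigma rule chosen for illustration rather than taken from the customer's actual procedure:

```python
from statistics import mean, stdev

def flag_rt_drift(retention_times: list[float], window: int = 20) -> bool:
    """Flag the newest injection if its retention time falls outside
    three standard deviations of the trailing window."""
    history, latest = retention_times[-window - 1:-1], retention_times[-1]
    mu, sigma = mean(history), stdev(history)
    return abs(latest - mu) > 3 * sigma

rts = [3.41, 3.42, 3.40, 3.43, 3.41, 3.42, 3.40, 3.41, 3.42, 3.41,
       3.43, 3.40, 3.42, 3.41, 3.42, 3.40, 3.41, 3.43, 3.42, 3.41,
       3.62]  # a settings error shifts the final run
print(flag_rt_drift(rts))  # True
```

Run across sites against a unified data layer, a check like this would have raised its hand on day one instead of day five.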

George Van Den Driessche, Ph.D.
Van Den Driessche elaborated on system usage and hardware costs. A full LC/HPLC stack, which comprises a pump, autosampler, column oven, and detector, can represent a significant investment, with costs that “can range from a couple hundred thousand up to a million dollars,” he said. “And when you buy a new instrument, you have to keep an old version of that instrument on deck as a backup.” Because column-performance data and instrument-utilization data often sit in separate CDS silos, managers sometimes resort to buying extra rigs as insurance. This can lead to situations where, as Van Den Driessche described, “you have a backup to the backup,” and at some sites, even multiple tiers of redundant equipment: “backups to backups to backups that are sitting there collecting dust.” Without the benefit of “advanced modern analytics” to track usage, he noted, labs cannot easily determine which systems are busy and which sit idle.
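Once injection logs live in one place, working out which systems are busy and which collect dust is essentially a counting exercise. A hedged sketch (the log format and instrument names are assumptions for illustration):

```python
from collections import Counter

# Hypothetical unified injection log: (instrument_id, date) pairs
injection_log = [
    ("HPLC-01", "2025-03-14"), ("HPLC-01", "2025-03-14"),
    ("HPLC-01", "2025-03-15"), ("HPLC-02", "2025-03-15"),
]
all_instruments = ["HPLC-01", "HPLC-02", "HPLC-03"]

runs = Counter(inst for inst, _ in injection_log)
for inst in all_instruments:
    status = "idle backup?" if runs[inst] == 0 else f"{runs[inst]} injections"
    print(inst, status)
# HPLC-03 never appears in the log: a candidate for retirement
```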
A check engine light for chromatography columns?
When asked about the possibility of a rough equivalent to a ‘check engine light’ for chromatography systems, Edge embraced the concept, envisioning AI-driven warnings based on harmonized data. Machine learning could automatically detect when a column was starting to fail. Such a system, he explained, would not only preserve data integrity by halting analysis on a compromised column but also optimize consumable use. Instead of treating columns as “throwaway technology” discarded after a set number of injections, labs could use them to their full potential while safeguarding accuracy.
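The warning light itself could begin as something far simpler than a deep model. One hedged sketch: fit a line to a column's declining plate count and extrapolate to the injection at which it crosses a failure threshold. The threshold and the linear-decay assumption are illustrative, not a validated end-of-life model:

```python
import numpy as np

def predict_remaining_injections(injection_numbers, plate_counts,
                                 failure_threshold=7500):
    """Extrapolate a linear fit of plate count vs. injection number
    to estimate how many injections remain before the threshold."""
    slope, intercept = np.polyfit(injection_numbers, plate_counts, 1)
    if slope >= 0:
        return None  # no measurable decay yet
    crossing = (failure_threshold - intercept) / slope
    return max(0, int(crossing - injection_numbers[-1]))

# Plate counts measured every 100 injections (made-up numbers)
print(predict_remaining_injections([100, 200, 300, 400],
                                   [10200, 9900, 9550, 9240]))  # ~538
```

A dashboard wired to a check like this would tell the analyst that the cartridge still has hundreds of injections left, rather than letting a calendar rule throw it away at 400.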
This type of data-driven optimization is central to Edge’s broader vision for reducing laboratory waste. Edge suggested that the path to avoiding such waste lies not primarily in acquiring new generations of hardware like detectors or pumps, but in fundamentally improving how data is handled, ensuring it can be effectively moved, aligned, and understood across systems. Once column‑health metrics and run counts stream into a cloud platform, a lab can spot under‑used stacks, extend column lifetimes, and retire redundant instruments rather than parking another six‑figure insurance policy on the bench.
How the data gets out and what it looks like afterward
TetraScience starts by installing lightweight software “agents” next to every chromatography data system on the network. These agents grab raw files, converting the traces, metadata, and audit logs into a neutral, standardized format. Once in the cloud, a second pass adds crucial context the instruments never knew, such as project codes from an ELN, batch IDs from LIMS, or even reagent lots from an ERP. The result is an analytics-ready table that spans sites and instruments, making chromatogram data highly searchable. Consequently, a retention-time filter that once meant trawling folder hierarchies now takes just a few keystrokes.
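In practice, that second contextualization pass amounts to a join, and the retention-time search becomes a one-line filter. A hedged pandas sketch, in which the column names and identifiers are illustrative rather than the platform's actual schema:

```python
import pandas as pd

# Harmonized injections produced by the agents (illustrative columns)
injections = pd.DataFrame({
    "injection_id": ["i1", "i2", "i3"],
    "batch_id": ["B-101", "B-101", "B-102"],
    "analyte": ["impurity_A"] * 3,
    "retention_time_min": [6.02, 6.05, 7.31],
})
# Context the instruments never knew, e.g. from a LIMS export
batches = pd.DataFrame({
    "batch_id": ["B-101", "B-102"],
    "project_code": ["PRJ-9", "PRJ-12"],
    "reagent_lot": ["L-553", "L-571"],
})

enriched = injections.merge(batches, on="batch_id")
# The query that once meant trawling folder hierarchies:
hits = enriched[enriched["retention_time_min"].between(6.0, 6.1)]
print(hits[["injection_id", "project_code", "reagent_lot"]])
```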
Regulators value comprehensive audit trails, and the platform is designed to store every transform alongside the raw binary. This allows a reviewer to click from a peak area in a dashboard back to the original instrument file, such as an Empower or OpenLab file, in seconds. Edge argues that this streamlined lineage can shorten inspections. He explained that when an auditor raises questions about specific data, with such a system: “you can very quickly and simply load that up and then get to see it.”
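The lineage the platform stores can be pictured as a chain of records, each derived value pointing back at the raw instrument file and the transform that produced it. Again a sketch under assumed field names, not TetraScience's actual record format:

```python
import hashlib
import json

def lineage_record(raw_file: str, transform: str, value: float) -> dict:
    """Bind a derived value to its raw source file and the transform
    applied, with a digest so tampering is detectable on review."""
    payload = {"raw_file": raw_file, "transform": transform, "value": value}
    digest = hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode()).hexdigest()
    return {**payload, "sha256": digest}

# Hypothetical path: a peak area traced back to its Empower raw file
rec = lineage_record("empower/PRJ9/run_0417.dat", "peak_integration_v2", 118204.0)
print(rec["sha256"][:12])
```

With records like this, the reviewer's click from dashboard to raw binary is just a walk down the chain.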
Unlocking broader potential
While regulatory checks get faster, the larger prize is what unified data can do for research and development. “We have masses and masses of data,” Edge observed. Chromatography already accounts for “well over half of the dollars spent on analytical science,” yet with most of those traces sitting in “different silos,” scientists are often left to “look at little snippets of information, rather than the whole big picture.” Edge argued that unifying those traces would “start to unravel some of the mysteries we’ve presented ourselves.”
Once the core data is transformed into what he termed an “AI-ready data set” that is structured, shareable, and ready for advanced analysis, new uses follow. “You can even start leveraging this data for some of the points, like Tony was describing, where you’re pulling out the peak area that might correlate to the binding potency of a particular drug,” Van Den Driessche said. This AI-ready data is also linked to the ELN, which holds the experimental design, ensuring the assay’s target and the molecule used are immediately clear. “You have additional metadata with that,” he said, “that you can use all of that to build out a predictive model that tells you molecules targeting this protein will have X or Y binding potency so that you can increase your R&D efficiency of picking molecules with historical data backing their activities.”
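Van Den Driessche's closing example reads as a straightforward supervised-learning setup: peak areas and ELN metadata as features, measured potency as the label. A hedged scikit-learn sketch with entirely made-up numbers; nothing here reflects a real assay or model:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Illustrative training data: peak area from the chromatogram plus a
# target-protein flag pulled from the ELN; labels are measured potencies.
peak_area = np.array([1.2e5, 2.5e5, 3.1e5, 4.0e5, 5.2e5])
target_is_kinase = np.array([1, 1, 0, 0, 1])       # metadata from the ELN
X = np.column_stack([peak_area, target_is_kinase])
y = np.array([0.31, 0.62, 0.55, 0.71, 1.05])       # binding potency (made up)

model = LinearRegression().fit(X, y)
# Predict potency for a new molecule before committing lab time to it
print(model.predict([[3.0e5, 1]]))
```

The point is less the model than the plumbing: the historical chromatograms and their experimental context have to sit in one queryable table before a line like `fit(X, y)` is even possible.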
Bottom line: Chromatography data has been whispering for decades; giving it a common language might finally let the lab hear it.