Laboratory Information Management Systems (LIMS) can become even more efficient with the introduction of one particular sub-system, the Scientific Data Management System (SDMS). Below, we’ll give an overview of the SDMS, but first, let’s review the history of LIMS.
History of LIMS
Historically, LIMS has been considered the ERP of quality systems. It was – or in some cases still is – the 800-lb gorilla sitting in the middle of an organization’s quality domain, collecting thousands of data points month over month and year over year to make, or assist in making, quality decisions. And that ability to assist in quality decision-making is what earned it the “M” for “management” in its name.
For a long time, a debate brewed over the justification for capturing the millions of data points collected in a LIMS. The data is indeed needed for quality operations, but the real question is, “Is it needed for making quality decisions on a day-to-day basis?”
If you ask quality managers, they may in turn ask you, “Where else would I go for this data?” This question has allowed a number of LIMS vendors to exploit the needs within the quality vertical of any organization by bundling solutions, no matter how trivial, within LIMS. Not to mention, this has made LIMS – a mandatory quality system with a niche ability to interact with countless other systems and to hold specific data – irreplaceable for the longest time.
Because of this niche status, even IT verticals have refrained from replacing or even touching LIMS; hence, the quality domain has always lagged behind the IT transformations occurring in other domains.
As we moved forward and the nicheness of LIMS within the quality domain was dissected by some IT and quality minds, chunks of its functionality were carved out into other systems, namely the Electronic Laboratory Notebook (ELN) and the Laboratory Execution System (LES). This was a significant leap forward in the thinking centered around LIMS. It helped both the quality and IT domains breach – if not shatter – the thick walls long held by a handful of LIMS vendors, forcing them to think in terms of breaking their systems down into sub-systems.
These sub-systems aren’t necessarily “small,” but rather more manageable and standalone, which gave more vendors (particularly smaller ones) a chance to play. Inevitably, there was resistance to change; these sub-systems – no matter how small or independent – were shunned either by quality verticals resistant to the perceived upheaval of replacing that 800-lb gorilla, or by big LIMS vendors who didn’t see the need for change. Since then, quality managers have continued to drive the shift toward capturing the data belonging to each quality process or subprocess in the system that makes the most sense.
Scientific Data Management System
In this long list of sub-systems that were born out of need, you can’t discount the role played by the Scientific Data Management System (SDMS), a system that has long held its position as the “archive and restoration” system.
In its early days, the Scientific Data Management System (SDMS) earned its name and reputation as a key system in the quality domain because systems like chromatography data systems, which generate millions of data points, tend to flood databases overnight. This created an imminent need to offload data, not only to make room for new data, but to keep search capability and the overall efficiency of data retrieval, storage, and updates at an acceptable level. Since the sensitivity of the data requires the quality domain to retain it for years before officially discarding it, an application solution was required. And SDMS has truly served this purpose, especially by providing compliant archiving and restoration of data.
The archiving and restoration functionality has grown in leaps and bounds in recent years, particularly the ability to search archived data at blazing speed rather than going through restoration and using the native application’s search capabilities. Think about what this has led to: a data reservoir that archives data sourced from an application yet still provides its users an independent way to search it. Extrapolate this capability further – data archived from heterogeneous sources, with the ability to search across all of it – and imagine what you can do. Together, the searching and archiving functionalities have expanded the capabilities of SDMS manifold.
Heterogeneous Data Reservoir
Since a Scientific Data Management System (SDMS) is a heterogeneous data reservoir, it’s important to understand the inventory of systems it can interact with or archive data from. In their current and future state, SDMS vendors claim their systems can interact with multiple systems, archive data on a pre-determined schedule, and search that data. Further, these search capabilities let you look up data from one system and fetch it from another, eliminating the need to cache or copy data from one system to another. An SDMS may thus be repurposed as an intermediary data layer that feeds data to other systems, when and if needed, instead of keeping data in multiple systems.
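The intermediary-data-layer idea above can be sketched in a few lines of code. This is a toy illustration, not any vendor’s API; the class names, source-system labels, and record shapes are all hypothetical.

```python
from dataclasses import dataclass, field


@dataclass
class ArchivedRecord:
    source_system: str   # hypothetical labels, e.g. "CDS" or "balance"
    sample_id: str
    payload: dict        # raw result data as archived from the source


@dataclass
class SdmsReservoir:
    """Toy model of an SDMS as a heterogeneous data reservoir: it archives
    records from multiple source systems and lets consumers search across
    all of them without touching the source applications."""
    records: list = field(default_factory=list)

    def archive(self, record: ArchivedRecord) -> None:
        self.records.append(record)

    def search(self, sample_id: str) -> list:
        # One query spans every archived source system, so a consumer
        # (e.g. a LIMS) need not keep its own copies of the data.
        return [r for r in self.records if r.sample_id == sample_id]


reservoir = SdmsReservoir()
reservoir.archive(ArchivedRecord("CDS", "S-001", {"peak_area": 1523.4}))
reservoir.archive(ArchivedRecord("balance", "S-001", {"weight_mg": 250.1}))
reservoir.archive(ArchivedRecord("CDS", "S-002", {"peak_area": 990.7}))

# A single lookup returns data that originated in two different systems.
hits = reservoir.search("S-001")
print([r.source_system for r in hits])
```

The point of the sketch is the `search` call: downstream systems query the reservoir on demand instead of duplicating instrument data among themselves.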
A typical practice for any QC vertical implementing LIMS involves integrations with both a chromatography data system and several non-chromatographic instruments, ranging from simple balances to complex spectrometers. This can be a major headache for the quality divisions implementing LIMS if a vendor doesn’t provide out-of-the-box (OOB) integration.
Instead, LIMS vendors have enjoyed implementing these integrations each time as services provided under their business model. Since the data from these instruments is an integral part of the data package for the QC decision-making machinery, it’s deemed a mandatory requirement for the pharma industry, which has no choice but to accept them as services. Additionally, any change to an instrument or its software that alters or affects the integration leaves the industry, rather than the vendor, responsible for maintaining it.
For SDMS to earn acceptance from the industry, vendors have been enticing the QC vertical by taking on the onus of supporting these integrations themselves, relieving clients of integration maintenance.
Metadata Processor
In the above role as instrument integrator, it’s imperative that the source system, typically a LIMS, sends the batch or worklist information, including sample information, standards used with quantities, etc. Since the instrument interface is driven from the SDMS side, this information may be stored within the SDMS. And once results are retrieved from the instrument or chromatography data system, they may be further analyzed in light of the metadata, and comparisons may be drawn across several runs.
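The metadata-processing role described above can be sketched as follows: worklist metadata arrives from the LIMS before the run, instrument results arrive after, and the two are joined to compare runs. All names and field layouts here are illustrative assumptions, not a real SDMS or LIMS interface.

```python
# Metadata pushed from the LIMS before the run (hypothetical shape).
worklist = {
    "run-1": {"sample": "S-001", "standard_conc_mg_ml": 0.5},
    "run-2": {"sample": "S-001", "standard_conc_mg_ml": 0.5},
}

# Results retrieved from the instrument / chromatography data system.
results = {
    "run-1": {"peak_area": 1500.0},
    "run-2": {"peak_area": 1530.0},
}


def compare_runs(worklist: dict, results: dict) -> float:
    """Join each result with its worklist metadata and report the
    percentage spread in peak area across runs of the same sample."""
    areas = [results[run]["peak_area"] for run in worklist]
    return 100.0 * (max(areas) - min(areas)) / min(areas)


print(round(compare_runs(worklist, results), 1))  # → 2.0
```

Because the SDMS holds both the worklist metadata and the returned results, this kind of cross-run comparison can happen without round-tripping through the LIMS.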
Historically, LIMS has been considered the system of record for most quality data, making it the gorilla in the middle of lab operations that does the heavy lifting and shifting of data required by all other systems. This makes operations hugely dependent on LIMS – and, not to mention, it also anchors your digital strategy to a LIMS that has likely been deployed for 5 or 10 years and may host gigabytes of data, making it hard to replace.
Going Forward with Scientific Data Management System
With the introduction of a system like the Scientific Data Management System (SDMS), LIMS can be made a lot leaner: keep it as the system of record for your final QC results used in key decision-making, and host your supporting data in systems like SDMS instead. Our team of LIMS consultants at Clarkston leverages their distinguished industry expertise to enable solutions tailored to meet the unique needs of your business and your long-term objectives.
Contributions by Vishal Sehgal