Generating realistic data through modeling and parametric probability for the numerical evaluation of data processing algorithms in two-dimensional chromatography

Bibliographic Details
Published in: Analytica chimica acta, Vol. 1312, p. 342724
Main Authors: Milani, Nino B.L., García-Cicourel, Alan Rodrigo, Blomberg, Jan, Edam, Rob, Samanipour, Saer, Bos, Tijmen S., Pirok, Bob W.J.
Format: Journal Article
Language: English
Published: Netherlands: Elsevier B.V., 11.07.2024
ISSN: 0003-2670, 1873-4324
DOI: 10.1016/j.aca.2024.342724

More Information
Summary: Comprehensive two-dimensional chromatography generates complex data sets, and numerous baseline correction and noise removal algorithms have been proposed in the past decade to address this challenge. However, evaluating their performance objectively is currently not possible due to a lack of objective benchmark data. To tackle this issue, we introduce a versatile platform that models and reconstructs single-trace two-dimensional chromatography data while preserving peak parameters. This approach balances real experimental data with synthetic data for precise comparisons. We achieve this by employing a Skewed Lorentz-Normal model to represent each peak and by creating probability distributions from which the relevant parameters are sampled. The model's performance is showcased through its application to two-dimensional gas chromatography data, where it created a data set of 458 peaks with an RMSE of 0.0048 or lower and minimal residuals compared to the original data. The same process is also demonstrated on liquid chromatography data.

Data analysis is an integral component of any analytical method, and the development of new data processing strategies is of paramount importance to tackle the complex signals generated by state-of-the-art separation technology. Through the use of probability distributions, quantitative assessment of the performance of new algorithms is now possible, creating new opportunities for faster, more accurate, and simpler data analysis development.

•Benchmark data is needed for the objective evaluation of data-processing algorithms.
•A Skewed Lorentz-Normal distribution is applied to describe chromatographic peaks.
•A tool was developed to generate highly realistic chromatographic data.
•The simulation of realistic data is demonstrated on LC × LC and GC × GC signals.
•This tool may further facilitate the development of data analysis workflows.
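To illustrate the idea described in the summary — representing each chromatographic peak with a skewed, partly Lorentzian peak shape and drawing peak parameters from probability distributions to synthesize a realistic signal — here is a minimal, self-contained Python sketch. The blend of a skew-normal and a Lorentzian density, the parameter ranges, and the function names are all illustrative assumptions; the paper's exact Skewed Lorentz-Normal formulation and fitted distributions may differ.

```python
import math
import random

def skew_normal_pdf(x, loc, scale, alpha):
    # Standard skew-normal density: (2/scale) * phi(z) * Phi(alpha * z)
    z = (x - loc) / scale
    phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    Phi = 0.5 * (1.0 + math.erf(alpha * z / math.sqrt(2.0)))
    return 2.0 / scale * phi * Phi

def lorentzian_pdf(x, loc, gamma):
    # Cauchy/Lorentzian density with half-width gamma
    return gamma / (math.pi * ((x - loc) ** 2 + gamma ** 2))

def peak(x, loc, scale, alpha, gamma, w, height):
    # Hypothetical blend: weighted sum of a skew-normal and a Lorentzian,
    # scaled by a peak height. This is an assumption for illustration,
    # not the paper's exact Skewed Lorentz-Normal model.
    return height * (w * skew_normal_pdf(x, loc, scale, alpha)
                     + (1.0 - w) * lorentzian_pdf(x, loc, gamma))

def sample_chromatogram(n_peaks, n_points=500, t_max=60.0, seed=0):
    # Draw each peak's parameters from simple probability distributions
    # (uniform ranges here; a real implementation would fit these
    # distributions to experimental data).
    rng = random.Random(seed)
    params = [dict(loc=rng.uniform(5.0, t_max - 5.0),
                   scale=rng.uniform(0.05, 0.3),
                   alpha=rng.uniform(0.0, 3.0),   # positive skew = tailing
                   gamma=rng.uniform(0.02, 0.1),
                   w=rng.uniform(0.5, 0.9),
                   height=rng.uniform(0.1, 1.0))
              for _ in range(n_peaks)]
    t = [i * t_max / (n_points - 1) for i in range(n_points)]
    signal = [sum(peak(ti, **p) for p in params) for ti in t]
    return t, signal
```

The synthesized trace retains known ground-truth peak parameters, which is what makes the quantitative benchmarking of baseline-correction and noise-removal algorithms possible: an algorithm's output can be compared against the known clean signal rather than against another algorithm.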