Exploring Transformer Model in Longitudinal Pharmacokinetic/Pharmacodynamic Analyses and Comparing with Alternative Natural Language Processing Models

Bibliographic Details
Published in: Journal of Pharmaceutical Sciences, Vol. 113, No. 5, pp. 1368-1375
Main Authors: Cheng, Yiming; Hu, Hongxiang; Dong, Xin; Hao, Xiaoran; Li, Yan
Format: Journal Article
Language: English
Published: United States: Elsevier Inc., 01 May 2024
ISSN: 0022-3549
eISSN: 1520-6017
DOI: 10.1016/j.xphs.2024.02.008

Summary:

Highlights:
•The work demonstrated the promising capability of the transformer model to capture longitudinal PK/PD relationships.
•The work provides a systematic exploration and comparison of the currently published NLP models in this field.
•Findings from these cutting-edge machine learning approaches are anticipated to streamline future drug development.

Abstract: Despite recent advances in machine learning within quantitative pharmacology, there remains a substantial need for a comprehensive assessment of natural language processing (NLP) algorithms in longitudinal pharmacokinetic/pharmacodynamic (PK/PD) modeling. We investigated the application of the transformer model and compared its performance with that of several other NLP models, including long short-term memory (LSTM) and neural-ODE (ordinary differential equation), in analyzing longitudinal PK/PD data using virtual data spanning three dosing regimens. Results suggested that LSTM and neural-ODE, along with their respective variants, perform strongly when predicting regimens included in training (seen), albeit with slight information loss for regimens excluded from training (unseen). Like neural-ODE, the transformer exhibited superior performance in describing time-series PK/PD data. Nonetheless, when extrapolating to unseen regimens, it captured the general data trends but had difficulty precisely reproducing data fluctuations. Remarkably, integrating even a small amount of unseen data into the training dataset substantially improved predictive performance for both seen and unseen regimens. Our study marks a pioneering effort in deploying the transformer model for time-series PK/PD analysis and provides a systematic exploration of the currently available NLP models in this field.
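To illustrate the general idea of applying a transformer to longitudinal PK/PD data, the minimal PyTorch sketch below treats each sampling timepoint as a token whose features are the elapsed time and the administered dose, and regresses the PK concentration and PD response at every timepoint. This is not the authors' code: the PKPDTransformer class, the layer sizes, and the two-feature input are illustrative assumptions, and explicit positional encoding is omitted here on the assumption that elapsed time is already supplied as an input feature.

# A minimal sketch (illustrative assumptions, not the published model).
import torch
import torch.nn as nn

class PKPDTransformer(nn.Module):
    def __init__(self, n_features=2, d_model=64, nhead=4, num_layers=2):
        super().__init__()
        # Embed each timepoint's (elapsed time, dose) pair into d_model dims.
        self.input_proj = nn.Linear(n_features, d_model)
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=nhead, dim_feedforward=128,
            batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        # Predict (PK concentration, PD response) at every timepoint.
        self.head = nn.Linear(d_model, 2)

    def forward(self, x):
        # x: (batch, timepoints, n_features)
        return self.head(self.encoder(self.input_proj(x)))

# Toy usage: 8 virtual subjects, 24 timepoints, features = (time, dose).
model = PKPDTransformer()
x = torch.rand(8, 24, 2)
pred = model(x)                                   # (8, 24, 2) trajectories
loss = nn.MSELoss()(pred, torch.rand(8, 24, 2))   # fit against observed data
loss.backward()

Under this framing, extrapolation to an unseen regimen amounts to feeding the model a dosing schedule absent from training, which is consistent with the abstract's observation that trend shapes transfer more readily than fine-grained fluctuations.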