Non-Bayesian knowledge propagation using model-based analysis of data from multiple clinical studies

Bibliographic Details
Published in: Journal of Pharmacokinetics and Pharmacodynamics, Vol. 35, No. 1, pp. 117-137
Main Authors: Ribbing, Jakob; Hooker, Andrew C.; Jonsson, E. Niclas
Format: Journal Article
Language: English
Published: Boston: Springer US, 01.02.2008 (Springer Nature B.V.)
ISSN: 1567-567X, 1573-8744
DOI: 10.1007/s10928-007-9079-8

Summary: The ultimate goal in drug development is to establish how a drug can be administered to patients safely and efficaciously. To achieve this efficiently, the information contained in each clinical study should contribute to the growing pool of accumulated knowledge. The aim of this simulation study is to investigate different knowledge-propagation strategies when the data are analysed using a model-based approach in NONMEM. Pharmacokinetic studies were simulated according to several scenarios for the underlying model and study design, including a population-optimal design based on the analysis of a previous study. Five approaches with different degrees of knowledge propagation were investigated: analysing the studies pooled into one dataset; merging the results from analysing the studies separately; fitting a pre-specified model, selected from a previous study, to either the most recent study or the pooled dataset; or naïvely analysing the most recent study without regard to any previous study. The approaches were evaluated on which model was selected (qualitative knowledge, investigated by stepwise covariate selection within NONMEM), on parameter precision (quantitative knowledge), and on the predictive performance of the model. Pooling all studies into one dataset is the best approach for identifying the correct model and obtaining good predictive performance, and merging the results of separate analyses may perform almost as well. Fitting a pre-specified model to new data is fast, free of selection bias, and sanctioned for model-based confirmatory analyses. However, fitting the same pre-specified model to all available data is still fast and can be expected to yield better predictive performance than the unbiased alternative. ED-optimal design of sample times, combined with stratification of subjects from different subgroups, is a successful strategy that allows sparse sampling and handles prior parameter uncertainty.
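The contrast between pooling all studies into one dataset and merging the results of separate analyses can be illustrated with a minimal toy sketch. This is not the article's NONMEM workflow (no mixed-effects modelling, covariate selection, or ED-optimal design); it assumes a made-up one-compartment IV bolus model, naive-pooled least squares, and arbitrary parameter values and sampling schedules, purely to show the two analysis strategies side by side.

```python
# Toy sketch (assumed example, not the authors' NONMEM analysis):
# compare a pooled fit of two simulated PK studies with merging the
# results of separate per-study fits.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(42)
DOSE = 100.0                      # mg, single IV bolus (assumed)
CL_TRUE, V_TRUE = 5.0, 50.0       # assumed true clearance (L/h) and volume (L)

def conc(t, cl, v):
    """One-compartment IV bolus: C(t) = (Dose/V) * exp(-(CL/V) * t)."""
    return (DOSE / v) * np.exp(-(cl / v) * t)

def simulate_study(n_subjects, times, cv=0.2):
    """Simulate lognormally noisy concentration-time data for one study."""
    t = np.tile(times, n_subjects)
    c = conc(t, CL_TRUE, V_TRUE) * np.exp(rng.normal(0.0, cv, t.size))
    return t, c

def fit(t, c):
    """Naive-pooled least-squares estimates of (CL, V)."""
    popt, _ = curve_fit(conc, t, c, p0=[1.0, 10.0])
    return popt

# Two studies with different (sparse vs. richer) sampling designs.
t1, c1 = simulate_study(20, np.array([0.5, 2.0, 8.0]))
t2, c2 = simulate_study(40, np.array([0.25, 1.0, 4.0, 12.0, 24.0]))

# Strategy 1: pool all data into one dataset and fit once.
pooled = fit(np.concatenate([t1, t2]), np.concatenate([c1, c2]))

# Strategy 2: analyse the studies separately and merge the results
# (here a simple average; a precision-weighted merge is also possible).
merged = np.mean([fit(t1, c1), fit(t2, c2)], axis=0)

print(f"true    CL={CL_TRUE:.2f}  V={V_TRUE:.2f}")
print(f"pooled  CL={pooled[0]:.2f}  V={pooled[1]:.2f}")
print(f"merged  CL={merged[0]:.2f}  V={merged[1]:.2f}")
```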