Computing normalised prediction distribution errors to evaluate nonlinear mixed-effect models: The npde add-on package for R

HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
For external model evaluation, prediction distribution errors are recommended when the aim is to use the model to simulate data. Metrics based on hyperparameters should be preferred when the aim is to compare two populations, and metrics based on the objective function are useful during the model-building process.
Model evaluation is an important issue in population analyses. We aimed to perform a systematic review of all population pharmacokinetic and/or pharmacodynamic analyses published between 2002 and 2004 to survey the current methods used to evaluate models and to assess whether those models were adequately evaluated. We selected 324 articles in MEDLINE using defined key words and built a data abstraction form composed of a checklist of items to extract the relevant information from these articles with respect to model evaluation. In the data abstraction form, evaluation methods were divided into three subsections: basic internal methods (goodness-of-fit [GOF] plots, uncertainty in parameter estimates and model sensitivity), advanced internal methods (data splitting, resampling techniques and Monte Carlo simulations) and external model evaluation. Basic internal evaluation was the most frequently described method in the reports: 65% of the models involved GOF evaluation. Standard errors or confidence intervals were reported for 50% of fixed effects but only for 22% of random effects. Advanced internal methods were used in approximately 25% of models: data splitting was more often used than bootstrap and cross-validation; simulations were used in 6% of models to evaluate models by a visual predictive check or by a posterior predictive check. External evaluation was performed in only 7% of models. Using the subjective synthesis of model evaluation for each article, we judged the models to be adequately evaluated in 28% of pharmacokinetic models and 26% of pharmacodynamic models. Basic internal evaluation was preferred to more advanced methods, probably because the former is performed easily with most software. We also noticed that when the aim of modelling was predictive, advanced internal methods or more stringent methods were more often used.
To evaluate by simulation the statistical properties of normalized prediction distribution errors (NPDE), prediction discrepancies (pd), standardized prediction errors (SPE), the numerical predictive check (NPC) and the decorrelated NPC (NPC(dec)) for the external evaluation of a population pharmacokinetic analysis, and to illustrate the use of NPDE for the evaluation of covariate models. We assume that a model M(B) has been built using a building dataset B, and that a separate validation dataset, V, is available. Our null hypothesis H(0) is that the data in V can be described by M(B). We use several methods to test this hypothesis: NPDE, pd, SPE, NPC and NPC(dec). First, we evaluated by simulation the type I error under H(0) of different tests applied to the four methods. We also propose and evaluate a single global test combining normality, mean and variance tests applied to NPDE, pd and SPE. We perform tests on NPC and NPC(dec) after decorrelation. M(B) was a one-compartment model with first-order absorption (without covariate), previously developed from two phase II studies and one phase III study of the antidiabetic drug gliclazide. We simulated 500 external datasets according to the design of a phase III study. Second, we investigated the application of NPDE to covariate models. We propose two approaches: the first uses correlation tests or mean comparisons to test the relationship between NPDE and covariates; the second evaluates NPDE split by category for discrete covariates or by quantiles for continuous covariates. We generated several validation datasets under H(0) and under alternative assumptions, with a model without covariate, with one continuous covariate (weight), or with one categorical covariate (sex). We calculated the power of the different tests by simulation, using the covariates of the phase III study. The simulations under H(0) show a high type I error for the different tests applied to SPE and an increased type I error for pd.
The type I error is close to 5% for the global test applied to NPDE. We find a type I error higher than 5% for the test applied to the classical NPC, but this error becomes close to 5% for NPC(dec). For covariate models, when the model and the validation dataset are consistent, the type I errors of the tests are close to 5% for both types of covariate effect. When validation datasets and models are not consistent, the tests detect the correlation between NPDE and the covariate. We recommend using NPDE rather than SPE for external model evaluation, since NPDE do not depend on an approximation of the model and have good statistical properties. NPDE also represent a better approach than NPC, since a decorrelation step must first be applied before tests can be performed on NPC. In this illustration, NPDE are also a good tool for evaluating models with or without covariates.
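The mechanics described above can be sketched in Python. The function names (`npde_for_subject`, `global_test`, `covariate_test`), the number of simulations, and the clipping of extreme ranks are illustrative assumptions, not the API of the npde package: simulations under M(B) are decorrelated per subject via a Cholesky factor of their empirical covariance, the decorrelated prediction discrepancies are the ranks of the observations within the simulations, and the inverse normal transform yields NPDE, on which the combined mean/variance/normality test is applied with a Bonferroni correction.

```python
import numpy as np
from scipy import stats

def npde_for_subject(y_obs, y_sim):
    """Compute NPDE for one subject.
    y_obs: (n,) observed values; y_sim: (K, n) simulations under M(B)."""
    K, n = y_sim.shape
    mu = y_sim.mean(axis=0)
    cov = np.cov(y_sim, rowvar=False) + 1e-10 * np.eye(n)  # jitter for stability
    L_inv = np.linalg.inv(np.linalg.cholesky(cov))
    y_obs_dec = L_inv @ (y_obs - mu)          # decorrelated observation
    y_sim_dec = (L_inv @ (y_sim - mu).T).T    # decorrelated simulations
    # decorrelated prediction discrepancies: rank of the observation
    # within the simulated distribution, then inverse-normal transform
    pde = (y_sim_dec < y_obs_dec).mean(axis=0)
    pde = np.clip(pde, 1 / (2 * K), 1 - 1 / (2 * K))  # avoid +/- infinity
    return stats.norm.ppf(pde)

def global_test(npde):
    """Global test of N(0, 1): t-test (mean 0), chi-square test (variance 1)
    and Shapiro-Wilk (normality), combined with a Bonferroni correction."""
    n = len(npde)
    p_mean = stats.ttest_1samp(npde, 0.0).pvalue
    chi2 = (n - 1) * np.var(npde, ddof=1)
    p_var = 2 * min(stats.chi2.cdf(chi2, n - 1), stats.chi2.sf(chi2, n - 1))
    p_norm = stats.shapiro(npde).pvalue
    return min(1.0, 3 * min(p_mean, p_var, p_norm))

def covariate_test(npde_by_subject, covariate):
    """Illustrative first-approach covariate check: correlation between
    per-subject mean NPDE and a continuous covariate (e.g. weight)."""
    means = np.array([x.mean() for x in npde_by_subject])
    r, p = stats.pearsonr(means, covariate)
    return p
```

Under H(0) the pooled NPDE follow a standard normal distribution, so the global p-value rejects at close to the nominal 5% rate; a significant correlation between NPDE and a covariate flags a covariate effect missing from (or misspecified in) the model.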
What is already known about this subject
• The reviews already published on population pharmacokinetic/pharmacodynamic (PK/PD) analyses have focused on theory and presented some clinical applications, evaluated validation practices in limited circumstances, described the interest and sometimes the complexity of this approach in drug development, or proposed a list of relevant articles.
• None of them has exhaustively evaluated published analyses, and more precisely the model-building steps.
• In view of the statistical complexity of population PK/PD methodology, more attention is required to how models are built and how they are reported in the literature.

What this study adds
• With a strict methodology and a standardized tool, this survey provides an exhaustive, objective and up-to-date review of model-building practices.
• It reveals deficiencies in information reporting in most articles and a genuine need for guidance in publishing.
• An initial, minimal list of items is suggested, which can be used by authors and reviewers in pharmacology journals.
• The value of published peer-reviewed papers could be greatly improved if authors addressed the suggested list of items systematically.

Methods
We selected 324 articles in PubMed using defined keywords. A data abstraction form (DAF) was then built comprising two parts: general characteristics, including article identification, context of the analysis and description of the clinical studies from which the data arose; and model building, including a description of the modelling process. The papers were examined by two readers, who extracted the relevant information and entered it directly into a MySQL database, from which descriptive statistical analysis was performed.

Results
Most published papers concerned patients with severe pathology and therapeutic classes with a narrow therapeutic index and/or high PK/PD variability.
Most of the time, modelling was performed for descriptive purposes, with rich rather than sparse data, and using the NONMEM software. PK and PD models were rarely complex (one or two compartments for PK; Emax for PD models). Covariate testing was frequently performed and was essentially based on the likelihood ratio test. Based on a minimal list of items that should systematically be found in a population PK/PD analysis, only 39% and 8.5% of the PK and PD analyses, respectively, published from 2002 to 2004 provided sufficient detail to support the model-building methodology.

Conclusions
This survey allowed an efficient description of recently published population analyses, but also revealed deficiencies in the reporting of information on model building.
Background: Since 2007, pharmaceutical companies have been required to submit a Paediatric Investigation Plan to the Paediatric Committee of the European Medicines Agency for any drug under development in adults, and this often leads to the need to conduct a pharmacokinetic study in children. Pharmacokinetic studies in children raise ethical and methodological issues. Because of limitations on sampling times, appropriate methods, such as the population approach, are necessary for analysis of the pharmacokinetic data. The choice of the pharmacokinetic sampling design has an important impact on the precision of population parameter estimates. Approaches for design evaluation and optimization based on the evaluation of the Fisher information matrix (M_F) have been proposed and are now implemented in several software packages, such as PFIM in R.

Objectives: The objectives of this work were to (i) develop a joint population pharmacokinetic model to describe the pharmacokinetic characteristics of a drug S and its active metabolite in children after intravenous drug administration, from simulated plasma concentration-time data produced using physiologically based pharmacokinetic (PBPK) predictions; (ii) optimize the pharmacokinetic sampling times for an upcoming clinical study using a multi-response design approach, considering clinical constraints; and (iii) evaluate the resulting design taking data below the lower limit of quantification (BLQ) into account.

Methods: Plasma concentration-time profiles were simulated in children using a PBPK model previously developed with the software SIMCYP® for the parent drug and its active metabolite. Data were analysed using non-linear mixed-effect models with the software NONMEM®, using a joint model for the parent drug and its metabolite.
The population pharmacokinetic design for the future study in 82 children from 2 to 18 years old, each receiving a single dose of the drug, was then optimized using PFIM, assuming identical times for parent and metabolite concentration measurements and considering clinical constraints. Design evaluation was based on the relative standard errors (RSEs) of the parameters of interest. In the final evaluation of the proposed design, an approach was used to assess the possible effect of BLQ concentrations on design efficiency. This approach consists of rescaling M_F using, at each sampling time, the probability of observing a BLQ concentration, computed from Monte-Carlo simulations.

Results: A joint pharmacokinetic model with three compartments for the parent drug and one for its active metabolite, with random effects on four parameters, was used to fit the simulated PBPK concentration-time data. A combined error model best described the residual variability. Parameters and dose were expressed per kilogram of bodyweight. Reaching a compromise between PFIM results and clinical constraints, the optimal design was composed of four samples at 0.1, 1.8, 5 and 10 h after drug injection. This design predicted RSEs lower than 30% for the fo...
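The BLQ rescaling step can be sketched as follows. This minimal Python example assumes a toy one-compartment IV bolus model in place of the PBPK simulations, an illustrative LOQ, and placeholder elementary Fisher matrices standing in for the per-time contributions PFIM would compute by linearisation; the idea is simply to down-weight each sampling time's contribution to M_F by the Monte-Carlo probability that its observation is BLQ.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_conc(times, rng):
    """Toy one-compartment IV bolus model with lognormal between-subject
    variability (a stand-in for the PBPK simulations of the study)."""
    cl = 2.0 * np.exp(0.3 * rng.standard_normal())   # clearance (illustrative)
    v = 10.0 * np.exp(0.3 * rng.standard_normal())   # volume (illustrative)
    return (100.0 / v) * np.exp(-(cl / v) * times)   # dose = 100 (illustrative)

def prob_blq(simulate, times, loq, n_mc=10_000):
    """Monte-Carlo probability of a concentration below the LOQ at each time."""
    sims = np.array([simulate(times, rng) for _ in range(n_mc)])
    return (sims < loq).mean(axis=0)

def rescale_fisher(m_f_per_time, p_blq):
    """Down-weight each sampling time's elementary Fisher matrix by the
    probability that the observation is actually quantifiable (1 - P(BLQ))."""
    return sum((1 - p) * m for p, m in zip(p_blq, m_f_per_time))

times = np.array([0.1, 1.8, 5.0, 10.0])  # optimal design from the abstract
p_blq = prob_blq(simulate_conc, times, loq=0.5, n_mc=2000)
# placeholder elementary matrices; PFIM computes the real ones by linearisation
m_parts = [np.eye(2) for _ in times]
m_f = rescale_fisher(m_parts, p_blq)
```

When late sampling times carry a high P(BLQ), their contribution to M_F shrinks, and the predicted RSEs (obtained from the diagonal of the inverse of M_F) increase accordingly, which is exactly why the design was re-evaluated with BLQ data taken into account.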
Our results do not support an association between MDR1 genetic polymorphisms and modelled IDV clearance or clinical response to HAART.