2009
DOI: 10.1007/s10928-009-9143-7

Evaluation of different tests based on observations for external model evaluation of population analyses

Abstract: To evaluate by simulation the statistical properties of normalized prediction distribution errors (NPDE), prediction discrepancies (pd), standardized prediction errors (SPE), numerical predictive check (NPC) and decorrelated NPC (NPC(dec)) for the external evaluation of a population pharmacokinetic analysis, and to illustrate the use of NPDE for the evaluation of covariate models. We assume that a model M(B) has been built using a building dataset B, and that a separate validation dataset, V, is available. Our …

Cited by 77 publications (74 citation statements)
References 11 publications
“…A total of 2,000 bootstrap data sets were generated from the original data set by repeated sampling with replacement, and the final pharmacokinetic model was used to estimate the model parameters for each data set. In addition, the final pharmacokinetic model was assessed using an internal evaluation procedure by computing the normalized prediction distribution errors (NPDE) of 5,000 simulated data sets compared to those of the observed data set (16,17).…”
Section: Methods
confidence: 99%
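The bootstrap step quoted above (resampling the original dataset with replacement to re-estimate model parameters) can be sketched as follows. This is a minimal illustration, not the cited study's code; the subject IDs and replicate counts are assumptions chosen for the example.

```python
import random

def bootstrap_datasets(subjects, n_boot=2000, seed=12345):
    """Generate bootstrap datasets by resampling subjects with
    replacement; each dataset keeps the original number of subjects.
    In a real workflow, the model would then be refit to each dataset."""
    rng = random.Random(seed)
    return [
        [rng.choice(subjects) for _ in subjects]
        for _ in range(n_boot)
    ]

# Hypothetical example: 10 subject IDs, 5 bootstrap replicates.
boots = bootstrap_datasets(list(range(10)), n_boot=5)
```

Resampling whole subjects, rather than individual observations, preserves the within-subject correlation structure that population analyses rely on.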
“…For WRES and CWRES, residuals may not be homoscedastic and centred on zero as a consequence of the approximation of the model. This approximation can be very crude for WRES, which makes WRES a very poor diagnostic tool, as has also been reported by previous studies (9,11,14). By construction, PWRES are expected to be homoscedastic and centred on zero; however, the non-normality of the data and their dependency within subjects might create artificial patterns (e.g.…”
Section: Discussion
confidence: 86%
“…The problem is that no classical test can be applied due to the data dependency within subjects, e.g. a major increase in type I errors (around 13%) was reported for the exact binomial test applied to VPCs (14).…”
Section: Introduction
confidence: 99%
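The type I error inflation noted above can be reproduced with a small simulation: each subject contributes several perfectly correlated binary observations, but an exact binomial test wrongly treats them all as independent. The subject counts, replicate counts, and correlation structure here are illustrative assumptions, not the design of the cited study.

```python
import math
import random

def binom_test_two_sided(x, n, p=0.5):
    """Exact two-sided binomial test: sum the probabilities of all
    outcomes no more likely than the observed count x."""
    probs = [math.comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]
    obs = probs[x]
    return sum(q for q in probs if q <= obs + 1e-12)

def type1_rate(n_subj=20, n_rep=5, n_trials=500, alpha=0.05, seed=1):
    """Estimate the rejection rate under the null when each subject's
    n_rep binary observations are identical copies of one coin flip,
    yet the test assumes n_subj * n_rep independent observations."""
    rng = random.Random(seed)
    rejections = 0
    for _ in range(n_trials):
        # one coin flip per subject, duplicated n_rep times
        x = sum(n_rep * (rng.random() < 0.5) for _ in range(n_subj))
        if binom_test_two_sided(x, n_subj * n_rep) < alpha:
            rejections += 1
    return rejections / n_trials

rate = type1_rate()  # well above the nominal 5% level
```

Because the effective sample size is the number of subjects, not the number of observations, the nominal 5% test rejects far too often, which is the same mechanism behind the inflation reported for the exact binomial test applied to VPCs.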
“…For the internal model evaluation before the addition of the genetic effect, we performed visual predictive check plots where the 90% confidence intervals around the 5th, 50th, and 95th prediction percentiles from 250 simulated datasets were overlaid to the 5th, 50th, and 95th percentiles of the observed data binned using the theoretical sampling times (27). Then after the addition of the genetic effect, we computed the normalized prediction distribution errors (npde), i.e., the observation percentiles within the empirical distribution obtained from the model simulations, decorrelated and normalized using the inverse function of the normal cumulative density function (28).…”
Section: Model Evaluation
confidence: 99%
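The npde computation described in the excerpt above — each observation's percentile within the empirical distribution of model simulations, mapped through the inverse of the normal cumulative density function — can be sketched as below. This simplified version omits the within-subject decorrelation step the excerpt mentions, and the data are hypothetical.

```python
from statistics import NormalDist

def npde_simplified(obs, sims):
    """For each observation, compute its prediction discrepancy (the
    fraction of the K simulated values at that position lying below it),
    then map it through the inverse normal CDF. The full npde method
    additionally decorrelates observations within each subject, which
    is omitted in this sketch."""
    norm = NormalDist()
    K = len(sims)
    npde = []
    for j, y in enumerate(obs):
        pd = sum(sim[j] < y for sim in sims) / K
        # clip to avoid +/- infinity when pd is exactly 0 or 1
        pd = min(max(pd, 1.0 / (2 * K)), 1.0 - 1.0 / (2 * K))
        npde.append(norm.inv_cdf(pd))
    return npde

# Hypothetical example: one observation at the median of 100 simulations.
sims = [[float(k)] for k in range(100)]
values = npde_simplified([50.0], sims)  # ~0.0: the observation is typical
```

Under a correct model, the resulting npde are expected to follow a standard normal distribution, which is what makes them usable as a formal evaluation statistic.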