2010
DOI: 10.1097/ede.0b013e3181c30fb2
Assessing the Performance of Prediction Models

Abstract: The performance of prediction models can be assessed using a variety of different methods and metrics. Traditional measures for binary and survival outcomes include the Brier score to indicate overall model performance, the concordance (or c) statistic for discriminative ability (or area under the receiver operating characteristic (ROC) curve), and goodness-of-fit statistics for calibration. Several new measures have recently been proposed that can be seen as refinements of discrimination measures, including v…
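The traditional measures named in the abstract can be sketched briefly. The following is an illustrative implementation, not code from the paper; the data are made up for demonstration.

```python
# Illustrative sketch (not from the paper): the Brier score as overall
# performance and the concordance (c) statistic for discrimination,
# computed on made-up binary-outcome data.

def brier_score(y_true, p_pred):
    """Mean squared difference between predicted probability and outcome."""
    return sum((p - y) ** 2 for y, p in zip(y_true, p_pred)) / len(y_true)

def c_statistic(y_true, p_pred):
    """Proportion of event/non-event pairs in which the event case received
    the higher predicted risk; ties count as half. Equals the area under
    the ROC curve for binary outcomes."""
    events = [p for y, p in zip(y_true, p_pred) if y == 1]
    nonevents = [p for y, p in zip(y_true, p_pred) if y == 0]
    concordant = sum((pe > pn) + 0.5 * (pe == pn)
                     for pe in events for pn in nonevents)
    return concordant / (len(events) * len(nonevents))

y = [0, 0, 1, 1, 0, 1]                 # made-up outcomes
p = [0.1, 0.4, 0.35, 0.8, 0.2, 0.9]   # made-up predicted risks
print(brier_score(y, p), c_statistic(y, p))
```

A Brier score of 0 and a c statistic of 1 would indicate a perfect model; a useless model scores c = 0.5.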

Cited by 3,549 publications (1,879 citation statements)
References 64 publications (82 reference statements)
“…We assessed the predictive performance of each model by means of discrimination and calibration for the outcomes sPTB <37 and <34 weeks of gestation, as described in the framework reported by Steyerberg et al.16 Discrimination indicates the ability of the model to distinguish between women who will have a sPTB and those who will not.…”
Section: Methods
confidence: 99%
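The calibration half of the framework quoted above asks whether predicted risks agree with observed event rates. A minimal sketch, using invented data and simple checks (calibration-in-the-large and a grouped observed-vs-expected comparison):

```python
# Hedged sketch: two simple calibration checks on made-up data --
# calibration-in-the-large (observed event rate minus mean predicted risk)
# and observed vs. mean predicted risk within risk-sorted groups, the idea
# behind Hosmer-Lemeshow-style goodness-of-fit checks.

def calibration_in_the_large(y_true, p_pred):
    """Observed event rate minus mean predicted risk; ~0 when calibrated."""
    n = len(y_true)
    return sum(y_true) / n - sum(p_pred) / n

def grouped_calibration(y_true, p_pred, n_groups=3):
    """(mean predicted, observed rate) per risk-sorted group."""
    pairs = sorted(zip(p_pred, y_true))
    size = len(pairs) // n_groups
    rows = []
    for g in range(n_groups):
        chunk = pairs[g * size:(g + 1) * size] if g < n_groups - 1 else pairs[g * size:]
        mean_p = sum(p for p, _ in chunk) / len(chunk)
        obs = sum(y for _, y in chunk) / len(chunk)
        rows.append((round(mean_p, 3), round(obs, 3)))
    return rows

y = [0, 1, 0, 1]            # made-up outcomes
p = [0.1, 0.9, 0.2, 0.8]    # made-up predicted risks
print(calibration_in_the_large(y, p))
print(grouped_calibration(y, p, n_groups=2))
```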
“…Discriminative performance (AUC) of the models is not affected, as this recalibration method does not change the ranking of the predicted probabilities.24 A discriminative performance below 0.70 is generally considered moderate.16…”
Section: Methods
confidence: 99%
“…AUROC is mathematically equivalent to the c statistic, which denotes the proportion of all possible pairs of patients drawn from the population, one a survivor and one a non-survivor, where the patient who survived had the higher Ps. AUROC with 95% confidence intervals (95% CI) for all models were compared, and non-overlapping 95% CIs were considered a significant difference in discrimination ability. We calculated the discrimination slope with 95% CI for each model, i.e., the absolute difference between the mean predicted probability of survival (Ps) for survivors and for non-survivors.24 We also calculated the median Ps with 95% CI for survivors and for non-survivors, as the distributions of Ps values were highly skewed.…”
Section: Methods
confidence: 99%
“…Since the issue of calibration may be fixable by recalibration,23 as seen in some of our cohorts, discrimination ability is essential for risk prediction.24 In our study, the ability of TRS2°P to discriminate the risk of subsequent cardiovascular outcomes among patients with MI was reasonably good.…”
Section: Discussion
confidence: 59%
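To illustrate why miscalibration is often "fixable" while poor discrimination is not: one common simple repair (a sketch, not the cited study's method) is an intercept update on the logit scale, chosen so the mean recalibrated risk matches the observed event rate. Because the shift is monotone, discrimination is untouched.

```python
import math

# Hedged sketch: recalibration-in-the-large. Shift every prediction by a
# constant on the logit scale, found here by bisection, so that the mean
# recalibrated risk equals the observed event rate. All data are made up.

def shift(p, a):
    """Add constant a to the logit of p."""
    return 1 / (1 + math.exp(-(math.log(p / (1 - p)) + a)))

def intercept_update(y_true, p_pred):
    """Bisection for the shift a that matches mean risk to the event rate."""
    target = sum(y_true) / len(y_true)
    lo, hi = -10.0, 10.0
    for _ in range(100):
        mid = (lo + hi) / 2
        mean_p = sum(shift(p, mid) for p in p_pred) / len(p_pred)
        if mean_p < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

y = [0, 1, 0, 0, 1, 0]                    # observed event rate 1/3
p = [0.5, 0.8, 0.6, 0.4, 0.9, 0.7]        # predictions systematically too high
a = intercept_update(y, p)                # negative shift expected
p_new = [shift(pi, a) for pi in p]
```

No monotone shift of this kind can rescue a model whose predictions fail to separate events from non-events in the first place, which is the point the excerpt makes.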