2019
DOI: 10.1111/anae.14818

Improving early warning scores – more data, better validation, the same response

Cited by 12 publications (14 citation statements)
References 15 publications
“…We thank Mackay et al. for their comments on our editorial, which accompanied the article by Chiu et al., and agree with the definition of external validation that they provide.…”
supporting
confidence: 64%
“…
Improving early warning scores – more data, better validation, the same response: a reply. We thank Mackay et al. [1] for their comments on our editorial [2], which accompanied the article by Chiu et al. [3], and agree with the definition of external validation that they provide. In broad terms, internal validation is concerned with the reproducibility of a prediction model, whereas external validation is concerned with the transportability of model predictions to other settings and populations.
…”
mentioning
confidence: 63%
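To make the distinction above concrete, the following sketch is purely illustrative and is not drawn from any of the cited papers: it assumes synthetic vital-sign-like data and scikit-learn, and contrasts cross-validated performance within a development cohort (internal validation) with performance in a separate cohort with a different case mix (external validation).

# Illustrative sketch only: synthetic data, not the method of the cited papers.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def make_cohort(n, shift=0.0):
    # Simulate four vital-sign-like predictors and a binary deterioration outcome.
    X = rng.normal(loc=shift, scale=1.0, size=(n, 4))
    logits = 0.8 * X[:, 0] + 0.5 * X[:, 1] - 0.3 * X[:, 2] - 1.0
    y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logits)))
    return X, y

# Development cohort (e.g. one hospital's medical patients).
X_dev, y_dev = make_cohort(5000)
# External cohort with a shifted case mix (e.g. a different setting or population).
X_ext, y_ext = make_cohort(3000, shift=0.4)

model = LogisticRegression()

# Internal validation: reproducibility within the development population
# (here, mean 5-fold cross-validated AUROC).
internal_auc = cross_val_score(model, X_dev, y_dev, cv=5, scoring="roc_auc").mean()

# External validation: transportability of the fitted model's predictions
# to a different setting and population.
model.fit(X_dev, y_dev)
external_auc = roc_auc_score(y_ext, model.predict_proba(X_ext)[:, 1])

print(f"Internal (cross-validated) AUROC: {internal_auc:.3f}")
print(f"External AUROC on new cohort:     {external_auc:.3f}")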
“…We thank Oglesby et al. for their interesting editorial accompanying our recent paper. We value their interpretation that a single standardised early warning score (EWS) may not be applicable to all types of patients and are grateful for their qualified support of the concept of future population‐specific EWS.…”
mentioning
confidence: 99%
“…Most prediction tools require further calibration in other populations and practice settings. For example, the performance of a clinical prediction tool can vary widely between specific patient populations, such as medical vs. surgical patients, or between surgical specialties. Previous analyses have suggested that the effect size estimates generated from large observational studies can be inflated and inaccurate, emphasising the need for prospective validation.…”
Section: Validation
mentioning
confidence: 99%
“…Most prediction tools require further calibration in other populations and practice settings. For example, the performance of a clinical prediction tool can vary widely between specific patient populations, such as medical vs. surgical patients, or between surgical specialties [12].…”
Section: The Unweighted Clinical Prediction Tool Presented By
mentioning
confidence: 99%