2013
DOI: 10.1111/anae.12255
The relative reliability of actively participating and passively observing raters in a simulation‐based assessment for selection to specialty training in anaesthesia

Abstract: Selection to specialty training is a high-stakes assessment demanding valuable consultant time. In one initial entry level and two higher level anaesthesia selection centres, we investigated the feasibility of using staff participating in simulation scenarios, rather than observing consultants, to rate candidate performance. We compared participant and observer scores using four different outcomes: inter-rater reliability; score distributions; correlation of candidate rankings; and percentage of candidates…
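As a rough illustration of the kind of comparison the abstract names (not the study's actual analysis, which is not detailed here), the sketch below computes a rank correlation between participant and observer scores, summarises their score distributions, and reports the percentage of candidates above a cut-off. All numbers, the 1–5 rating scale and the cut-off of 3.0 are invented assumptions.

```python
# Hypothetical sketch: comparing ratings from actively participating staff
# with ratings from passively observing consultants. Scores are invented
# purely for illustration; they are not data from the study.
import numpy as np
from scipy.stats import spearmanr

# One entry per candidate (assumed 1-5 global ratings).
participant_scores = np.array([4.0, 3.5, 2.0, 4.5, 3.0, 2.5, 4.0, 1.5])
observer_scores    = np.array([4.5, 3.0, 2.5, 4.0, 3.5, 2.0, 4.0, 2.0])

# Correlation of candidate rankings between the two rater groups.
rho, p_value = spearmanr(participant_scores, observer_scores)
print(f"Spearman rank correlation: {rho:.2f} (p = {p_value:.3f})")

# Score distributions: central tendency and spread for each rater group.
for name, scores in [("participant", participant_scores), ("observer", observer_scores)]:
    print(f"{name}: mean = {scores.mean():.2f}, SD = {scores.std(ddof=1):.2f}")

# Percentage of candidates at or above a hypothetical cut-off score.
cutoff = 3.0
for name, scores in [("participant", participant_scores), ("observer", observer_scores)]:
    pct = 100 * np.mean(scores >= cutoff)
    print(f"{name}: {pct:.0f}% of candidates at or above the cut-off")
```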

Cited by 7 publications (5 citation statements). References 38 publications.
“…Implementing an SC as part of a process for selecting medical students may be logistically complex. It requires the recruitment and training of faculty raters, and ongoing collaboration among academic and professional institutions and experts in different operational aspects of the process (including simulation, evaluation and measurement). Moreover, as SCs are based on a multi‐trait, multi‐method design, they may comprise a large number of elements in different combinations and orders, meaning that the process by which an SC is designed and administered may influence the utility of the method.…”
Section: Results (confidence: 99%)
“…have shown that the SC method can be expensive compared with other selection methods (approximately US$300 per candidate) and represents a logistically complex option, although on balance they still advocate SCs for use in medical school selection. Roberts et al. investigated the feasibility of having health care staff participate in simulated scenarios as raters in order to minimise the human resources required to implement an SC.…”
Section: Results (confidence: 99%)
“…One article (Patterson et al 2014) was a qualitative study exploring competency models to improve uniformity and calibration of the overall process. Two articles described the Australian GP selection center process, two quantitatively (Roberts et al 2014; Patterson, Rowett, et al 2016) and one qualitatively (Burgess et al 2014), two describing a selection center approach into anesthetics training (Gale et al 2010; Roberts et al 2013), three describing the UK GP selection center approach (Mitchison 2009; Patterson, Baron, et al 2009; Patterson, Lievens, et al 2013) and a systematic review (Patterson, Knight, et al 2016).…”
Section: Selection Framework Based On Well-defined Criteria With Mul… (confidence: 99%)
“…Regardless of whether ratings are done virtually or in person, there are costs associated with the logistics and labor for each iteration. Further, rigorous training and frequent calibration is needed to obtain reliable and valid ratings, requiring additional time and resources (Cash, Hamre, Pianta, & Myers, 2012; Lievens, 2001; Roberts, Gale, Sice, & Anderson, 2013; Sugita, 2012). In addition to the labor costs associated with the ongoing use of human assessors, an automated scoring system may be expensive to initially develop (Bell et al, 2008), but once in place, it can be used an unlimited number of times with no increase in operating expenses (Luck et al, 2006).…”
Section: Why Replace Human Raters? (confidence: 99%)