E-ASSESS: Creating an EPA Assessment Tool for Structured Simulated Emergency Scenarios (2020)
DOI: 10.4300/jgme-d-19-00533.1

Abstract: Background: The entrustable professional activity (EPA) assessment framework allows supervisors to assign entrustment levels to physician trainees for specific activities. Limited opportunity for direct observation of trainees hampers entrustment decisions, particularly for infrequently performed activities. Simulation allows for direct observation, so tools to assess performance of EPAs in simulation could provide additional data to complement clinical assessments. Objective: We developed and colle…

Cited by 6 publications (5 citation statements) · References 28 publications

“…Our participants' multiple meanings of entrustment and variable benchmarks when assessing EPAs align with the rater cognition literature, which describes significant idiosyncrasy and variable frames of reference among raters in medical education. [15][16][17][18] Prior studies have also demonstrated that entrustment scores vary more between raters than when the same raters score clinical or communication skills on an objective structured clinical exam, 11 that entrustment scores from SBAs and WBAs do not necessarily correlate, 12 and that there is cognitive variability in how raters interpret trainee performance when using an entrustment scale 36 - all findings that may be explained by rater idiosyncrasy or a lack of construct clarity. Although our participants' variable interpretations of entrustment go against the view that entrustment is intuitive, 3,4 their lack of sustained hands-on rater training could have contributed to the observed variability.…”
Section: Discussion
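
The claim in this passage, that entrustment scores vary more between raters than when the same raters score other skills, maps onto a standard reliability analysis. As a minimal sketch, assuming the pingouin library and entirely invented ratings (the trainees, raters, 1-5 scale, and scores below are illustrative assumptions, not data from the cited studies), interrater consistency could be quantified with an intraclass correlation coefficient:

# A minimal sketch, not from the cited studies: quantifying how much
# entrustment ratings vary between raters via an intraclass correlation
# coefficient (ICC). All trainees, raters, and scores are invented.
import pandas as pd
import pingouin as pg

ratings = pd.DataFrame({
    "trainee": list(range(1, 9)) * 3,
    "rater":   ["A"] * 8 + ["B"] * 8 + ["C"] * 8,
    "score":   [3, 4, 2, 5, 3, 4, 3, 2,   # rater A
                3, 5, 3, 4, 2, 4, 2, 3,   # rater B
                4, 4, 2, 5, 3, 3, 3, 2],  # rater C
})

# ICC2 (two-way random effects, absolute agreement, single rater)
# asks: how interchangeable is one rater's entrustment score with
# another's for the same trainee?
icc = pg.intraclass_corr(data=ratings, targets="trainee",
                         raters="rater", ratings="score")
print(icc.set_index("Type").loc["ICC2", ["ICC", "CI95%"]])

A low single-rater ICC alongside a higher average-rater ICC (the ICC2k row of the same output) would be consistent with the rater idiosyncrasy described above, since averaging across raters washes out individual frames of reference.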
“…Key questions include what influence these assessment settings may have on raters’ cognitive processes when assessing EPAs and how this influence affects how data from WBAs and SBAs can be used to make decisions in CBME programs. 14 Despite these unique challenges, many competence committees currently consider EPA assessments from WBAs and SBAs in their decision making, sometimes interchangeably, 12,27 a practice we argue needs appraisal.…”
“…Cultural change is needed, starting with integrated learning, a challenge still under construction (Kayani, Gilani and Mahboob, 2018). Organizing the assessment system is as fundamental as developing methods applicable to simulation stations (virtual or real) and to workplace assessment with feedback (Prins, Brondt and Malling, 2019; Andler et al., 2020).…”
Section: Tip 10
“…[1][2][3] In this issue of the Journal of Graduate Medical Education, Andler and colleagues present validity evidence for leveraging the simulation context to provide assessment data for entrustable professional activities (EPAs). 4 Unfortunately, they found their validity argument hampered by an unexpected finding: despite good interrater reliability for entrustment-based simulation assessment ratings and fair interrater reliability for similar entrustment-based clinical practice ratings, there were no correlations between them. The authors ponder possible explanations for this troublesome finding and suggest that since there was only “fair agreement at best” for some of the behaviors, rater variability might be an explanation for the lack of correlations.…”
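
The two statistics at issue in this commentary, interrater reliability and the correlation between simulation and clinical ratings, can be illustrated compactly. The sketch below is hypothetical: the paired ratings, the 1-5 entrustment scale, and the choice of weighted kappa and Spearman's rho are all assumptions for illustration, not the authors' actual analysis.

# A hypothetical illustration of the two analyses discussed above:
# interrater agreement on entrustment ratings, and correlation between
# simulation-based (SBA) and workplace-based (WBA) ratings for the
# same trainees. All numbers below are invented.
import numpy as np
from scipy.stats import spearmanr
from sklearn.metrics import cohen_kappa_score

# Entrustment ratings (1 = observe only ... 5 = may supervise others)
# from two raters watching the same ten simulated scenarios.
rater_a = np.array([3, 4, 2, 5, 3, 4, 3, 2, 4, 3])
rater_b = np.array([3, 4, 3, 5, 3, 4, 2, 2, 4, 4])

# Linearly weighted kappa suits ordinal scales: near-misses are
# penalized less than large disagreements.
kappa = cohen_kappa_score(rater_a, rater_b, weights="linear")
print(f"Interrater agreement (weighted kappa): {kappa:.2f}")

# Mean entrustment ratings for the same trainees in simulation
# versus in clinical practice.
sba_scores = np.array([3.0, 4.5, 2.5, 5.0, 3.0, 4.0, 2.5, 2.0, 4.0, 3.5])
wba_scores = np.array([4.0, 3.5, 3.0, 4.5, 2.5, 4.5, 3.5, 3.0, 3.5, 4.0])

# Spearman's rho handles ordinal data without assuming linearity;
# rho near zero would mirror the "no correlation" finding above.
rho, p_value = spearmanr(sba_scores, wba_scores)
print(f"SBA-WBA correlation (Spearman rho): {rho:.2f} (p = {p_value:.2f})")

High kappa within each setting combined with rho near zero across settings would reproduce the pattern the commentary calls troublesome: raters agree with one another, yet simulation and workplace ratings rank the same trainees differently.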