2020
DOI: 10.1111/medu.14357
Examiners’ decision‐making processes in observation‐based clinical examinations

Abstract: Background Objective structured clinical examinations (OSCEs) are commonly used to assess the clinical skills of health professional students. Examiner judgement is one acknowledged source of variation in candidate marks. This paper reports an exploration of examiner decision making to better characterise the cognitive processes and workload associated with making judgements of clinical performance in exit‐level OSCEs. Methods Fifty‐five examiners for exit‐level OSCEs at five Australian medical schools complet…

Cited by 14 publications (14 citation statements). References 67 publications (80 reference statements).
“…1). It seems there is little specific literature in this area, and it might be valuable to further investigate qualitatively the different processes of how and why examiners might award grades and scores differently (Malau-Aduli et al., 2021; Tavares & Eva, 2013). Under examinee-centred methods of standard setting (e.g.…”
Section: Discussion
confidence: 99%
“…This mental "shortcut" is faster and reduces cognitive complexity when working memory is overloaded by time pressure or increased complexity, and is also more likely in experienced assessors who recognize patterns more quickly (14, 16, 17). Applying this concept to our previous study, assessors appeared to use a representativeness heuristic to consider "how much does the observed clinical performance of a senior medical student compare with what I expect of a 'prototypical' intern" (12)? Such representational heuristics may be influenced by assessors' roles and experiences, contrast effects, use of inference, working memory effects, different interpretations of behaviors, predisposition to consider a particular perspective (e.g., of the learner or patient), different pre-existing frames of reference, exposure to different learner cohorts, and the examiners' own clinical skills and perceptions of task difficulty (18–21).…”
Section: Introduction
confidence: 95%
“…This highlights a need for greater understanding of the cognitive processes of clinical assessors to inform strategies that enhance fair and robust judgements. Our previous research showed that judging candidate performance is complex, cognitively challenging and mentally demanding, particularly when borderline performance is observed in an "exit" Objective Structured Clinical Examination (OSCE) (12). In this "grey" zone of candidate performance, assessors used academic institutional marking criteria as a "safety blanket" to guide judgement, but also used additional criteria that were not necessarily explicit in the marking sheet, based on professional expectations (candidate demeanor and patient safety).…”
Section: Introduction
confidence: 99%
“…In this context, assessor judgements are guided by prescribed expectations and scoring criteria, provided by assessment academics, outlined on the OSCE sheet. The judgements offered are not ‘objective’, however, as my colleagues and I have noted that the process of making judgements about student performance involves combining analytical and affective elements with an intuitive drive to rate candidates against a personal construct of a ‘prototypical intern’ (7). In making such a comparison, the specific, time‐limited, and pre‐scripted nature of OSCE encounters may require assessors to adapt their expectations of learner performance to take into account situational constraints.…”
confidence: 99%
“…The judgements offered are not ‘objective’, however, as my colleagues and I have noted that the process of making judgements about student performance involves combining analytical and affective elements with an intuitive drive to rate candidates against a personal construct of a ‘prototypical intern’ (7). In making such a comparison, the specific, time-limited, and pre-scripted nature of OSCE encounters may require assessors to adapt their expectations of learner performance to take into account situational constraints. In a follow-up study (8), we explored OSCE assessors' perceptions of the ‘prototypical’ construct by applying a theoretical framework (Cultural Historical Activity Theory [CHAT]) that enabled examination of the complexity inherent in making assessment decisions.…”
confidence: 99%