2016
DOI: 10.1080/0142159X.2017.1248916
Hawks, Doves and Rasch decisions: Understanding the influence of different cycles of an OSCE on students’ scores using Many Facet Rasch Modeling

Abstract: Notes on contributors: Peter Yeates is a lecturer in medical education and consultant in acute and respiratory medicine. His research focuses on assessor variability and assessor cognition within health professionals' education. Stefanie S. Sebok-Syer is a postdoctoral Fellow at the Centre for Education Research and Innovation specializing in measurement, assessment, and evaluation. Her main interests include exploring the rating behaviour of assessors, particularly in high-stakes assessment contexts.


Cited by 22 publications (26 citation statements); references 25 publications.
“…There is a longstanding interest in the effect of examiners on standards in performance assessments like OSCEs (Bartman et al 2013; Downing 2005; Fuller et al 2017; Harasym et al 2008; Jefferies et al 2007; McManus et al 2006; Pell et al 2010; Yeates et al 2018; Yeates and Sebok-Syer 2017). Variation in examiner stringency is usually conceptualised and measured based on the scores (or grades) that examiners produce within stations—i.e.…”
Section: Examiner Stringency as an Effect on Scores (mentioning)
confidence: 99%
“…Equally, as reliability is often influenced more by station specificity than by examiner variability, increasing the number of stations is likely to produce larger increases in reliability than examiner-focused approaches. 8 Conversely, many medical schools run OSCEs across multiple geographically dispersed sites, 18,46 in which the examiners at each site are drawn from clinicians who practise locally and who rarely interact with clinicians from other sites. In this (very common) instance it is reasonable to suggest that examiner cohorts could be systematically different in their practice norms and beliefs, the cohorts of trainees to whom they are exposed, their specialty mixes and their level of specialisation.…”
Section: Implications of Findings (mentioning)
confidence: 99%
“…They found that site differences variably explained between 1.5% and 17.1% of score variability. Yeates and Sebok‐Syer specifically addressed whether parallel examiner cohorts across different sites in the same medical school showed different standards of judgement. Their provisional results suggested that scores by different examiner cohorts differed by up to 4.4% of the assessment scale.…”
Section: Introduction (mentioning)
confidence: 99%
“…These studies from educational measurement provide early approaches for assessing skills such as collaboration in ways that capture aspects of both independent and interdependent dimensions of performance and characterise performance along a spectrum rather than creating a false dichotomy. Within medical education, a variation of the Rasch measurement model proposed by Wilson and colleagues has already been used to capture aspects of raters' collaborative performance. All of these aforementioned approaches require us to consider the collective in order to meaningfully assess authentic clinical performance.…”
Section: Discussion (mentioning)
confidence: 99%