2010
DOI: 10.5688/aj740344

Scoring Objective Structured Clinical Examinations Using Video Monitors or Video Recordings

Abstract: Objective. To compare scoring methods for objective structured clinical examinations (OSCEs) using real-time observation via video monitors and observation of videotapes. Methods. Second-year (P2) and third-year (P3) doctor of pharmacy (PharmD) students completed 3-station OSCEs. Sixty encounters, 30 from each PharmD class, were selected at random and scored by faculty investigators observing video monitors in real time. One month later, the encounters were scored by the investigators using videotapes. Results. Intra…

Cited by 27 publications (12 citation statements)
References 7 publications (17 reference statements)
“…The authors point out that the range was similar to previously published interrater-reliability scores of live raters [23]. A second study with pharmacy students specifically studied the intra-rater reliability after one month [24]. The reliability was high; however, due to a higher stringency in the video rating, more candidates would have failed in the post-hoc assessment.…”
Section: Discussion (mentioning)
confidence: 99%
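The point this statement makes is easy to miss: a rater can agree almost perfectly with their own earlier scores and still be systematically stricter in one condition, which changes pass/fail outcomes at a fixed cutoff. The following minimal sketch illustrates that separation; the scores, shift, and cutoff are invented for illustration and are not taken from the cited study:

```python
import numpy as np

# Hypothetical illustration: video scores track live scores almost
# perfectly (high intra-rater reliability) while sitting uniformly
# lower, so more candidates fall below a fixed pass mark.
rng = np.random.default_rng(0)

live = rng.normal(75, 8, size=60)             # real-time scores, 0-100 scale
video = live - 4 + rng.normal(0, 1, size=60)  # same rater, stricter on video

r = np.corrcoef(live, video)[0, 1]            # consistency as Pearson r
cutoff = 70.0
print(f"live vs. video correlation: {r:.3f}")
print(f"failures (live):  {(live < cutoff).sum()}")
print(f"failures (video): {(video < cutoff).sum()}")
```

The correlation comes out near 0.99, yet the video column produces noticeably more failures, mirroring the pattern the statement describes.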
“…Second, the time and method of observation differed between the faculty evaluators and the PIs, which may have inadvertently impacted the scores. Other studies have found real-time and video observations may not be interchangeable. The PIs scored the students in the 5 minutes after the case, whereas the faculty observers had as much time as they needed to view the videos and evaluate the performance.…”
Section: Limitations (mentioning)
confidence: 99%
“…In the pharmacy education literature, there is an example of using an ICC to determine the degree of agreement between the analytical checklist scores obtained in 2 different conditions (real-time and video).24 The ICC was 0.951, which the authors interpreted as high agreement (values of less than 0.4 indicated poor agreement; between 0.4 and 0.8, fair to good agreement; and greater than 0.8, excellent agreement). An example of Cohen kappa can be found in an analysis of letter grades from the 2008-2009 academic year using 2 independent faculty evaluations representing categorical-level data.…”
Section: Inter-rater Reliability (mentioning)
confidence: 99%
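For readers who want to run that kind of agreement check themselves, here is a minimal, self-contained sketch of ICC(2,1) (two-way random effects, absolute agreement, single rater) with the interpretation bands quoted above. The formula is the standard Shrout–Fleiss one, not code from the cited study, and the sample scores are hypothetical:

```python
import numpy as np

def icc_2_1(scores: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    `scores` is an (n_targets, k_raters) array -- here, one column per
    scoring condition (real-time, video). Textbook formula, not code
    from the cited study.
    """
    n, k = scores.shape
    grand = scores.mean()
    row_means = scores.mean(axis=1)   # one mean per target (encounter)
    col_means = scores.mean(axis=0)   # one mean per condition

    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_total = ((scores - grand) ** 2).sum()
    ss_error = ss_total - ss_rows - ss_cols

    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_error = ss_error / ((n - 1) * (k - 1))

    return (ms_rows - ms_error) / (
        ms_rows + (k - 1) * ms_error + k * (ms_cols - ms_error) / n
    )

def interpret(icc: float) -> str:
    # Bands quoted in the citation statement above.
    if icc < 0.4:
        return "poor"
    if icc <= 0.8:
        return "fair to good"
    return "excellent"

# Hypothetical checklist scores: 10 encounters scored in both conditions.
rng = np.random.default_rng(1)
real_time = rng.integers(60, 100, size=10).astype(float)
video = real_time + rng.normal(-2, 2, size=10)

icc = icc_2_1(np.column_stack([real_time, video]))
print(f"ICC = {icc:.3f} ({interpret(icc)} agreement)")
```

For the categorical letter-grade comparison the quote also mentions, scikit-learn's `cohen_kappa_score` (in `sklearn.metrics`) computes Cohen kappa directly from two raters' labels.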
“…The researchers reported high rubric reliability (r = 0.98) and recommended that several grades be adjusted to eliminate evaluator leniency, although they concluded that evaluator leniency appeared minimal.24…”
Section: Rasch Analysis (mentioning)
confidence: 99%