Scores were found to have high reliability and demonstrated significant differences in performance by year of training. This provides evidence for the validity of using scores achieved on an OSCE as markers of progress in learners at different levels of training. Future studies will focus on assessing individual progress on the OSCE over time.
OBJECTIVES Currently, a 'pedagogical gap' exists in distributed medical education: distance educators teach medical students but typically do not have the opportunity to assess them in large-scale examinations such as the objective structured clinical examination (OSCE). We developed a remote examiner OSCE (reOSCE) that was integrated into a traditional OSCE to establish whether remote examination technology may be used to bridge this gap. The purpose of this study was to explore whether remote physician-examiners can replace on-site physician-examiners in an OSCE, and to determine the feasibility of this new examination method. METHODS Forty Year 3 medical students were randomised into six reOSCE stations that were incorporated into two tracks of a 10-station traditional OSCE. For the reOSCE stations, student performance was assessed by both a local examiner (LE) in the room and a remote examiner (RE) who viewed the OSCE encounters from a distance. The primary endpoint was the correlation of scores between LEs and REs across all reOSCE stations. The secondary endpoint was a post-OSCE survey of both REs and students. RESULTS Statistically significant correlations were found between LE and RE checklist scores for history-taking (r = 0.64 to 0.80), physical examination (r = 0.41 to 0.54) and management stations (r = 0.78). Correlations between LE and RE global ratings were more varied (r = 0.21 to 0.77); correlations on three of the six stations reached significance. Qualitative analysis of feedback from REs and students showed high acceptance of the reOSCE despite technological issues. CONCLUSIONS This preliminary study demonstrated that OSCE ratings by LEs and REs were reasonably comparable when checklists were used. Remote examination may be a feasible and acceptable way of assessing students' clinical skills, but further validity evidence will be required before it can be recommended for use in high-stakes examinations.
This study again indicated that transfusion knowledge is poor among internal medicine trainees and that it does not improve with additional years of training. Innovative strategies for transfusion education are urgently needed and should be rigorously assessed for efficacy.
Progress tests are one potential solution to the problem of removing (or at least lessening) the sting associated with assessment. If implemented with careful thought and consideration, progress tests can be used to support the type of deep, meaningful and continuous learning that we are trying to instill in our learners.
With the recent interest in competency-based education, educators are being challenged to develop more assessment opportunities. As such, there is increased demand for exam content development, which can be a very labor-intensive process. An innovative solution to this challenge has been the use of automatic item generation (AIG) to develop multiple-choice questions (MCQs). In AIG, computer technology is used to generate test items from cognitive models (i.e. representations of the knowledge and skills that are required to solve a problem). The main advantage yielded by AIG is the efficiency in generating items. Although technology for AIG relies on a linear programming approach, the same principles can also be used to improve traditional committee-based processes used in the development of MCQs. Using this approach, content experts deconstruct their clinical reasoning process to develop a cognitive model which, in turn, is used to create MCQs. This approach is appealing because it: (1) is efficient; (2) has been shown to produce items with psychometric properties comparable to those generated using a traditional approach; and (3) can be used to assess higher-order skills (i.e. application of knowledge). The purpose of this article is to provide a novel framework for the development of high-quality MCQs using cognitive models.
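The efficiency gain the abstract attributes to AIG comes from filling an item template with interchangeable elements drawn from a cognitive model. As a minimal illustrative sketch only (the toy cognitive model, the template wording and all names below are hypothetical, not taken from the article), item stems can be generated in Python from the cross-product of the model's elements:

```python
from itertools import product

# Toy cognitive model (hypothetical): each variable lists content
# elements an expert identified while deconstructing their reasoning.
cognitive_model = {
    "age": ["25-year-old", "68-year-old"],
    "symptom": ["acute chest pain", "shortness of breath"],
}

# Item template with placeholders matching the model's variables.
template = ("A {age} patient presents with {symptom}. "
            "What is the most appropriate next step in management?")

def generate_items(model, template):
    """Generate one MCQ stem per combination of model elements."""
    keys = list(model)
    stems = []
    for combo in product(*(model[k] for k in keys)):
        stems.append(template.format(**dict(zip(keys, combo))))
    return stems

stems = generate_items(cognitive_model, template)
# Two variables with two values each yield 2 * 2 = 4 item stems.
print(len(stems))  # → 4
```

A real AIG system would also vary the response options and constrain which element combinations are clinically plausible, which is where the linear-programming machinery mentioned in the abstract comes in; this sketch shows only the template-filling idea.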
Supplemental digital content for Humphrey-Murto S, LeBlanc A, Touchie C, et al. The influence of prior performance information on ratings of current performance and implications for learner handover: A scoping review. Acad Med.

1 hand-off?)).tw,kf. (4)
2 (forward adj (fed or feed?)).tw,kf. (13)
3 1 or 2 [LEARNER HANDOVER/FORWARD FEEDING] (17)
4 Employee Performance Appraisal/ (4651)
5 Educational Measurement/ (35169)
6 4 or 5 (39662)
7 (apprais* or assess* or evaluat* or judge* or judging or measur* or rate or rated or rates or rating or score or scored or scores or scoring).tw,kf. (8698684)
8 Clinical Clerkship/ (4709)
9 (apprentic* or clerkship?).tw,kf. (6457)
10 "Internship and Residency"/ (44954)
11 (intern or interns*).tw,kf. (7800)
12 (medical adj3 residen*).tw,kf. (6025)
13 (residency or residencies).tw,kf. (24972)
14 Preceptorship/ (4818)
15 (preceptorship? or practicum? or field training or in-field training).tw,kf. (2457)
16 exp Professional Competence/ (108623)
17 competen*.tw,kf. (110004)
18 Students, Medical/ (29524)
19 medical student?.tw,kf. (35252)
20 learner?.tw,kf. (11756)
21 student?.tw,kf. (245210)
22 trainee?.tw,kf. (21215)
23 or/8-22 [MEDICAL STUDENTS/PROFESSIONAL COMPETENCY] (487912)
24 7 and 23 (237964)
25 6 or 24 [APPRAISAL/ASSESSMENT OF COMPETENCY] (259807)
26 ((earlier or former or preceding or prior or past or previous*) adj5 (apprais* or assess* or evaluat* or judge* or judging or measur* or opinion*)).tw,kf. (77746)
Background The entrustable professional activity (EPA) framework has been identified as a useful approach to assessment in competency-based education. To apply an EPA framework for assessment, essential skills necessary for entrustment to occur must first be identified.