This paper describes the development and evaluation of training intended to enhance students' performance on their first live-animal ovariohysterectomy (OVH). Cognitive task analysis informed a seven-page lab manual, a 30-minute video, and a 46-item OVH checklist (categorized into nine surgery components and three phases of surgery). We compared two spay simulator models (higher-fidelity silicone versus lower-fidelity cloth and foam). Third-year veterinary students were randomly assigned to one of three training interventions: lab manual and video only; lab manual, video, and $675 silicone-based model; or lab manual, video, and $64 cloth and foam model. We then assessed transfer of training to a live-animal OVH. Chi-square analyses identified statistically significant differences between the interventions on four of nine surgery components, all three phases of surgery, and the overall score. Odds ratio analyses indicated that training with a spay model improved the odds of attaining an excellent or good rating on 25 of 46 checklist items, six of nine surgery components, all three phases of surgery, and the overall score. Odds ratio analyses comparing the spay models indicated an advantage for the $675 silicone-based model on only 6 of 46 checklist items, three of nine surgery components, and one phase of surgery. Training with a spay model improved performance compared to training with a manual and video only. Results suggested that a lower-fidelity, lower-cost model may be as effective as a higher-fidelity, higher-cost model. Further research is required to investigate the effects of simulator fidelity and cost on transfer of training to the operational environment.
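The odds ratio analyses above compare, for each checklist item, the odds of an excellent or good rating between training conditions. A minimal sketch of that calculation, using made-up counts rather than the study's data:

```python
# Minimal sketch of an odds ratio for one checklist item: odds of an
# "excellent/good" rating with spay-model training vs. manual-and-video only.
# All counts below are illustrative placeholders, not the study's data.

def odds_ratio(a, b, c, d):
    """2x2 table: a/b = good/poor ratings with model training,
    c/d = good/poor ratings without."""
    return (a / b) / (c / d)

# e.g. 20 of 30 model-trained students rated good vs. 10 of 30 controls
print(odds_ratio(20, 10, 10, 20))  # prints 4.0
```

An odds ratio above 1 favors the model-trained group; the study reported such an advantage on 25 of 46 checklist items.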
IMPORTANCE National organizations recommend that medical schools train students in the social determinants of health. OBJECTIVE To develop and evaluate a longitudinal health equity curriculum that was integrated into third-year clinical clerkships and provided experiential learning in partnership with community organizations. DESIGN, SETTING, AND PARTICIPANTS This longitudinal cohort study was conducted from June 2017 to October 2020 to evaluate the association of the curriculum with medical students' self-reported knowledge of social determinants of health and confidence working with underserved populations. Students from 1 large medical school in the southeastern US were included. Students in the class of 2019 and class of 2020 were surveyed at baseline (before the start of their third year), at the end of the third year, and at graduation. The class of 2018 (no curriculum) was surveyed at graduation to serve as a control. Data analysis was conducted from June to September 2020. EXPOSURES The curriculum began with a health equity simulation followed by a series of modules. The class of 2019 participated in the simulation and piloted the initial 3 modules (pilot), and the class of 2020 participated in the simulation and the full 9 modules (full). MAIN OUTCOMES AND MEASURES A linear mixed-effects model was used to evaluate the change in self-reported knowledge and confidence scores over time (potential scores ranged from 0 to 32, with higher scores indicating higher self-reported knowledge and confidence working with underserved populations). In secondary analyses, a Kruskal-Wallis test was conducted to compare graduation scores between the no, pilot, and full curriculum classes. RESULTS A total of 314 students (160 women [51.0%], 205 [65.3%] non-Hispanic White participants) completed at least 1 survey, including 125 students in the pilot, 121 in the full, and 68 in the no curriculum classes. One hundred forty-one students (44.9%) were interested in primary care.
Total self-reported knowledge and confidence scores increased between baseline and end of clerkship (15.4 vs 23.7, P = .001) and between baseline and graduation (15.4 vs 23.7, P = .001) for the pilot and full curriculum classes. Total scores at graduation were higher for the pilot curriculum class (median, 24.0; interquartile range [IQR], 21.0-27.0; P = .001) and full curriculum class (median, 23.0; IQR, 20.0-26.0; P = .01) compared with the no curriculum class (median, 20.5; IQR, 16.25-24.0).
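The secondary analysis above uses a Kruskal-Wallis test, a rank-based comparison of the three classes' graduation scores. A hedged sketch of the H statistic it computes, using made-up scores rather than the study's data (tie correction omitted for brevity):

```python
# Sketch of the Kruskal-Wallis H statistic used to compare graduation
# scores across the no/pilot/full curriculum classes.
# Group scores below are illustrative only, not the study's data.

def kruskal_wallis_h(groups):
    # Pool all observations and assign average 1-based ranks (handles ties).
    pooled = sorted(x for g in groups for x in g)
    n = len(pooled)
    ranks = {}
    i = 0
    while i < n:
        j = i
        while j < n and pooled[j] == pooled[i]:
            j += 1
        ranks[pooled[i]] = (i + j + 1) / 2  # average of ranks i+1..j
        i = j
    # H = 12 / (n(n+1)) * sum(R_g^2 / n_g) - 3(n+1), ties uncorrected
    h = 0.0
    for g in groups:
        r_g = sum(ranks[x] for x in g)
        h += r_g ** 2 / len(g)
    return 12 / (n * (n + 1)) * h - 3 * (n + 1)

no_curr = [20, 18, 21, 16]
pilot = [24, 26, 22, 27]
full = [23, 25, 20, 26]
print(round(kruskal_wallis_h([no_curr, pilot, full]), 2))  # prints 6.47
```

A large H (compared against a chi-squared distribution with groups − 1 degrees of freedom) indicates that at least one class's score distribution differs from the others.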
Background: The reliability of Objective Structured Clinical Examinations (OSCEs) depends on variance introduced by examiners, stations, items, standardized patients (SPs), and the interaction of one or more of these factors with the candidates. The impact of SPs on reliability has not been well studied. Accordingly, the main purpose of the present study was to assess the accuracy of portrayal by standardized patients. Methods: Four stations from a ten-station high-stakes OSCE were selected for video recording. Because of the large number of candidates to be evaluated, the OSCE was administered using four assessment tracks. Four SPs were trained for each case (n = 16). Two physician assessors were trained to assess the accuracy of SP portrayal using a station-specific instrument based on the station guidelines. For items with disagreement, a third physician was asked to review, and the mode was used for analysis. Each instrument included case-specific items on verbal and physical portrayal rated on a 3-point scale ("yes", "yes, but", and "not done"). The physician assessors also scored each SP's overall performance on a 5-point anchored global rating scale ("very poor", "poor", "ok", "good", and "very good"). SPs at location 1 were trained by one trainer and SPs at location 2 by another. All SPs had been employed in a high-stakes OSCE at least once before. Results: The reliability of rating scores ranged from a Cronbach's alpha of .40 to .74. Verbal portrayal by SPs did not differ significantly for most items; however, the facial expressions of the SPs differed significantly (p < .05). An emergency management station that depended heavily on the SPs' physical presentation and facial expressions differed among all four SPs trained for that station. Conclusions: Variation in trained SP portrayal of the same station across different tracks and at different times in an OSCE may contribute substantial error to OSCE assessments. The training of SPs should be strengthened and constantly monitored during the exam to ensure that examinees' scores are a true reflection of their competency and free of measurement error.
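The reliability figures above (Cronbach's alpha of .40 to .74) measure internal consistency across rating items. A minimal sketch of the alpha formula, with illustrative ratings rather than the study's data:

```python
# Sketch of Cronbach's alpha: k/(k-1) * (1 - sum(item variances)/total variance).
# The ratings below are made-up placeholders, not the study's data.

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)  # sample variance

def cronbach_alpha(ratings):
    """ratings: one row per ratee, each row a list of k item scores."""
    k = len(ratings[0])
    item_vars = [variance([row[i] for row in ratings]) for i in range(k)]
    total_var = variance([sum(row) for row in ratings])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

scores = [[2, 2, 1], [1, 1, 1], [2, 1, 2], [0, 1, 0]]  # 4 SPs x 3 items
print(round(cronbach_alpha(scores), 2))  # prints 0.75
```

Alpha approaches 1 when items move together across ratees; the wide .40-.74 range reported above suggests uneven consistency across stations.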
Current teaching approaches in human and veterinary medicine across North America, Europe, and Australia include lectures, group discussions, feedback, role-play, and web-based training. Increasing class sizes, changing learning preferences, and economic and logistical challenges are influencing the design and delivery of communication skills training in veterinary undergraduate education. The study's objectives were to (1) assess the effectiveness of small-group and web-based methods for teaching communication skills and (2) identify which training method is more effective in helping students develop communication skills. At the Ross University School of Veterinary Medicine (RUSVM), 96 students were randomly assigned to one of three groups (control, web, or small-group training) in a pre-intervention and post-intervention group design. An Objective Structured Clinical Examination (OSCE) was used to measure communication competence within and across the intervention and control groups. Reliability of the OSCEs was determined by generalizability theory to be 0.65 (pre-intervention OSCE) and 0.70 (post-intervention OSCE). Study results showed that (1) small-group training was the most effective teaching approach in enhancing communication skills and resulted in students scoring significantly higher on the post-intervention OSCE compared to the web-based and control groups, (2) web-based training resulted in significant though considerably smaller improvement in skills than small-group training, and (3) the control group demonstrated the lowest mean difference between the pre-intervention and post-intervention OSCE scores, reinforcing the need to teach communication skills. Furthermore, small-group training had a significant effect in improving skills derived from the initial phase of the consultation and skills related to giving information and planning.
The DVM program at the University of Calgary offers a Clinical Skills course each year for the first three years. The course is designed to teach students the procedural skills required for entry-level general veterinary practice. Objective Structured Clinical Examinations (OSCEs) were used to assess students' performance on these procedural skills. A series of three OSCEs was developed for the first year. Content was determined by an exam blueprint, exam scoring sheets were created, rater training was provided, a mock OSCE was performed with faculty and staff, and the criterion-referenced Ebel method was used by two content experts to set cut scores for each station. Each station and the overall exam were graded as pass or fail. Thirty first-year DVM students were assessed. Content validity was ensured by the exam blueprint and expert review. Reliability (coefficient α) of the stations from the three OSCE exams ranged from 0.0 to 0.71. Exam reliabilities (generalizability theory) were G=0.56 for OSCE 1, G=0.37 for OSCE 2, and G=0.32 for OSCE 3. Preliminary analysis suggested that the OSCEs demonstrate face and content validity, and certain stations demonstrated adequate reliability. Overall exam reliability was low, which reflects issues with first-time exam delivery. Because this year was the first in which this course was taught and this exam format was used, work continues in the program on the teaching of procedural skills and the development and revision of OSCE stations and scoring checklists.