This paper describes the development and evaluation of training intended to enhance students' performance on their first live-animal ovariohysterectomy (OVH). Cognitive task analysis informed a seven-page lab manual, 30-minute video, and 46-item OVH checklist (categorized into nine surgery components and three phases of surgery). We compared two spay simulator models (higher-fidelity silicone versus lower-fidelity cloth and foam). Third-year veterinary students were randomly assigned to a training intervention: lab manual and video only; lab manual, video, and $675 silicone-based model; or lab manual, video, and $64 cloth and foam model. We then assessed transfer of training to a live-animal OVH. Chi-square analyses identified statistically significant differences between the interventions on four of nine surgery components, all three phases of surgery, and overall score. Odds ratio analyses indicated that training with a spay model improved the odds of attaining an excellent or good rating on 25 of 46 checklist items, six of nine surgery components, all three phases of surgery, and the overall score. Odds ratio analyses comparing the spay models indicated an advantage for the $675 silicone-based model on only 6 of 46 checklist items, three of nine surgery components, and one phase of surgery. Training with a spay model improved performance when compared to training with a manual and video only. Results suggested that training with a lower-fidelity/cost model may be as effective as training with a higher-fidelity/cost model. Further research is required to investigate the effects of simulator fidelity and cost on transfer of training to the operational environment.
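The odds ratio analyses described above compare, for each checklist item, the odds of an excellent/good rating between training groups. A minimal sketch of that calculation on a single 2x2 contingency table follows; the counts are entirely hypothetical (not taken from the study) and serve only to show the cross-product odds ratio and the accompanying Pearson chi-square statistic.

```python
# Hypothetical 2x2 counts for one checklist item (illustration only):
# rows = training group, columns = rating outcome.
table = {
    "model":        {"excellent_good": 18, "fair_poor": 6},
    "manual_video": {"excellent_good": 10, "fair_poor": 14},
}

a = table["model"]["excellent_good"]
b = table["model"]["fair_poor"]
c = table["manual_video"]["excellent_good"]
d = table["manual_video"]["fair_poor"]

# Cross-product odds ratio: odds of a good rating after model training
# relative to manual-and-video-only training.
odds_ratio = (a * d) / (b * c)

# Pearson chi-square statistic for the same table (1 degree of freedom).
n = a + b + c + d
expected = [
    (a + b) * (a + c) / n, (a + b) * (b + d) / n,
    (c + d) * (a + c) / n, (c + d) * (b + d) / n,
]
observed = [a, b, c, d]
chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))

print(f"OR = {odds_ratio:.2f}, chi-square = {chi2:.2f}")
```

An odds ratio above 1 with a significant chi-square would correspond to the kind of per-item advantage the abstract reports for model-trained students.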
Objective: To compare a low-fidelity foam and fabric (FF) model to a high-fidelity silicone (SI) model for teaching canine celiotomy closure. Study design: Prospective blinded comparison of learning outcomes. Sample population: Second-year veterinary students who had never performed surgery as a primary surgeon (n = 46) and veterinarians experienced in performing canine celiotomy (n = 10). Methods: Veterinary students performed a digitally recorded celiotomy closure on a canine cadaver before and after participation in 4 facilitated laboratory training sessions on their randomly assigned model. Recordings were scored by masked, trained educators with an 8-item task-specific rubric. Students completed surveys evaluating the models. Experienced veterinarians tested the models and provided feedback on their features. Results: Completed pretest and posttest recordings were available for 38 of 46 students. Students' performance improved regardless of the model used to practice (P = .04). The magnitude of improvement did not differ between the 2 groups (P = .10). All students (n = 46) described their models favorably. Ninety percent of veterinarians thought both models were helpful for training students and gave similar ratings on all measures except for realism, which was rated higher for the SI model's skin (median, agree) compared with the FF model (median, neutral; P = .02). Conclusion: Model-based training was effective at improving students' surgical skills. Less experienced learners achieved similar skill gains after practicing with FF or SI models. Clinical significance: The acquisition of surgical skills required to perform celiotomy closure in companion animals occurs similarly well on models made of foam and fabric or of silicone, providing flexibility in model selection.
Evaluation of veterinary students' surgical skills by using digital recordings with a validated rubric improves flexibility when designing accurate assessments.
The Objective Structured Clinical Examination (OSCE) is a valid, reliable assessment of veterinary students' clinical skills that requires significant examiner training and scoring time. This article investigates the utility of video recording by scoring OSCEs in real time with live examiners and subsequently with video examiners from within and outside the learners' home institution. Using checklists, learners (n = 33) were assessed by one live examiner and five video examiners on three OSCE stations: suturing, arthrocentesis, and thoracocentesis. When stations were considered collectively, there was no difference in pass/fail outcome between live and video examiners (χ2 = 0.37, p = .55). However, when considered individually, stations (χ2 = 16.64, p < .001) and the interaction between station and type of examiner (χ2 = 7.13, p = .03) demonstrated a significant effect on pass/fail outcome. Specifically, learners assessed on suturing by a video examiner had increased odds of passing the station compared with their arthrocentesis or thoracocentesis stations. Internal consistency was fair to moderate (0.34–0.45). Inter-rater reliability measures varied but were mostly moderate to strong (0.56–0.82). Video examiners spent longer assessing learners than live examiners (mean of 21 min/learner vs. 13 min/learner). Station-specific differences among video examiners may be due to intermittent visibility issues during video capture. Overall, video recording learner performances appears reliable and feasible, although there were time, cost, and technical issues that may limit its routine use.
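The inter-rater reliability figures above quantify agreement between live and video examiners beyond what chance alone would produce. A common statistic for two raters making pass/fail decisions is Cohen's kappa; the sketch below computes it on a fabricated set of decisions (not the study's data) purely to illustrate the calculation.

```python
# Hypothetical pass/fail decisions from a live examiner and one video
# examiner for the same eight learners (illustration only; not study data).
live  = ["pass", "pass", "fail", "pass", "fail", "pass", "pass", "fail"]
video = ["pass", "pass", "fail", "fail", "fail", "pass", "pass", "pass"]

n = len(live)

# Raw proportion of learners on whom the two examiners agreed.
observed_agreement = sum(l == v for l, v in zip(live, video)) / n

# Agreement expected by chance, from each rater's marginal pass rate.
p_live_pass = live.count("pass") / n
p_video_pass = video.count("pass") / n
expected_agreement = (p_live_pass * p_video_pass
                      + (1 - p_live_pass) * (1 - p_video_pass))

# Cohen's kappa: chance-corrected agreement.
kappa = (observed_agreement - expected_agreement) / (1 - expected_agreement)

print(f"agreement = {observed_agreement:.2f}, kappa = {kappa:.2f}")
```

Kappa values in the 0.56–0.82 range reported in the abstract would conventionally be read as moderate to strong agreement, whereas raw percent agreement alone overstates reliability when both raters pass most learners.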