Objectives: Crew resource management (CRM) training formats have become a popular method to increase patient safety by considering the role that human factors play in healthcare delivery. The purposes of this review were to identify what is subsumed under the label of CRM in a healthcare context and to determine how such training is delivered and evaluated.
Design: Systematic review of published literature.
Data sources: PubMed, PsycINFO and ERIC were searched through 8 October 2018.
Eligibility criteria for selecting studies: Individually constructed interventions for healthcare staff that were labelled as CRM training, or described as based on CRM principles or on aviation-derived human factors training. Only studies reporting both an intervention and results were included.
Data extraction and synthesis: The studies were examined and coded for relevant passages. Characteristics regarding intervention design, training conditions and evaluation methods were analysed and summarised both qualitatively and quantitatively.
Results: Sixty-one interventions were included. 48% did not explain any keyword of their CRM intervention in reproducible detail. Operating room and surgical teams, emergency medicine, intensive care unit staff and anaesthesiology came into contact with CRM most often, with a majority of the interventions delivered in a 1-day or half-day format. Trainer qualification was seldom reported. Evaluation methods and levels varied strongly.
Conclusions: Critical topics were identified for the CRM training community, including the need to agree on common terms and definitions for CRM in healthcare, standards of good practice for reporting CRM interventions and their effects, and the need for more research to establish non-educational criteria for success in the implementation of CRM in healthcare organisations.
Diagnostic efficiency is an important outcome variable in clinical reasoning research, as it corresponds to workplace challenges. Scaffolding for case representations significantly improved the diagnostic efficiency of fourth- and fifth-year medical students, most likely because of a more targeted screening of the available information.
Introduction: Clinical reasoning has been fostered with varying case formats, including the use of virtual patients. The existing literature points to different conclusions regarding which format is most beneficial for learners with diverse levels of prior knowledge. We designed our study to better understand how case format affects clinical reasoning outcomes and cognitive load, depending on medical students' prior knowledge.
Methods: Overall, 142 medical students (3rd to 6th year) were randomly assigned to either a whole-case or serial-cue case format. Participants worked on eight virtual patients in their respective case format. Outcomes included diagnostic accuracy, knowledge, and cognitive load.
Results: We found no effect of case format on strategic knowledge scores pre- vs post-test (whole case learning gain = 3, 95% CI −.01 to .01; serial cue learning gain = 3, 95% CI −.06 to .00; p = .50). In both case formats, students with high baseline knowledge (determined by a median split on the pre-test in conceptual knowledge) benefitted from learning with virtual patients (learning gain in strategic knowledge = 5, 95% CI .03 to .09, p = .01), while students with low prior knowledge did not (learning gain = 0, 95% CI −.02 to .02). We found no difference in diagnostic accuracy between experimental conditions (difference = .44, 95% CI −.96 to .08, p = .22), but diagnostic accuracy was higher for students with high prior knowledge compared with those with low prior knowledge (difference = .8, 95% CI 0.31 to 1.35, p < .01). Students with low prior knowledge experienced higher extraneous cognitive load than students with high prior knowledge (multiple measurements, p < .01).
Conclusions: The whole-case and serial-cue case formats alone did not affect students' knowledge gain or diagnostic accuracy. Students with lower knowledge experienced increased cognitive load and appear to have learned less from their interaction with virtual patients.
Cognitive load should be taken into account when attempting to help students learn clinical reasoning with virtual patients, especially for students with lower knowledge.