Objective: To establish the role of high-fidelity simulation in testing the efficacy and safety of the electronic health record (EHR) user interface within the intensive care unit (ICU) environment.
Design: Prospective pilot study.
Setting: Medical ICU in an academic medical centre.
Participants: Postgraduate medical trainees.
Interventions: A simulated 5-day ICU patient course was developed in the EHR, including labs, hourly vitals, medication administration, ventilator settings, nursing documentation, and notes. Fourteen medical issues requiring recognition and subsequent changes in management were embedded in the case. Issues were chosen based on their frequency of occurrence in the ICU and their ability to test different aspects of the EHR user interface. ICU residents, blinded to the presence of medical errors within the case, were provided a sign-out and given 10 min to review the case in the EHR. They then presented the case, with their management suggestions, to an attending physician. Participants were graded on the number of issues identified, and all received immediate feedback upon completing the simulation.
Primary and secondary outcomes: To determine the frequency of error recognition in an EHR simulation, and to identify factors associated with improved performance.
Results: 38 participants were tested: 9 interns, 10 residents, and 19 fellows. The average error recognition rate was 41% (range 6–73%) and increased slightly with level of training (35%, 41%, and 50% for interns, residents, and fellows, respectively). Over-sedation was the least-recognised error (16%); poor glycaemic control was the most often recognised (68%). Only 32% of participants recognised inappropriate antibiotic dosing. Performance correlated with the total number of EHR screens used (p=0.03).
Conclusions: Despite the development of comprehensive EHRs, significant gaps remain in identifying dangerous medical management issues. These gaps persist even at high levels of medical training, suggesting that EHR-specific training may be beneficial. Simulation provides a novel tool both to identify these gaps and to foster EHR-specific training.
During interprofessional intensive care unit (ICU) rounds, each member of the interprofessional team is responsible for gathering and interpreting information from the electronic health record (EHR) to facilitate effective team decision-making. This study was conducted to determine how each professional group reviews EHR data in preparation for rounds and how well each identifies patient safety issues. Twenty-five physicians, 29 nurses, and 20 pharmacists participated. Individual participants were given verbal and written sign-out and then asked to review a simulated record, containing 14 patient safety items, in our institution's EHR. After reviewing the chart, participants presented the patient, and the number of safety items recognised was recorded. Physicians, nurses, and pharmacists recognised about 40%, 30%, and 26% of safety issues, respectively (p=0.0006), and no item was recognised 100% of the time. There was little overlap between the three groups: even with all three professions combined, only 50% of items were predicted to be recognised 100% of the time by the team. Differential recognition was associated with marked differences in EHR use: only 3 of 152 EHR screens were utilised by all three groups, and the majority of screens were used exclusively by one group. In summary, there were significant and non-overlapping differences between professions in recognition of patient safety issues in the EHR. Preferential identification of safety issues by certain professional groups may be attributable to differences in EHR use. Future studies are needed to determine whether shared decision-making during rounds can improve recognition of safety issues.
Background: With the widespread adoption of electronic health records (EHRs), there is growing awareness of problems in EHR training for new users and of subsequent problems with the quality of information in EHR-generated progress notes. By standardising the case, simulation allows for the discovery of patterns of EHR use and provides a modality to aid in EHR training.
Background: The increasing adoption of electronic health records (EHRs) has been associated with a number of unintended negative consequences for provider efficiency and job satisfaction. To address this, there has been a dramatic increase in the use of medical scribes to perform many of the required EHR functions. Despite this rapid growth, little has been published on training or assessment tools to appraise the safety and efficacy of scribe-related EHR activities. Given reports documenting performance errors in EHR interface use and data gathering among other professional groups, scribes likely face similar challenges. This highlights the need for new assessment tools for medical scribes.
Objective: The objective of this study was to develop a virtual, video-based simulation to demonstrate and quantify the variability and accuracy of scribes' transcribed notes in the EHR.
Methods: From a pool of 8 scribes in one department, 5 female scribes intent on pursuing careers in health care were recruited, each with at least 6 months of experience both with our EHR and in the specialty of the simulated cases. We created three simulated patient-provider scenarios, each with a corresponding medical record in a simulation instance of our EHR, and video-recorded a standardised patient-provider encounter for each. Each scribe watched the simulated encounters and transcribed notes into the simulated EHR environment. Transcribed notes were evaluated for interscribe variability and compared with a gold standard for accuracy.
Results: All scribes completed all simulated cases. There was significant interscribe variability in note structure and content. Overall, only 26% of all data elements were unique to the scribe writing them.
A data element was defined as an individual piece of information that a scribe perceived from the simulation. Note length, determined by word count, varied by 31%, 37%, and 57% between the longest and shortest notes across the three cases, and word economy ranged from 23% to 71%. Accuracy showed wide inter- and intrascribe variation across note sections, ranging from 50% to 76%, yielding an overall positive predictive value for each note between 38% and 81%.
Conclusions: We created a high-fidelity, video-based EHR simulation capable of assessing multiple performance indicators in medical scribes. In this cohort, we demonstrated significant variability in both the structure and the accuracy of clinical documentation. This form of simulation can provide a valuable tool for future development of scribe curricula and assessment of competency.
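The per-note accuracy metric can be illustrated with a minimal sketch. The function and the element representation below are hypothetical simplifications, not the study's actual scoring rubric: here a note is modelled as a list of discrete data-element strings, and positive predictive value (PPV) is the fraction of transcribed elements that appear in the gold-standard record.

```python
def note_ppv(transcribed, gold):
    """Positive predictive value of a transcribed note: the fraction
    of its data elements that match the gold-standard record.
    `transcribed` is a list of data-element strings; `gold` is a set
    of the data elements actually present in the encounter."""
    if not transcribed:
        return 0.0
    correct = sum(1 for element in transcribed if element in gold)
    return correct / len(transcribed)

# Hypothetical example: a scribe captures three elements,
# two of which appear in the gold-standard record.
gold_standard = {"cc: chest pain", "bp 128/84", "hr 92", "no fever"}
scribe_note = ["cc: chest pain", "bp 128/84", "hr 90"]
print(note_ppv(scribe_note, gold_standard))  # 2 of 3 elements correct
```

A real scoring rubric would also need fuzzy matching of equivalent phrasings and section-level breakdowns; this sketch only shows how a single PPV figure in the 38–81% range could arise from element-level comparison.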