Telesimulation is an innovative process that has been used to provide education, training, and assessment in health-related fields such as medicine. This area of simulation, and its terminology, originated within the past decade. Its face validity and its ability to deliver the benefits of simulation education to learners at off-site locations have driven the wide and rapid adoption of telesimulation in medical education. Telesimulation has been implemented in areas such as pediatric resuscitation, surgery, emergency medicine, ultrasound-guided regional anesthesia, nursing, and neurosurgery. However, its rapid expansion and current use have outgrown the description offered when it was first introduced less than a decade ago. To date, there is no unifying definition of telesimulation that encompasses all the areas where it has been used while also allowing for growth and expansion in this field of study. This article has two main objectives. The first is to provide a comprehensive and unifying definition of telesimulation that meets these criteria. The second is to describe the utility of telesimulation for emergency medicine educators in the context of the current evidence, serving as a background and framework that educators may use when considering educational programs that incorporate telecommunication and simulation resources. This article complements the large group presentation in which this new comprehensive and unifying definition was introduced to the simulation community at the International
Purpose A previous study found that milestone ratings at the end of training were higher for male than for female residents in emergency medicine (EM). However, that study was restricted to a sample of 8 EM residency programs and used individual faculty ratings from milestone reporting forms that were designed for use by the program’s Clinical Competency Committee (CCC). The objective of this study was to investigate whether similar results would be found when examining the entire national cohort of EM milestone ratings reported by programs after CCC consensus review. Method This study examined longitudinal milestone ratings for all EM residents (n = 1,363; 125 programs) reported to the Accreditation Council for Graduate Medical Education every 6 months from 2014 to 2017. A multilevel linear regression model was used to estimate differences in slope for all subcompetencies, and predicted marginal means between genders were compared at time of graduation. Results There were small but statistically significant differences between males’ and females’ increase in ratings from initial rating to graduation on 6 of the 22 subcompetencies. Marginal mean comparisons at time of graduation demonstrated gender effects for 4 patient care subcompetencies. For these subcompetencies, males were rated as performing better than females; differences ranged from 0.048 to 0.074 milestone ratings. Conclusions In this national dataset of EM resident milestone assessments by CCCs, males and females were rated similarly at the end of their training for the majority of subcompetencies. Statistically significant but small absolute differences were noted in 4 patient care subcompetencies.
Background Emergency medicine (EM) residency programs can provide up to 20% of their planned didactic experiences asynchronously through the Individualized Interactive Instruction (III) initiative. Although blogs and podcasts provide potential material for III content, programs often struggle with identifying quality online content. Objective To develop and implement a process to curate quality EM content on blogs and podcasts for resident education and III credit. Methods We developed the Approved Instructional Resources (AIR) Series on the Academic Life in Emergency Medicine website. Monthly, an editorial board identifies, peer reviews, and writes assessment questions for high-quality blog/podcast content. Eight educators rate each post using a standardized scoring instrument. Posts scoring 30 of 35 points are awarded an AIR badge and featured in the series. Enrolled residents can complete an assessment quiz for III credit. After 12 months of implementation, we report on program feasibility, enrollment rate, web analytics, and resident satisfaction scores. Results As of June 2015, 65 EM residency programs are enrolled in the AIR Series, and 2140 AIR quizzes have been completed. A total of 96% (2064 of 2140) of participants agree or strongly agree that the activity would improve their clinical competency, 98% (2098 of 2140) plan to use the AIR Series for III credit, and 97% (2077 of 2140) plan to use it again in the future. Conclusions The AIR Series is a national asynchronous EM curriculum featuring quality blogs and podcasts. It uses a national expert panel and novel scoring instrument to peer review web-based educational resources.
Objectives: Effective feedback is critical to medical education. Little is known about emergency medicine (EM) attending and resident physician perceptions of feedback. The focus of this study was to examine perceptions of the educational feedback that attending physicians give to residents in the clinical environment of the emergency department (ED). The authors compared attending and resident satisfaction with real-time feedback and hypothesized that the two groups would report different overall satisfaction with the feedback they currently give and receive in the ED. Methods: This observational study surveyed attending and resident physicians at 17 EM residency programs through web-based surveys. The primary outcome was overall satisfaction with feedback in the ED, ranked on a 10-point scale. Additional survey items addressed specific aspects of feedback. Responses were compared using a linear generalized estimating equation (GEE) model for overall satisfaction, a logistic GEE model for dichotomized responses, and an ordinal logistic GEE model for ordinal responses. Results: Three hundred seventy-three of 525 (71%) attending physicians and 356 of 596 (60%) residents completed the survey. Attending physicians were more satisfied with overall feedback (mean score 5.97 vs. 5.29, p < 0.001) and with timeliness of feedback (odds ratio [OR] = 1.56, 95% confidence interval [CI] = 1.23 to 2.00; p < 0.001) than residents. Attending physicians were also more likely to rate the quality of feedback as very good or excellent for positive feedback, constructive feedback, feedback on procedures, documentation, management of ED flow, and evidence-based decision-making.
Attending physicians reported time constraints as the top obstacle to giving feedback and were more likely than residents to report that feedback is usually attending initiated (OR = 7.09, 95% CI = 3.53 to 14.31; p < 0.001). Conclusions: Attending physician satisfaction with the quality, timeliness, and frequency of feedback given is higher than resident physician satisfaction with feedback received. Attending and resident physicians have differing perceptions of who initiates feedback and how long it takes to provide effective feedback. Knowledge of these differences in perceptions about feedback may be used to direct future educational efforts to improve feedback in the ED. ACADEMIC EMERGENCY MEDICINE 2009; 16:S76-S81
Background To further evolve in an evidence-based fashion, medical education needs to develop and evaluate new practices for teaching, learning, and assessment. However, educators face barriers in designing, conducting, and publishing education research. Objective To explore the barriers medical educators face in formulating, conducting, and publishing high-quality medical education research, and to identify strategies for overcoming them. Methods A consensus workshop was held November 5, 2013, at the Association of American Medical Colleges annual meeting. A working group of education research experts and educators completed a preconference literature review focusing on barriers to education research. During the workshop, consensus-based and small group techniques were used to refine the broad themes into content categories. Attendees then ranked the most important barriers and strategies for overcoming them with the highest potential impact. Results Barriers participants faced in conducting quality education research included lack of (1) expertise, (2) time, (3) funding, (4) mentorship, and (5) reward. The strategy considered most effective in overcoming these barriers involved building communities of education researchers for collaboration and networking, and advocating for education researchers' interests. Other suggestions included trying to secure increased funding opportunities, developing mentoring programs, and encouraging mechanisms to ensure protected time. Conclusions Barriers to education research productivity clearly exist. Many appear to result from feelings of isolation that may be overcome with systemic efforts to develop and enable communities of practice across institutions. Finally, the theme of “reward” is novel and complex and may have implications for education research productivity.
A trained cadre of medical education scholars with a focus on methodologically sound research techniques is needed to ensure development of innovations that can be translated to educational practice, rigorous evaluation of instructional strategies, and progress toward improving patient care outcomes. Most established educational programs are aimed at existing faculty members and focus primarily on the development of teaching and leadership skills. At the 2012 Academic Emergency Medicine (AEM) consensus conference, “Education Research in Emergency Medicine: Opportunities, Challenges, and Strategies for Success,” a breakout session was convened to develop training recommendations for postgraduate fellowship programs in medical education scholarship that would enable residency graduates to join academic faculties armed with the skills needed to perform research in medical education. Additionally, these graduates would enjoy the benefits of established mentorships. A group of 23 medical education experts collaborated to address the following objectives: 1) construct a formal needs assessment for fellowship training in medical education scholarship in emergency medicine (EM), 2) compare and contrast current education scholarship programs in both EM and non‐EM specialties, and 3) develop a set of core curriculum guidelines for specialized fellowship training in medical education scholarship in EM. Fellowship‐trained faculty need to be proficient in learner instruction and assessment, organizational leadership, curriculum development, educational methodology, and conducting generalizable hypothesis‐driven research to improve patient care.