Background: To promote a scholarly approach to education and provide ongoing professional development for emergency medicine (EM) educators, the CORD Academy proposed a critical appraisal series to explore important, timely, and relevant education topics. This inaugural installment of the series addresses the topic of feedback in medical education.

Objective: To review and critically appraise the medical education literature pertaining to feedback and to highlight influential papers that inform our current understanding of the role of feedback in medical education.

Methods: A search of the English-language literature querying Education Resources Information Center (ERIC), PsycINFO, PubMed, and Scopus identified 327 feedback-related papers using quantitative methods (hypothesis-testing or observational investigations of educational interventions), qualitative methods (exploring important phenomena in EM education), or review methods. Two reviewers independently screened each category of publications using previously established exclusion criteria. Six reviewers then independently scored the remaining 54 publications using a qualitative, quantitative, or review-paper scoring system. Each scoring system consisted of nine criteria and used parallel scoring metrics previously applied in critical appraisals of education research.

Results: Fifty-four feedback papers (25 quantitative studies, 24 qualitative studies, and five review papers) met the a priori criteria for inclusion and were reviewed. Eight quantitative studies, nine qualitative studies, and three review papers were ranked highly by the reviewers and are summarized in this article.
Despite widespread acknowledgement of the importance of feedback in improving learner performance, both learners and educators express dissatisfaction with the quality and quantity of feedback in the ED, and recent literature suggests that even when feedback is delivered, factors related to learner confidence, emotion, the learner-educator relationship, and culture may affect the likelihood that the feedback is received, incorporated, and translated into performance improvement.[8][9][10][11][12][13] This critical appraisal applies a previously published method14 to search, critically appraise, and summarize the top quantitative and qualitative papers on feedback in medical education. Established scoring instruments for quantitative and qualitative papers were piloted and revised as necessary for implementation in this context, and an instrument for scoring review papers was adapted from the qualitative instrument, piloted, and revised as well. The aim of this critical appraisal is to provide a summary of the top-scoring papers on feedback in medical education, highlight practical implications for EM educators, and suggest important next steps for future research.

Methods, Article Identification: A research librarian performed the literature search, querying Education Resources Information Center (ERIC), PsycINFO, PubMed, ...
The Residency Review Committee in Emergency Medicine requires residency programs to deliver at least 5 hours of weekly didactics. Achieving at least a 70% average attendance rate per resident is required for residency program accreditation and is used as a benchmark for residency graduation in our program. We developed a web-based, asynchronous curriculum to replace 1 hour of synchronous didactics, and hypothesized that the curriculum would be feasible to implement, well received by learners, and improve conference participation. This paper describes the feasibility and learner acceptability of a longitudinal asynchronous curriculum and its impact on postgraduate year-1 (PGY-1) resident conference participation and annual in-training examination scores. Using formal curriculum design methods, we developed modules and paired assessment exercises to replace 1 hour of weekly didactics. We measured feasibility (development and implementation time and costs) and learner acceptability (measured on an anonymous survey). We compared pre- and post-intervention conference participation and in-training examination scores using a two-sample t-test. The asynchronous curriculum proved feasible to develop and implement. PGY-1 resident conference participation improved compared with the pre-intervention year (85.6% vs. 62%; 95% CI 0.177-0.295; p < 0.001). We were unable to detect a difference in in-training examination results, either in the PGY-1 group or across all residents, following the introduction of this intervention. 18/31 (58%) residents completed the post-intervention survey; 83% reported satisfaction with the curriculum changes. Strengths of the curriculum included clarity and timeliness of assignments. Weaknesses included technical difficulties with the online platform. Our curriculum is feasible to develop and implement. Despite technical difficulties, residents report high satisfaction with this new curriculum.
Among PGY-1 residents, conference participation improved compared with the prior year.
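The participation difference and confidence interval reported above can be reproduced with a standard two-proportion calculation. The sketch below, in Python, computes a Wald 95% confidence interval for the difference between two proportions; the cohort sizes (n1, n2) are hypothetical placeholders, since the abstract does not report denominators, so the resulting interval is illustrative only.

```python
from math import sqrt

def prop_diff_ci(p1, n1, p2, n2, z=1.96):
    """Wald 95% CI for the difference between two independent proportions."""
    diff = p1 - p2
    # Standard error of the difference of proportions
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff, diff - z * se, diff + z * se

# Proportions from the abstract (85.6% vs. 62% participation);
# cohort sizes of 100 are hypothetical, not reported in the abstract.
diff, lo, hi = prop_diff_ci(0.856, 100, 0.62, 100)
print(f"difference = {diff:.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
```

With the true cohort sizes, this calculation would yield the interval reported in the abstract (0.177-0.295); narrower cohorts widen the interval accordingly.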
The Milestones Passport feedback intervention was feasible and acceptable to users; however, learner satisfaction with the Milestone assessment in the ED was modest.