Background Checklists have been shown to improve performance of complex, error-prone processes. Objective To develop a checklist with the potential to reduce the likelihood of diagnostic error for patients presenting to the Emergency Room (ER) with undiagnosed conditions. Methods Participants included 15 staff ER physicians working in two large academic centers. A rapid-cycle design and evaluation process was used to develop a general checklist for high-risk situations vulnerable to diagnostic error. Physicians used the general checklist and a set of symptom-specific checklists for a period of 2 months. We conducted a mixed-methods evaluation that included interviews regarding user perceptions and quantitative assessment of resource utilization before and after checklist use. Results A general checklist was developed iteratively by obtaining feedback from users and subject-matter experts, and was trialed along with a set of symptom-specific checklists in the ER. Both the general and the symptom-specific checklists were judged to be helpful, with a slight preference for the symptom-specific lists. Checklist use commonly prompted consideration of additional diagnostic possibilities, changed the working diagnosis in approximately 10% of cases, and was anecdotally thought to be helpful in avoiding diagnostic errors. Checklist use was prompted by a variety of factors, not just diagnostic uncertainty. None of the physicians used the checklists in collaboration with the patient, despite being encouraged to do so. Checklist use did not prompt large changes in test ordering or consultation. Conclusions In the ER setting, checklists for diagnosis are helpful in considering additional diagnostic possibilities and thus have the potential to prevent diagnostic errors. Inconsistent usage and using the checklists privately, rather than with the patient, are factors that may detract from obtaining maximum benefit.
Further research is needed to optimize checklists for use in the ER, to determine how to increase usage, to evaluate the impact of checklist utilization on error rates and patient outcomes, to determine how checklist usage affects test ordering and consultation, and to compare checklists with other approaches to reducing diagnostic error.
Background While the Accreditation Council for Graduate Medical Education recommends multisource feedback (MSF) of resident performance, there is no uniformly accepted MSF tool for emergency medicine (EM) trainees, and the process of obtaining MSF in EM residencies is untested. Objective To determine the feasibility of an MSF program and evaluate the intraclass and interclass correlation of a previously reported resident professionalism evaluation, the Humanism Scale (HS). Methods To assess 10 third-year EM residents, we distributed an anonymous 9-item modified HS (EM-HS) to emergency department nursing staff, faculty physicians, and patients. The evaluators rated resident performance on a 1 to 9 scale (needs improvement to outstanding). Residents were asked to complete a self-evaluation of performance, using the same scale. Analysis Generalizability coefficients (Eρ2) were used to assess the reliability within evaluator classes. The mean score for each of the 9 questions provided by each evaluator class was calculated for each resident. Correlation coefficients were used to evaluate correlation between rater classes for each question on the EM-HS. Eρ2 and correlation values greater than 0.70 were deemed acceptable. Results EM-HSs were obtained from 44 nurses and 12 faculty physicians. The residents had an average of 13 evaluations by emergency department patients. Reliability within faculty and nurses was acceptable, with Eρ2 of 0.79 and 0.83, respectively. Interclass reliability was good between faculty and nurses. Conclusions An MSF program for EM residents is feasible. Intraclass reliability was acceptable for faculty and nurses. However, reliable feedback from patients requires a larger number of patient evaluations.