Background: Patient safety (PS) is influenced by factors at various levels of the healthcare system. A systems-level approach and systems thinking are therefore required to understand and improve PS. E-learning may help develop systems thinking in medical students, as case studies featuring audiovisual media can visualize systemic relationships within organizations. The goal of this quasi-experimental study was to determine whether an e-learning course can improve systems thinking, knowledge, and attitudes towards PS.

Methods: A quasi-experimental, longitudinal within-subjects design was employed. Participants were 321 third-year medical students who received online surveys before and after participating in an e-learning course on PS. Primary outcome measures were levels of systems thinking and attitudes towards PS. The secondary outcome measure was the improvement of PS-specific knowledge through the e-learning course.

Results: Levels of systems thinking improved significantly (58.72 vs. 61.27; p < .001) after the e-learning course. Students' attitudes towards patient safety improved in several dimensions: after the course, students rated the influence of fatigue on safety higher (6.23 vs. 6.42, p < .01), considered patient empowerment more important (5.16 vs. 5.93, p < .001), and more often recognized that human error is inevitable (5.75 vs. 5.97, p < .05). Knowledge of PS improved from 36.27% correct answers before the e-learning course to 76.45% after it (p < .001).

Conclusions: Our results suggest that e-learning can be used to teach PS. Attitudes towards PS improved in several dimensions. Furthermore, we were able to demonstrate that a specifically designed e-learning program can foster the development of conceptual frameworks such as systems thinking, which facilitates the understanding of complex socio-technical systems within healthcare organizations.
Background: Evaluation is an integral part of medical education. Despite the wide use of various evaluation tools, little is known about student perceptions of the purpose and desired consequences of evaluation. Such knowledge is important for interpreting evaluation results. The aims of this study were to elicit student views on the purpose of evaluation, indicators of teaching quality, evaluation tools, and possible consequences drawn from evaluation data.

Methods: This qualitative study involved 17 undergraduate medical students in Years 3 and 4 participating in 3 focus group interviews. Content analysis was conducted independently by two researchers.

Results: Evaluation was viewed as a means to facilitate improvements within medical education. Teaching quality was believed to depend on content, process, teacher and student characteristics, as well as learning outcome, with an emphasis on the latter. Students preferred online evaluations over paper-and-pencil forms and suggested circulating results among all faculty and students. Students strongly favoured allocating rewards and incentives for good teaching to individual teachers.

Conclusions: In addition to assessing structural aspects of teaching, evaluation tools need to adequately address learning outcome. The use of reliable and valid evaluation methods is a prerequisite for allocating resources to individual teachers based on evaluation results.
Background: The objective of this study was to compare two instructional methods for the curricular use of computerized virtual patients in undergraduate medical education. We aimed to investigate whether using many short, focused cases (the key feature principle) is more effective for learning clinical reasoning skills than using few long, systematic cases.

Methods: We conducted a quasi-randomized, non-blinded, controlled parallel-group intervention trial at a large medical school in Southwestern Germany. During two seminar sessions, fourth- and fifth-year medical students (n = 56) worked on the differential diagnosis of the acute abdomen. The educational tool (virtual patients) was the same, but the instructional method differed: in one trial arm, students worked on multiple short cases, with instruction focused only on important elements ("key feature arm", n = 30); in the other trial arm, students worked on few long cases, with comprehensive and systematic instruction ("systematic arm", n = 26). The overall training time was the same in both arms. The students' clinical reasoning capacity was measured with a specifically developed instrument, a script concordance test. Their motivation and the perceived effectiveness of the instruction were assessed using a structured evaluation questionnaire.

Results: On the script concordance test, with a reference score of 80 points and a standard deviation of 5 for experts, students in the key feature arm attained a mean of 57.4 points (95% confidence interval: 50.9–63.9) and those in the systematic arm 62.7 points (57.2–68.2), with Cohen's d at 0.337. The difference is statistically non-significant (p = 0.214). In the evaluation survey, students in the key feature arm indicated that they experienced more time pressure and perceived the material as more difficult.

Conclusions: In this study, powered for a medium effect, we could not provide empirical evidence for the hypothesis that key feature-based instruction on multiple short cases is superior to systematic instruction on few long cases in the curricular implementation of virtual patients. The results of the evaluation survey suggest that learners should be given enough time to work through case examples and that care should be taken to prevent cognitive overload.

Electronic supplementary material: The online version of this article (10.1186/s12909-017-1004-2) contains supplementary material, which is available to authorized users.