We read with great interest the article by Matthew et al, which explores the ability of medical students to receive feedback in a constructive manner following a one-hour training workshop.1 As fifth-year medical students in the UK, we have found that emphasis in our teaching is placed on giving feedback rather than on how to receive feedback appropriately. We therefore commend the authors for bringing this issue to light. We believe the ability to receive feedback in an engaged and constructive manner is vital for learning and for the improvement of clinical skills.

The method used by the authors is admirable; however, we feel their approach to training students in receiving feedback could be improved by incorporating a self-reflective element. Self-reflection is an essential step in receiving feedback, as it is necessary for students to implement the feedback they receive and improve their clinical practice.2 Furthermore, Gibbs highlights the importance of self-reflection upon receiving feedback: without adequate self-reflection, individuals leap to premature conclusions about what happened rather than achieving a deeper understanding of the feedback given.3 Medical schools utilise various models of self-reflection to equip students with a framework with which to reflect upon feedback. A self-reflective aspect could be incorporated by giving students a self-reflection template in the workshop, followed by a self-reflection task upon receiving their feedback.

Matthew et al correctly identify a key limitation of their work: comparing scores in the objective structured teaching exercise (OSTE) was insufficient to draw valid conclusions about the effectiveness of the workshop. The OSTE rubric used offers an immediate assessment of students' ability to receive feedback in a manner in keeping with their workshop training. However, it fails to assess whether students have processed this information and will apply it to their practice.
As previously discussed, self-reflection is vital in receiving and acting upon feedback. Criteria exist for objectively assessing the quality of self-reflection, for example the ICSE criteria from the Royal College of General Practitioners.4 We therefore suggest the use of a self-reflection exercise following each OSTE to evaluate the change in students' feedback-receiving ability.

We disagree with the authors' proposal of applying their approach to postgraduate trainees. As students progress through medical school, their engagement with feedback improves due to factors such as their increasing clinical skills and knowledge.5 Therefore, training of senior students and postgraduates in receiving
Pinilla et al explore the use of Educational Design Research to develop a Learning Management System (LMS), an online platform for self-regulated learning activities and clinical curriculum mapping.1 As fifth-year medical students in the UK, we recognise the importance of online platforms in providing a structured and comprehensive learning experience. The work of Pinilla et al is of particular relevance in the context of the COVID-19 pandemic, during which ward-based clinical learning has been limited, resulting in a greater reliance on online teaching modalities. Furthermore, centralised learning resources can bridge the discrepancies in teaching quality that students may experience in different clinical settings.

The authors' use of student satisfaction scores to evaluate the implementation of an LMS into their curriculum is a commendable approach to understanding students' views. However, we believe that assessing student satisfaction alone is insufficient to conclude that an LMS can support student learning. Johnson et al explored both student satisfaction and academic performance to assess student learning following implementation of an LMS.2 Therefore, further outcomes should be investigated to gain a better insight into the impact of an LMS on students' learning. Furthermore, students may use an LMS in different ways to support their learning, and this is valuable information to inform further development of an LMS. Back et al found that 63% of their student cohort used the LMS to prepare for exams.3 Thus, Pinilla et al might benefit from exploring in what manner the LMS is being used to supplement students' learning.

While the authors conclude that student satisfaction was improved by the implementation of the LMS, we feel the use of Likert scale-based surveys in isolation calls the credibility of this conclusion into question.
Although Likert scales are widely used in research, respondents tend to agree with statements regardless of their content, demonstrating a susceptibility to acquiescence bias.4 This is an inherent and often unavoidable flaw in their use. Moreover, Sullivan et al suggest that applying descriptive statistics, such as calculating a mean, to findings from Likert scales can result in ambiguous conclusions.5 Pinilla et al report a significant increase in overall student satisfaction from 3.9 to 4.4. However, in the context of the Likert scale, these values seem arbitrary. We therefore suggest that the results from Likert scale