Introduction: Patient care simulations (PCS) and objective structured clinical examinations (OSCE) allow pharmacy students to practice communication. Feedback can help improve communication, but its impact over time is not well understood.

Objective: This study investigated the impact of a feedback strategy on pharmacy students' communication skills across three PCS sessions. It also evaluated the alignment between students' self-scoring and faculty scoring.

Methods: Pharmacy students participated in three sessions (PCS1, OSCE, and PCS3) focused on the affective domain. Individualized numerical and narrative feedback was provided to students on their performance after PCS1. Students' communication was scored by faculty graders using an 18-point validated rubric, and students self-scored their communication with the same rubric. Faculty and student scores were compared using a linear mixed effects model, and an intraclass correlation coefficient was used to measure agreement.

Results: In PCS1, 82 students averaged 15.41 ± 2.14 for faculty scores and 16.06 ± 1.55 for self-graded scores (0.36, p < 0.001). In the OSCE, 81 students averaged 15.93 ± 1.86 for faculty scores and 16.45 ± 1.35 for self-graded scores (0.1, p = 0.18). In PCS3, 74 students averaged 15.22 ± 2.15 for faculty scores and 16.25 ± 1.44 for self-graded scores (0.14, p = 0.08). A correlation between faculty and student scores was seen for PCS1. Over the three sessions, no significant differences were found among student self-graded scores (p = 0.08), but faculty scores did differ, with the OSCE scored higher than PCS3 (p < 0.01). Many students whose faculty-graded scores fell more than 1 standard deviation below the mean scored themselves higher than faculty did.

Conclusion: Feedback after PCS1 did not significantly improve scores. Students with low faculty-graded scores frequently scored themselves higher, indicating low self-awareness.
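The agreement statistic named in the Methods above, an intraclass correlation coefficient over paired faculty and student scores, can be sketched as follows. This is a minimal illustration assuming the two-way random-effects, absolute-agreement, single-rater form ICC(2,1); the abstract does not specify which ICC variant the authors used, and the function name and example data here are hypothetical.

```python
import numpy as np

def icc_2_1(scores):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    scores: (n_subjects, k_raters) array, e.g. one row per student with
    columns (faculty score, self score).
    """
    x = np.asarray(scores, dtype=float)
    n, k = x.shape
    grand = x.mean()
    # Sums of squares for subjects (rows), raters (columns), and residual.
    ss_rows = k * ((x.mean(axis=1) - grand) ** 2).sum()
    ss_cols = n * ((x.mean(axis=0) - grand) ** 2).sum()
    ss_err = ((x - grand) ** 2).sum() - ss_rows - ss_cols
    # Mean squares from the two-way ANOVA decomposition.
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )
```

With perfectly matching rater columns the statistic is 1; systematic disagreement between faculty and self scores pulls it toward 0.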
Background: Health-care practitioners have opportunities to talk with clients about unhealthy behaviors, and approaching these conversations effectively requires skill. Teaching health-care students to communicate empathetically with clients should therefore promote effective client-practitioner conversations about health behavior change. The primary objective of this pilot trial was to assess the feasibility, acceptability, and appropriateness of a theoretically informed intervention designed to improve perspective-taking.

Methods: For inclusion in this randomized mixed-methods parallel two-arm trial, participants needed to be a student at the investigators' Canadian university and have completed course content on behavior change communication. Using a 1:1 allocation ratio, participants in Respiratory, Physical, and Occupational Therapy; Nurse Practitioner; and Kinesiology programs were randomly assigned to full or partial intervention conditions. Full intervention participants completed a perspective-taking workshop and practiced perspective-taking prior to an in-lab dialogue with a client-actor (masked to condition) about physical activity. Partial intervention participants received the workshop after the dialogue. We assessed feasibility and appropriateness by comparing recruitment rates, protocol adherence, and psychometric outcomes to predefined criteria. We assessed acceptability (a secondary outcome) by analyzing exit interviews.

Results: We screened and randomized 163 participants (82 full intervention; 81 partial intervention). We fell slightly short of our recruitment success criterion (10-15 participants per program) when 2/50 Occupational Therapy students participated. We met some but not all of our protocol criteria: some full intervention participants did not practice perspective-taking before the dialogue because they did not see anyone during the practice period or did not have a practice opportunity. Psychometric outcomes met the criteria, except for one measure that demonstrated ceiling effects and low reliability (Cronbach's alpha < .70). There were no adverse events related to participation.

Conclusions: The intervention should be largely feasible, appropriate, and acceptable to deliver. We suggest changes that are large enough to warrant conducting another pilot study, and we outline recommended improvements applicable to researchers and educators interested in recruitment, adherence to home practice, and online uptake of the intervention.

Trial registration: This trial was registered retrospectively on November 8, 2023, at https://clinicaltrials.gov/study/NCT06123507.
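The reliability threshold cited in the Results above (Cronbach's alpha < .70) refers to the standard internal-consistency statistic, alpha = k/(k-1) * (1 - sum of item variances / variance of the total score). A minimal sketch follows; the function name and toy data are hypothetical, since the abstract does not name the measure or its items.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) matrix of item scores."""
    x = np.asarray(items, dtype=float)
    k = x.shape[1]
    # Sample variance of each item, and of each respondent's total score.
    item_vars = x.var(axis=0, ddof=1)
    total_var = x.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)
```

Alpha approaches 1 when items rise and fall together across respondents; values below the conventional .70 cutoff, as reported for one measure here, suggest the items are not measuring a single construct consistently.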