Background: Residents may benefit from simulated practice with personalized feedback to prepare for high-stakes disclosure conversations with patients after harmful errors and to meet Accreditation Council for Graduate Medical Education mandates. Ideally, feedback would come from patients who have experienced communication after medical harm, but medical researchers and leaders have found it difficult to reach this community, making this approach impractical at scale. The Video-Based Communication Assessment app is designed to engage crowdsourced laypeople to rate physician communication skills but has not been evaluated for use with medical harm scenarios.

Objective: We aimed to compare the reliability of 2 assessment groups (crowdsourced laypeople and patient advocates) in rating physician error disclosure communication skills using the Video-Based Communication Assessment app.

Methods: Internal medicine residents used the Video-Based Communication Assessment app; the case, which consisted of 3 sequential vignettes, depicted a delayed diagnosis of breast cancer. Panels of patient advocates who had experienced harmful medical error, either personally or through a family member, and crowdsourced laypeople used a 5-point scale to rate the residents' error disclosure communication skills (6 items) based on audio-recorded responses. Ratings were aggregated across items and vignettes to create a numerical communication score for each physician. We used analysis of variance to compare rating stringency, and the Pearson correlation between patient advocates' and laypeople's ratings to determine whether rank order was preserved between groups. We used generalizability theory to examine the difference in assessment reliability between patient advocates and laypeople.

Results: Internal medicine residents (n=20) used the Video-Based Communication Assessment app. All patient advocates (n=8) and 42 of the 59 crowdsourced laypeople recruited provided complete, high-quality ratings. Patient advocates rated communication more stringently than crowdsourced laypeople (patient advocates: mean 3.19, SD 0.55; laypeople: mean 3.55, SD 0.40; P<.001), but patient advocates' and crowdsourced laypeople's ratings of physicians were highly correlated (r=0.82, P<.001). Reliability for 8 raters and 6 vignettes was acceptable (patient advocates: G coefficient 0.82; crowdsourced laypeople: G coefficient 0.65). Decision studies estimated that 12 crowdsourced layperson raters and 9 vignettes would yield an acceptable G coefficient of 0.75.

Conclusions: Crowdsourced laypeople may represent a sustainable source of reliable assessments of physician error disclosure skills. For a simulated case involving delayed diagnosis of breast cancer, laypeople correctly identified high and low performers. However, at least 12 raters and 9 vignettes are required to ensure adequate reliability, and further studies are warranted. Crowdsourced laypeople rate less stringently than raters who have experienced harm. Future research should examine the value of the Video-Based Communication Assessment app for formative assessment, summative assessment, and just-in-time coaching of error disclosure communication skills.
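The decision-study logic above can be sketched numerically. For a fully crossed person × rater × vignette design, the relative G coefficient divides true-score (person) variance by person variance plus rater- and vignette-linked error variance, each shrunk by the number of raters and vignettes. The variance components below are illustrative placeholders, not the study's estimates; the sketch only shows how adding raters and vignettes raises G.

```python
def g_coefficient(var_p, var_pr, var_pv, var_res, n_raters, n_vignettes):
    """Relative G coefficient for a crossed person x rater x vignette design.

    var_p   : person (true-score) variance
    var_pr  : person-by-rater interaction variance
    var_pv  : person-by-vignette interaction variance
    var_res : residual (person-by-rater-by-vignette + error) variance
    """
    error = (var_pr / n_raters
             + var_pv / n_vignettes
             + var_res / (n_raters * n_vignettes))
    return var_p / (var_p + error)

# Hypothetical variance components chosen for illustration only:
vc = dict(var_p=0.20, var_pr=0.30, var_pv=0.25, var_res=0.60)

# Decision study: reliability for the two designs discussed in the abstract.
for n_r, n_v in [(8, 6), (12, 9)]:
    g = g_coefficient(**vc, n_raters=n_r, n_vignettes=n_v)
    print(f"{n_r} raters, {n_v} vignettes: G = {g:.2f}")
```

With these made-up components, moving from 8 raters and 6 vignettes to 12 raters and 9 vignettes increases G, mirroring the abstract's finding that a larger layperson panel and more vignettes are needed to reach an acceptable G coefficient.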
Background: Physician-delivered weight management counseling (WMC) occurs infrequently, and physicians report a lack of training and poor self-efficacy. The purpose of this study was to develop and test the Video-based Communication Assessment (VCA) for WMC training in medical residents.

Methods: This study was a mixed methods pilot conducted in 3 phases. First, we created five vignettes based on our prior data and expert feedback, then administered the vignettes via the VCA to internal medicine categorical residents (n=16) from a university medical school. Analog patients rated responses and also provided comments. We created individualized feedback reports, which residents were able to view on the VCA. Lastly, we conducted debriefing interviews with the residents (n=11) to obtain their feedback on the vignettes and personalized feedback. Interviews were transcribed, and we used thematic analysis to generate and apply codes, followed by identifying themes.

Results: Descriptive statistics were calculated, and learning points were created for the individualized feedback reports. In VCA debriefing interviews with residents, five themes emerged: (1) overall, the VCA was easy to use, helpful, and more engaging than traditional learning and assessment modes; (2) patient scenarios were similar to those encountered in the clinic, including diversity, health literacy, and different stages of change; (3) the knowledge, skills, and reminders from the VCA can be transferred to practice; (4) feedback reports were helpful, to the point, and informative, including an exemplar response showing how best to respond to the scenario; and (5) the VCA provides alternatives and practice scenarios when real-life patient situations are not always accessible.

Conclusions: We demonstrated the feasibility and acceptability of the VCA, a technology-delivered platform, for delivering WMC training to residents. The VCA exposed residents to diverse patient experiences and provided potential opportunities to tailor providers' responses to sociological and cultural factors in WMC scenarios. Future work will examine the effect of the VCA on WMC in actual clinical practice.
Background: US residents require practice and feedback to meet Accreditation Council for Graduate Medical Education mandates and patient expectations for effective communication after harmful errors. Current instructional approaches rely heavily on lectures, rarely provide individualized feedback to residents about communication skills, and may not ensure that residents acquire the skills desired by patients. The Video-based Communication Assessment (VCA) app is a novel tool for simulating communication scenarios for practice and obtaining crowdsourced assessments of, and feedback on, physicians' communication skills. We previously established that crowdsourced laypeople can reliably assess residents' error disclosure skills with the VCA app. However, its efficacy for error disclosure training has not been tested.

Objective: We aimed to evaluate the efficacy of VCA practice and feedback as a stand-alone intervention for the development of residents' error disclosure skills.

Methods: We conducted a pre-post study in 2020 with pathology, obstetrics and gynecology, and internal medicine residents at an academic medical center in the United States. At baseline, residents each completed 2 specialty-specific VCA cases depicting medical errors. Audio responses were rated by at least 8 crowdsourced laypeople using 6 items on a 5-point scale. At 4 weeks, residents received numerical and written feedback derived from layperson ratings and then completed 2 additional cases. Residents were randomly assigned cases at the baseline and postfeedback assessments to avoid order effects. Ratings were aggregated to create overall assessment scores for each resident at baseline and after feedback. Residents completed a survey of demographic characteristics. We used a 2×3 split-plot ANOVA to test the effects of time (pre-post) and specialty on communication ratings.

Results: In total, 48 residents completed 2 cases at time 1, received a feedback report at 4 weeks, and completed 2 more cases. The mean ratings of residents' communication were higher at time 2 than at time 1 (3.75 vs 3.53; P<.001). Residents with prior error disclosure experience performed better at time 1 than those without such experience (ratings: mean 3.63 vs mean 3.46; P=.02). No differences in communication ratings based on specialty or years in training were detected. Residents' communication was rated higher for cases featuring angry patients than for those featuring sad patients (mean 3.69 vs mean 3.58; P=.01). Fewer than half of all residents (27/62, 44%) reported prior experience with disclosing medical harm to patients; experience differed significantly among specialties (P<.001) and was lowest for pathology (1/17, 6%).

Conclusions: Residents at all training levels can potentially improve error disclosure skills with VCA practice and feedback. Error disclosure curricula should prepare residents for responding to various patient affects. Simulated error disclosure may particularly benefit trainees in diagnostic specialties, such as pathology, with infrequent real-life error disclosure practice opportunities. Future research should examine the effectiveness, feasibility, and acceptability of the VCA within a longitudinal error disclosure curriculum.