Introduction: Reliable team assessment has become a priority because of the growing emphasis on interprofessional education and team-based care. Objective rating scales are needed to evaluate interprofessional student teams and individuals and to provide real-time feedback. Methods: In response to the need for behavioral rating scales, we modified the McMaster-Ottawa Scale from a 9-point to a 3-point scale and added descriptive behavioral anchors to define three levels of competency (i.e., below, at, and above expected). This modification is intended to support consistent rating of individuals and teams in patient settings. We then developed a demonstration video using actors representing four professions to illustrate the three levels of performance within a team. Our faculty rater tool, consisting of the modified scale and the video, is designed to provide standardized ratings in interprofessional educational settings that involve patient care. Results: We conducted training sessions with 40 faculty members from seven professions (medicine, dentistry, occupational therapy, nursing, pharmacy, physician assistant, and psychology) over a 2-year period. Immediately after each training session, two trained faculty observers rated interprofessional student teams as they took histories and performed assessments on standardized patients. Observer scores were compared with one another and with standard expert ratings of the same teams. Trained observer ratings were consistent across pairs. Observer training with the tool can be completed in 60-90 minutes. Discussion: Results of our implementation of the faculty rater tool confirm that the modified McMaster-Ottawa Scale is feasible to administer in clinical settings and that the demonstration video can be readily adopted for standardizing observer ratings.
Objective: To examine concordance between in-room and video faculty ratings of interprofessional behaviors in a standardized team objective structured clinical encounter (TOSCE). Methods: In-room and video-rated student performance scores in an interprofessional 2-station TOSCE were compared using a validated 3-point scale assessing six team competencies. Scores for each student were derived from two in-room faculty members and one faculty member who viewed video recordings of the same team encounter from equivalent visual vantage points. All faculty members received the same rigorous rater training. Paired-sample t-tests were used to compare individual student scores. McNemar's test was used to compare student pass/fail rates to determine the impact of rating modality on performance scores. Results: In-room and video student scores were captured for 12 novice teams (47 students), with each team consisting of students from four professions (medicine, pharmacy, physician assistant, nursing). Video ratings were consistently lower for all competencies and significantly lower for the competencies of roles and responsibilities and conflict management. Using a passing criterion of an average score of 2 out of 3 on at least one station, 56% of students passed when rated in-room compared with 20% when rated by video. Discussion: In-room and video ratings are not equivalent. Educators should consider modality-based scoring discrepancies when assessing team behaviors.