Background
The residency selection process relies on subjective information in applications, as well as subjective assessment of those applications by reviewers. This inherent subjectivity makes residency selection prone to poor reliability among file reviewers.
Objective
We compared the interrater reliability of 2 assessment tools during file review: one rating applicant traits (eg, leadership, communication) and the other using a global rating of application elements (eg, curriculum vitae, reference letters).
Methods
Ten file reviewers were randomized into 2 groups, and each scored 7 general surgery applications from the 2019–2020 cycle. The first group used an element-based (EB) scoring tool, while the second group used a trait-based (TB) scoring tool. Feedback was collected, discrimination capacities were measured using variation in scores, and interrater reliability (IRR) was calculated using intraclass correlation (ICC) in a 2-way random effects model.
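The ICC in a 2-way random effects model can be computed from a standard two-way ANOVA decomposition of the ratings matrix (the Shrout-Fleiss ICC(2,1) for a single rater and ICC(2,k) for the average of k raters). A minimal sketch follows; the ratings matrix is an illustrative textbook example, not the study's data.

```python
# Minimal sketch: Shrout-Fleiss ICC(2,1) and ICC(2,k) from a
# two-way random-effects ANOVA decomposition.
# The ratings matrix below is illustrative only (not the study's data).

def icc_two_way(ratings):
    """ratings: list of rows (targets, eg applicants) x columns (raters)."""
    n = len(ratings)             # number of targets
    k = len(ratings[0])          # number of raters
    grand = sum(sum(r) for r in ratings) / (n * k)
    row_means = [sum(r) / k for r in ratings]
    col_means = [sum(r[j] for r in ratings) / n for j in range(k)]

    ss_total = sum((x - grand) ** 2 for r in ratings for x in r)
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)   # between targets
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)   # between raters
    ss_err = ss_total - ss_rows - ss_cols                    # residual

    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))

    icc_single = (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
    icc_average = (msr - mse) / (msr + (msc - mse) / n)      # ICC(2,k)
    return icc_single, icc_average

# Classic illustrative example: 6 targets rated by 4 judges.
data = [[9, 2, 5, 8], [6, 1, 3, 2], [8, 4, 6, 8],
        [7, 1, 2, 6], [10, 5, 6, 9], [6, 2, 4, 7]]
single, average = icc_two_way(data)
print(round(single, 2), round(average, 2))  # 0.29 0.62
```

The same decomposition is available in statistical packages (eg, pingouin's `intraclass_corr`), which also report confidence intervals.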
Results
Both tools identified the same top-ranked and bottom-ranked applicants; however, discrepancies were noted for middle-ranked applicants. The score range for the 5 middle-ranked applicants was greater with the TB tool (6.43 vs 3.80), which also demonstrated fewer tie scores. The IRR for TB scoring was superior to EB scoring (ICC [2, 5] = 0.82 vs 0.55). The TB tool required only 2 raters to achieve an ICC ≥ 0.70.
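The relationship between the number of raters and achievable reliability is conventionally described by the Spearman-Brown prophecy formula, which gives the reliability of the mean of m raters from a single-rater reliability and can be inverted to estimate how many raters reach a target ICC. A brief sketch, using a hypothetical single-rater value (0.55) rather than a figure from this study:

```python
import math

# Spearman-Brown prophecy formula: reliability of the average of m
# raters given single-rater reliability, and its inversion to find
# the rater count needed for a target ICC.
# The single-rater value used below (0.55) is hypothetical.

def spearman_brown(single_rater_icc, m):
    """Reliability of the average of m raters."""
    return m * single_rater_icc / (1 + (m - 1) * single_rater_icc)

def raters_needed(single_rater_icc, target):
    """Smallest m whose averaged reliability meets the target."""
    m = target * (1 - single_rater_icc) / (single_rater_icc * (1 - target))
    return math.ceil(m)

print(raters_needed(0.55, 0.70))             # 2
print(round(spearman_brown(0.55, 2), 2))     # 0.71
```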
Conclusions
A TB file review strategy can improve reliability compared with EB scoring and produce a greater spread of candidate scores. TB file review potentially offers programs a feasible way to optimize the process and reflect their institution's core values.
Background
The resident selection process involves the analysis of multiple data points, including letters of reference (LORs), which are inherently subjective in nature.
Objective
We assessed the frequency with which LORs use quantitative terms to describe applicants and whether the use of these terms reflects the ranking of trainees in the final selection process.
Methods
A descriptive study analyzing LORs submitted by Canadian medical graduate applicants to the University of Ottawa General Surgery Program in 2019 was completed. We collected demographic information about applicants and referees and recorded the use of preidentified quantitative descriptors (eg, best, above average). A 10% audit of the data was performed. Descriptive statistics were used to analyze the demographics of the letters as well as the frequency of use of the quantitative descriptors.
Results
Three hundred forty-three LORs for 114 applicants were analyzed. Eighty-two percent (291 of 343) of LORs used quantitative descriptors. Eighty-four percent (95 of 113) of applicants were described as above average, and 45% (51 of 113) were described as the "best" by at least 1 letter. The candidates described as the "best" ranked anywhere from second to 108th in our ranking system.
Conclusions
Most LORs use quantitative descriptors. These terms are generally positive, and while their use does discriminate between applicants, it was not helpful in the context of ranking applicants in our file review process.
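The descriptor tally described in the Methods can be sketched as a simple text-matching pass over each letter: flag a letter if it contains any preidentified quantitative descriptor. The descriptor list and sample letters below are illustrative assumptions, not the study's actual word list or data.

```python
import re

# Illustrative sketch of the descriptor tally: count letters that use
# any preidentified quantitative descriptor. The descriptor list and
# sample letters are hypothetical, not the study's.
DESCRIPTORS = ["best", "above average", "top", "outstanding"]

def uses_quantitative_descriptor(letter_text):
    """True if the letter contains any descriptor as a whole word/phrase."""
    text = letter_text.lower()
    return any(re.search(r"\b" + re.escape(d) + r"\b", text)
               for d in DESCRIPTORS)

letters = [
    "She is the best resident I have worked with.",
    "His performance was above average for his level.",
    "A reliable and pleasant team member.",
]
flagged = sum(uses_quantitative_descriptor(t) for t in letters)
print(f"{flagged} of {len(letters)} letters use a quantitative descriptor")
```

Word-boundary matching (`\b`) avoids false positives from substrings (eg, "bestow" would not count as "best").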