Physicians who enrolled in the PCE at an early age, as well as generalist physicians, were particularly successful in establishing careers as clinician-investigators. Programs such as the PCE can help sustain the physician-investigator workforce.
Assessing residents and clinical fellows is a high-stakes activity. Effective assessment is important throughout training so that identified areas of strength and weakness can guide educational planning to optimize outcomes. Assessment has historically been underemphasized, although medical education oversight organizations have strengthened requirements in recent years. Growing acceptance of competency-based medical education and its logical extension to competency-based time-variable (CB-TV) graduate medical education (GME) further highlights the importance of implementing effective, evidence-based approaches to assessment. The Clinical Competency Committee (CCC) has emerged as a key programmatic structure in graduate medical education. In the context of launching a multi-specialty pilot of CB-TV GME in our health system, we have examined several programs' CCC processes and reviewed the relevant literature to propose enhancements to CCCs. We recommend that all CCCs fulfill three core goals, regularly applied to every GME trainee: (1) discern and describe the resident's developmental status to individualize education, (2) determine readiness for unsupervised practice, and (3) foster self-assessment ability. We integrate the literature and observations from GME program CCCs in our institutions to evaluate how current CCC processes support or undermine these goals. Obstacles and key enablers are identified. Finally, we recommend ways to achieve the stated goals: (1) assess and promote the development of competency in all trainees, not just outliers, through a shared model of assessment and competency-based advancement; (2) strengthen CCC assessment processes to determine trainee readiness for independent practice; and (3) promote trainee reflection and informed self-assessment. The importance of coaching for competency, robust workplace-based assessments, feedback, and co-production of individualized learning plans is emphasized. Individual programs and their CCCs must strengthen assessment tools and frameworks to realize the potential of competency-oriented education.
Introduction There is limited information about whether an OSCE administered during GME orientation can identify trainee communication deficits before they become evident in clinical performance evaluations. Methods Ninety-seven interns matriculating to eight residency programs in six specialties at four hospitals participated in a nine-station communication skills OSCE. Ratings were based on the 'Kalamazoo, adapted' communication skills checklist. The association between OSCE scores and subsequent intern performance evaluations was assessed by repeated-measures logistic regression, and ROC curves were generated. Results The mean OSCE score was 4.08 ± 0.27 (range 3.3–4.6). Baseline OSCE scores were associated with subsequent communication concerns recorded by faculty, based on 1,591 evaluations: a 0.1-unit decrease in the OSCE communication score was associated with 18% higher odds of being identified with a communication concern on a faculty evaluation (odds ratio 1.18, 95% CI 1.01–1.36, p = 0.034). ROC curves did not demonstrate a "cut-off" score (AUC = 0.558). Non-faculty evaluators were 3 to 5 times more likely than faculty evaluators to identify communication deficits, based on 1,900 evaluations. Conclusion Lower OSCE performance was associated with faculty communication concerns on performance evaluations; however, no "cut-off" score emerged that could identify trainees for potential early intervention. Multi-source evaluation also identified trainees with communication skills deficits.
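For readers who want to reproduce this style of analysis, the following is a minimal Python sketch. It is hypothetical: the file path and column names are invented, and the abstract does not specify the exact model, so a GEE logistic regression with an exchangeable working correlation stands in for the reported repeated-measures logistic regression.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from sklearn.metrics import roc_auc_score

# Hypothetical schema: one row per faculty evaluation.
#   intern_id  - identifies the intern (cluster for repeated measures)
#   osce_score - baseline OSCE communication score for that intern
#   concern    - 1 if the evaluation recorded a communication concern
df = pd.read_csv("evaluations.csv")

# Rescale the predictor so exp(coefficient) is the odds ratio per
# 0.1-unit DECREASE in the OSCE communication score.
df["osce_decrease"] = -df["osce_score"] / 0.1

# GEE logistic regression with an exchangeable working correlation
# accounts for repeated evaluations of the same intern.
result = smf.gee(
    "concern ~ osce_decrease",
    groups="intern_id",
    data=df,
    family=sm.families.Binomial(),
    cov_struct=sm.cov_struct.Exchangeable(),
).fit()
print(np.exp(result.params))      # odds ratios
print(np.exp(result.conf_int()))  # 95% confidence intervals

# Intern-level ROC: can the baseline score alone separate interns who
# ever drew a concern? An AUC near 0.5 means no usable cut-off exists.
per_intern = df.groupby("intern_id").agg(
    score=("osce_score", "first"),
    any_concern=("concern", "max"),
)
auc = roc_auc_score(per_intern["any_concern"], -per_intern["score"])
print(f"AUC = {auc:.3f}")
```

Run against evaluation-level data of this shape, the exponentiated coefficient is interpretable as in the abstract (odds ratio per 0.1-unit score decrease), and an AUC near 0.56 would likewise indicate that no single score threshold usefully discriminates.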
Graduate medical education (GME) and Clinical Competency Committees (CCCs) have been evolving to monitor trainee progression using competency-based medical education principles and outcomes, though evidence suggests CCCs fall short of this goal. Challenges include evaluation data that are often incomplete, insufficient, poorly aligned with performance, conflicting, or of unknown quality; CCCs also struggle to organize, analyze, visualize, and integrate data elements across sources, collection methods, contexts, and time periods, which makes advancement decisions difficult. Learning analytics (LA) have significant potential to improve competency committee decision making, yet their use is not yet commonplace. LA is the interpretation of multiple data sources gathered on trainees to assess academic progress, predict future performance, and identify potential issues to be addressed through feedback and individualized learning plans. What distinguishes LA from other educational approaches is systematic data collection and advanced digital interpretation and visualization to inform educational systems. These data are necessary to: 1) fully understand educational contexts and guide improvements; 2) advance proficiency among stakeholders to make ethical and accurate summative decisions; and 3) clearly communicate methods, findings, and actionable recommendations to a range of educational stakeholders. The ACGME released the third edition of its CCC Guidebook for Programs in 2020, and the 2021 Milestones 2.0 supplement of the Journal of Graduate Medical Education (JGME Supplement) presented important papers describing evaluation and implementation features of effective CCCs. Principles of LA underpin national GME outcomes data and training across specialties; however, little guidance currently exists on how GME programs can use LA to improve the CCC process. Here we outline recommendations for implementing learning analytics to support decision making on trainee progress in two areas: 1) Data Quality and Decision Making, and 2) Educator Development.
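As a concrete (and entirely hypothetical) illustration of the data-quality recommendation, the sketch below shows how a program might pool assessment sources, flag under-assessed competencies before a CCC meeting, and summarize per-trainee trajectories. File names, columns, and the observation threshold are assumptions for illustration, not part of any published LA specification.

```python
import pandas as pd

# Hypothetical export schema: each assessment source provides rows of
# (trainee_id, date, source, competency, rating) on a shared scale.
sources = [
    pd.read_csv("workplace_assessments.csv", parse_dates=["date"]),
    pd.read_csv("exam_scores.csv", parse_dates=["date"]),
    pd.read_csv("multisource_feedback.csv", parse_dates=["date"]),
]
data = pd.concat(sources, ignore_index=True)

# Data-quality check before the committee meets: competencies with too
# few observations cannot support a defensible advancement decision.
counts = data.groupby(["trainee_id", "competency"]).size()
print("Under-assessed areas:\n", counts[counts < 3])

# Summarize each trainee's trajectory per competency per quarter so the
# CCC reviews trends over time rather than scattered raw ratings.
data["quarter"] = data["date"].dt.to_period("Q")
trajectory = data.pivot_table(
    index=["trainee_id", "competency"],
    columns="quarter",
    values="rating",
    aggfunc="mean",
).round(1)
print(trajectory)
```

The design point is modest: most of the LA value described above comes from systematically pooling sources and surfacing gaps and trends in a reviewable form, which even a simple pipeline like this can begin to do.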
Background Multi-source evaluation has demonstrated value for trainees but is not generally provided to residency or fellowship program directors (PDs).
Background New approaches are needed to improve and destigmatize remediation in undergraduate medical education (UME). The COVID-19 pandemic magnified the need to support struggling learners to ensure competency and readiness for graduate medical education (GME). Clinical skills (CS) coaching is an underutilized approach that may mitigate the stigma of remedial learning. Methods A six-month CS coaching pilot was conducted at Harvard Medical School (HMS) as a destigmatized remedial learning environment for clerkship and post-clerkship students identified as 'at risk' based on objective structured clinical examinations (OSCEs). The pilot entailed individual and group coaching with five faculty, direct bedside observation of CS, and standardized patient encounters with video review. Strengths-based coaching principles and appreciative inquiry were emphasized. Results Twenty-three students participated in the pilot: 14 clerkship students (cohort 1) and 9 post-clerkship students (cohort 2). All clerkship students (cohort 1) demonstrated sustained improvement in CS compared to baseline across three OSCEs: at pilot close, at 6 months post-pilot, and at 21–24 months post-pilot; all currently graduating students (10/10, 100%) passed the summative OSCE, an HMS graduation requirement. All post-clerkship students (cohort 2) passed the HMS graduation OSCE (9/9, 100%). Feedback surveys were completed by 9 of 14 clerkship students (64%) and 7 of 9 post-clerkship students (78%); respondents unanimously agreed that individual coaching was "impactful to my clinical learning and practice". Faculty and leadership fully supported the pilot as a destigmatized and effective approach to remediation. Conclusion Remediation has an essential and growing role in medical schools. CS coaching for remedial learning can reduce stigma, foster a growth mindset, and support sustained progress for 'at risk' students from early clerkship through the final year. An "implementation template" with suggested tools and timelines can be locally adapted to guide CS coaching for UME remediation. The CS coaching pilot model is feasible and can be generalized to many UME programs.