Rationale, aims, and objectives
Programmatic assessment has been identified as a system-oriented approach to achieving the multiple purposes of assessment within Competency-Based Medical Education (CBME): formative, summative, and program improvement. While there are well-established principles for designing and evaluating programs of assessment, few studies illustrate and critically interpret what a system of programmatic assessment looks like in practice. This study aims to use systems thinking and the 'two communities' metaphor to interpret a model of programmatic assessment and to identify challenges and opportunities in its operationalization.

Method
An interpretive case study was used to investigate how programmatic assessment is being operationalized within one competency-based residency program at a Canadian university. Qualitative data were collected from residents, faculty, and program leadership via semi-structured group and individual interviews conducted nine months after CBME implementation. Data were analyzed using a combination of data-based inductive analysis and theory-derived deductive analysis.

Results
In this model, Academic Advisors had a central role in brokering assessment data between the communities responsible for producing and using residents' performance information for decision making (i.e., formative, summative/evaluative, and program improvement). As system intermediaries, Academic Advisors were in a privileged position to see how the parts of the assessment system contributed to the functioning of the whole and could identify which system components were not functioning as intended. Challenges were identified with the documentation of residents' performance information (i.e., system inputs); the use of low-stakes formative assessments to inform high-stakes evaluative judgments about the achievement of competence standards; and gaps in feedback mechanisms for closing learning loops.

Conclusions
The findings of this research suggest that program stakeholders can benefit from a systems perspective on how their assessment practices contribute to the efficacy of the system as a whole. Academic Advisors are well positioned to support educational development efforts focused on overcoming challenges with operationalizing programmatic assessment.
Background
Grounded in a community-based participatory research (CBPR) framework, the PROUD (Participatory Research in Ottawa: Understanding Drugs) Study aims to better understand HIV risk and prevalence among people who use drugs in Ottawa, Ontario. The purpose of this paper is to describe the establishment of the PROUD research partnership.

Methods
PROUD relies on peers' expertise, stemming from their lived experience with drug use, to guide all aspects of this CBPR project. A Community Advisory Committee (CAC), comprising eight people with lived experience, three allies, and three ex-officio members, has been meeting since May 2012 to oversee all aspects of the project. Eleven medical students from the University of Ottawa were recruited to work alongside the committee. Training was provided on CBPR; HIV and harm reduction; and administering HIV point-of-care (POC) tests, so that the CAC could play a key role in research design, data collection, analysis, and knowledge translation activities.

Results
From March to December 2013, the study enrolled 858 participants who use drugs (defined as anyone who had injected or smoked drugs other than marijuana in the previous 12 months) in a prospective cohort study. Participants completed a one-time questionnaire administered by a trained peer or medical student, who then administered an HIV POC test. Recruitment, interviews, and testing occurred at both the fixed research site and various community settings across Ottawa. With consent, prospective follow-up will occur through linkages to health care records available through the Institute for Clinical Evaluative Sciences.

Conclusion
The PROUD Study meaningfully engaged the communities of people who use drugs in Ottawa through the formation of the CAC, the training of peers as community-based researchers, and knowledge translation and exchange (KTE) integrated throughout the research project. This project successfully supported skill development across the team and empowered people with drug use experience to take on leadership roles, ensuring that this research process will promote change at the local level. The CBPR methods developed in this study provide important insights for future research projects with people who use drugs in other settings.
Background
Simulation is increasingly being used in postgraduate medical education as an opportunity for competency assessment. However, there is limited direct evidence supporting performance in the simulation lab as a surrogate for workplace-based clinical performance on non-procedural tasks such as resuscitation in the emergency department (ED). We sought to directly compare entrustment scoring of resident performance in the simulation environment with clinical performance in the ED.

Methods
The resuscitation assessment tool (RAT) was derived from the previously implemented and studied Queen's simulation assessment tool (QSAT) via a modified expert review process. The RAT uses an anchored global assessment scale to generate an entrustment score and narrative comments. Emergency medicine (EM) residents were assessed using the RAT on cases in simulation-based examinations and during resuscitation cases in the ED from July 2016 to June 2017. Residents' mean entrustment scores were compared using Pearson's correlation coefficient to determine the relationship between entrustment in simulation cases and in the ED. An inductive thematic analysis of written commentary was conducted to compare workplace-based with simulation-based feedback.

Results
There was a statistically significant, moderate, positive correlation between mean entrustment scores in the simulated and workplace-based settings (r = 0.630, n = 17, p < 0.01). Further, qualitative analysis showed that themes of overall management and leadership were more common in workplace narratives, while more specific task-based feedback predominated in the simulation-based assessment. Narratives in both settings frequently commented on communication skills.
Conclusions
In this single-center study with a limited sample size, assessment of residents using entrustment scoring in simulation settings showed a moderate positive correlation with assessment of resuscitation competence in the workplace. This study suggests that resuscitation performance in simulation settings may be an indicator of competence in the clinical setting; however, multiple factors contribute to this complicated and imperfect relationship. It is imperative to consider narrative comments in supporting the rationale for numerical entrustment scores in both settings, and to include both simulation-based and workplace-based assessment in high-stakes progression decisions.

Electronic supplementary material
The online version of this article (10.1186/s41077-019-0099-4) contains supplementary material, which is available to authorized users.
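As context for the statistic reported in this abstract (r = 0.630), Pearson's correlation coefficient for two paired lists of entrustment scores can be computed directly. The sketch below is a minimal, standard-library-only illustration using purely hypothetical scores, not the study's data:

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation between paired score lists."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Covariance numerator and the two standard-deviation denominators.
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Hypothetical mean entrustment scores (simulation vs. ED) for five residents.
sim_scores = [3.0, 3.5, 4.0, 4.5, 5.0]
ed_scores = [2.8, 3.6, 3.9, 4.2, 4.9]
print(round(pearson_r(sim_scores, ed_scores), 3))
```

In practice, a significance test alongside r (as the study reports with p < 0.01) is usually obtained from a statistics package such as `scipy.stats.pearsonr` rather than computed by hand.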