Background: The COVID-19 pandemic has profoundly affected assessment practices in medical education, necessitating a move away from the traditional classroom. At the same time, safeguarding academic integrity is particularly important for high-stakes medical exams. We used remote proctoring to administer a proficiency test safely and reliably for admission to the Advanced Master of General Practice (AMGP), and compared the exam results of the remote-proctored group to those of the on-site-proctored group.

Methods: A cross-sectional design was adopted with candidates applying for admission to the AMGP. We developed and applied proctoring software operating on three levels to register suspicious events: recording actions, analysing behaviour, and live supervision. We performed a Mann-Whitney U test to compare the exam results of the remote-proctored and on-site-proctored groups. To gain more insight into candidates' perceptions of proctoring, a post-test questionnaire was administered. An exploratory factor analysis was performed to explore the quantitative data, while the qualitative data were analysed thematically.

Results: In total, 472 candidates (79%) took the proficiency test using the proctoring software, while 121 (20%) took it on-site with live supervision. The results indicated that proctoring type did not influence exam results. Of the 472 remotely proctored candidates, 304 completed the post-test questionnaire. Two factors were extracted from the analysis, interpreted as candidates' appreciation of proctoring and as emotional distress caused by proctoring. Four themes identified in the thematic analysis provided further insight into candidates' emotional well-being.

Conclusions: The comparison of exam results suggests that remote proctoring can be a viable solution for administering high-stakes medical exams. With regard to candidates' educational experience, remote proctoring was met with mixed feelings. Potential privacy issues and increased test anxiety should be taken into consideration when choosing a proctoring protocol. Future research should explore the generalisability of these results using other proctoring systems, both in medical education and in other educational settings.
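The group comparison above hinges on a Mann-Whitney U test. As a minimal sketch of that kind of analysis, assuming exam scores for each proctoring condition are available as simple lists (the scores and variable names below are illustrative placeholders, not data from the study):

```python
# Hedged sketch: compare two independent groups of exam scores with a
# two-sided Mann-Whitney U test. All scores are invented placeholders.
from scipy.stats import mannwhitneyu

remote_scores = [72, 65, 80, 77, 69, 74]  # hypothetical remote-proctored results
onsite_scores = [70, 68, 79, 75, 71, 73]  # hypothetical on-site results

stat, p_value = mannwhitneyu(remote_scores, onsite_scores, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p_value:.3f}")
# A non-significant p-value would be consistent with the study's finding
# that proctoring type did not influence exam results.
```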
Background: Efficient selection of medical students for GP training plays an important role in improving healthcare quality. The aim of this study was to collect quantitative and qualitative validity evidence for a multicomponent proficiency test designed to identify students underperforming in cognitive and non-cognitive competencies prior to entering postgraduate GP training.

Methods: From 2016 to 2018, 894 medical students at four Flemish universities in Belgium registered to take a multicomponent proficiency test before admission to postgraduate GP training. Data on students were obtained from the proficiency-test scores and from traineeship mentors' narrative reports.

Results: In total, 849 students took the multicomponent proficiency test during 2016-2018. Test scores were normally distributed. Five descriptive labels, covering both cognitive and non-cognitive competencies, were extracted from the mentors' narrative reports through thematic analysis. Chi-square tests and odds ratios showed a significant association between scoring low on the proficiency test and having gaps in cognitive and non-cognitive competencies during the GP traineeship.

Conclusion: A multicomponent proficiency test can detect underperforming students prior to postgraduate GP training. Students who ranked in the lowest score quartile were more likely to be labelled as underperforming than students in the highest score quartile. A low score on the multicomponent proficiency test could therefore indicate the need for closer guidance and early remediation focusing on both cognitive and non-cognitive competencies.
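The association reported here rests on a chi-square test and an odds ratio computed over a 2x2 table (score quartile by mentor label). Below is a minimal sketch of that computation, with invented counts that do not come from the study:

```python
# Hypothetical 2x2 contingency table relating proficiency-test quartile to
# the "underperforming" label from mentors' reports; counts are invented.
import numpy as np
from scipy.stats import chi2_contingency

#                 underperforming  not underperforming
table = np.array([[30, 70],    # lowest score quartile
                  [10, 90]])   # highest score quartile

chi2, p_value, dof, expected = chi2_contingency(table)

# Odds ratio: odds of being labelled underperforming in the lowest quartile
# relative to the highest quartile.
odds_ratio = (table[0, 0] * table[1, 1]) / (table[0, 1] * table[1, 0])
print(f"chi2 = {chi2:.2f}, p = {p_value:.3f}, OR = {odds_ratio:.2f}")
```

With these placeholder counts, an odds ratio well above 1 would mirror the study's conclusion that low scorers are more likely to be labelled as underperforming.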
Background: In view of the rapidly expanding use of the CanMEDS framework, and the lack of rigorous evidence about its applicability in workplace-based medical training, further exploration is necessary before its key competencies can be accepted as accurate and reliable outcome measures for postgraduate medical training. This study therefore investigated whether the CanMEDS key competencies could be used, first, as outcome measures for assessing trainees' competence in the workplace, and second, as consistent outcome measures across the different training settings and phases of a postgraduate General Practitioner (GP) training programme.

Methods: In a three-round web-based Delphi study, a panel of experts (n = 25–43) rated on a 5-point Likert scale whether each CanMEDS key competency was feasible to assess in the workplace and whether it could be assessed consistently across training settings and phases. Comments on each key competency were encouraged. Descriptive statistics of the ratings were calculated, and panellists' comments were analysed using content analysis.

Results: Of the twenty-seven CanMEDS key competencies, consensus was not reached on six for feasibility of workplace assessment and on eleven for consistency of assessment across training settings and phases. Regarding feasibility, three of the four key competencies under the role "Leader", one of the two under "Health Advocate", one of the four under "Scholar", and one of the four under "Professional" were deemed not feasible to assess in a workplace setting. Regarding consistency, consensus was not achieved for one of the five competencies under "Medical Expert", two of the five under "Communicator", one of the three under "Collaborator", one of the two under "Health Advocate", one of the four under "Scholar", and one of the four under "Professional". No competency under the role "Leader" was deemed assessable in a consistent way across training settings and phases.

Conclusions: The findings indicate a mismatch between the original intent of the CanMEDS framework and its applicability in workplace-based assessment. Although the framework offers useful starting points, further contextualisation is required before it is implemented in workplace-based postgraduate medical training.
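For readers unfamiliar with how consensus is typically derived from Delphi ratings, the sketch below summarises 5-point Likert ratings per competency and applies an agreement threshold. The 70% cut-off, the ratings, and the competency names are illustrative assumptions, not the criteria reported in the study:

```python
# Hedged sketch of a Delphi consensus check: one column of panellist
# ratings per CanMEDS key competency. All values here are invented.
import pandas as pd

ratings = pd.DataFrame({
    "Medical Expert 1": [4, 5, 4, 4, 3, 5, 4],
    "Leader 2":         [2, 3, 2, 4, 2, 3, 2],
})

# Share of panellists rating a competency 4 ("agree") or 5 ("strongly agree").
agreement = (ratings >= 4).mean()
consensus = agreement >= 0.70  # assumed threshold, for illustration only

summary = pd.DataFrame({"median": ratings.median(),
                        "% agree": (agreement * 100).round(1),
                        "consensus": consensus})
print(summary)
```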