An assessment centre model based on the rating of non-technical skills can produce a reliable and valid selection tool for recruitment to speciality training in anaesthesia. Early results on predictive validity are encouraging and justify further development and evaluation.
Interprofessional point of care or in situ simulation is used as a training tool in our operating theatre directorate with the aim of improving crisis behaviours. This study aimed to assess the impact of interprofessional point of care simulation on the safety culture of operating theatres. A validated Safety Attitude Questionnaire was administered to staff members before each simulation scenario and then re-administered to the same staff members after 6-12 months. Pre- and post-training Safety Attitude Questionnaire-Operating Room (SAQ-OR) scores were compared using paired sample t-tests. Analysis revealed a statistically significant perceived improvement in both safety (p < 0.001) and teamwork (p = 0.013) climate scores (components of safety culture) 6-12 months after interprofessional simulation training. A growing body of literature suggests that a positive safety culture is associated with improved patient outcomes. Our study supports the implementation of point of care simulation as a useful intervention to improve safety culture in theatres.
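The paired-sample comparison described above can be sketched as follows. The scores below are invented purely for illustration (they are not the study's data), and the t statistic is computed from first principles using only the standard library:

```python
import math
from statistics import mean, stdev

# Hypothetical SAQ-OR safety-climate scores (illustrative only): the same
# ten staff members rated before and 6-12 months after simulation training.
pre  = [62, 58, 70, 65, 55, 60, 68, 63, 57, 66]
post = [70, 65, 74, 72, 63, 66, 75, 70, 64, 73]

# Paired design: each respondent acts as their own control, so we test
# whether the mean of the per-person differences departs from zero.
diffs = [b - a for a, b in zip(pre, post)]
n = len(diffs)
t_stat = mean(diffs) / (stdev(diffs) / math.sqrt(n))

# With df = n - 1 = 9, the two-tailed 5% critical value is about 2.26;
# a |t| beyond that would be reported as p < 0.05.
print(f"t = {t_stat:.2f} on {n - 1} df")
```

In practice one would obtain the p-value from a t-distribution (e.g. `scipy.stats.ttest_rel`); the sketch above only shows how the paired statistic itself is formed.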
Summary

Non-technical skills are recognised as crucial to good anaesthetic practice. We designed and evaluated a specialty-specific tool to assess non-technical aspects of trainee performance in theatre, based on a system previously found reliable in a recruitment setting. We compared inter-rater agreement (multi-rater kappa) for live assessments in theatre with that in a selection centre and a video-based rater training exercise. Twenty-seven trainees participated in the first in-theatre assessment round and 40 in the second. Round-1 scores had poor inter-rater agreement (mean kappa = 0.20) and low reliability (generalisability coefficient G = 0.50). A subsequent assessor training exercise showed good inter-rater agreement (mean kappa = 0.79), but did not improve performance of the assessment tool when used in round 2 (mean kappa = 0.14, G = 0.42). Inter-rater agreement in two selection centres (mean kappa = 0.61 and 0.69) exceeded that found in theatre. Assessment tools that perform reliably in controlled settings may not do so in the workplace.

The Tooke report on Modernising Medical Careers [1] highlighted the need for specialty training to focus on the acquisition of excellence, rather than competence alone. Recent editorials have attempted to define how excellence in professionalism and other domains manifests in the workplace, and highlight the importance of non-technical skills [2,3]. Workplace-based assessments are an invaluable tool for assessing professional practice in a comprehensive and valid way; however, of the assessment tools currently used in the UK, only the mini Clinical Evaluation Exercise (mini-CEX) and multi-source feedback attempt to assess non-technical skills. In addition, these tools focus on the achievement of basic clinical competence and employ methods with questionable accuracy, reliability and validity [4].
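The multi-rater agreement statistic quoted above can be illustrated with Fleiss' kappa, a common multi-rater generalisation of Cohen's kappa (the paper's exact computation is not specified here, so this is a sketch of the general technique, not the authors' code). The input is a count of how many raters placed each subject in each rating category:

```python
def fleiss_kappa(ratings):
    """Fleiss' kappa for multi-rater agreement.

    ratings[i][k] = number of raters who placed subject i in category k.
    Every subject must be rated by the same number of raters.
    """
    n_subjects = len(ratings)
    n_raters = sum(ratings[0])
    n_cats = len(ratings[0])

    # Proportion of all assignments falling in each category.
    p_cat = [sum(r[k] for r in ratings) / (n_subjects * n_raters)
             for k in range(n_cats)]

    # Observed agreement for each subject: fraction of rater pairs agreeing.
    p_subj = [(sum(c * c for c in r) - n_raters)
              / (n_raters * (n_raters - 1)) for r in ratings]

    p_bar = sum(p_subj) / n_subjects          # mean observed agreement
    p_exp = sum(p * p for p in p_cat)         # chance-expected agreement
    return (p_bar - p_exp) / (1 - p_exp)

# Illustrative example: 3 raters scoring 2 subjects on 2 categories,
# with perfect agreement, giving kappa = 1.
print(fleiss_kappa([[3, 0], [0, 3]]))
```

Values near 0.2 (as in round 1) indicate agreement barely above chance, while values of 0.6-0.8 (as in the selection centres) are conventionally read as substantial agreement.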
The mini-CEX has been shown to have wide inter-rater variability that results in poor discrimination between anaesthetic trainees [5], which is exacerbated by the lack of performance benchmarking and behavioural descriptors on the marking sheet. Variable scoring leniency and the face-to-face nature of the assessment may also contribute to inaccurate scores [5,6]. Studies have established the value of multi-source feedback in certain settings [7], although concerns have been raised about victimisation by multi-source feedback raters [8]. These current workplace-based assessment tools have been described as stressful, time-consuming, artificial and difficult to organise [8], and in many specialties rely on immediate access to an electronic portfolio. Large numbers of assessors are required for each trainee to achieve a reliable assessment, suggesting that the feasibility of their use in high-stakes assessment is low [9]. In addition, students are able to select individual cases, case difficulty and specific assessors, despite evidence that the relationship between the observer and student may adversely influence the validity of the assessment [6,10,11]. None of the tools described a...