To achieve truly effective feedback, the health professions must nurture recipient reflection-in-action, building on self-monitoring informed by external feedback. An integrated approach must be developed to support a feedback culture. Early training and experience, such as peer feedback, may over time support the required cultural change. Opportunities to provide feedback must not be missed, including those to impart potentially powerful feedback from high-stakes assessments. Feedback must be conceptualised as a supported sequential process rather than a series of unrelated events. Only such a sustained approach will maximise its effect.
Objective To investigate the literature for evidence that workplace based assessment affects doctors' education and performance. Design Systematic review. Data sources The primary data sources were the databases Journals@Ovid, Medline, Embase, CINAHL, PsycINFO, and ERIC. Evidence based reviews (Bandolier, Cochrane Library, DARE, HTA Database, and NHS EED) were accessed and searched via the Health Information Resources website. Reference lists of relevant studies and bibliographies of review articles were also searched. Review methods Studies of any design that attempted to evaluate either the educational impact of workplace based assessment, or the effect of workplace based assessment on doctors' performance, were included. Studies were excluded if the sampled population was non-medical or the study was performed with medical students. Review articles, commentaries, and letters were also excluded. The final exclusion criterion was the use of simulated patients or models rather than real life clinical encounters. Results Sixteen studies were included. Fifteen of these were non-comparative descriptive or observational studies; the other was a randomised controlled trial. Study quality was mixed. Eight studies examined multisource feedback with mixed results; most doctors felt that multisource feedback had educational value, although the evidence for practice change was conflicting. Some junior doctors and surgeons displayed little willingness to change in response to multisource feedback, whereas family physicians might be more prepared to initiate change. Performance changes were more likely to occur when feedback was credible and accurate or when coaching was provided to help subjects identify their strengths and weaknesses. 
Four studies examined the mini-clinical evaluation exercise, one looked at direct observation of procedural skills, and three were concerned with multiple assessment methods: all these studies reported positive results for the educational impact of workplace based assessment tools. However, there was no objective evidence of improved performance with these tools. Conclusions Considering the emphasis placed on workplace based assessment as a method of formative performance assessment, there are few published articles exploring its impact on doctors' education and performance. This review shows that multisource feedback can lead to performance improvement, although individual factors, the context of the feedback, and the presence of facilitation have a profound effect on the response. There is no evidence that alternative workplace based assessment tools (mini-clinical evaluation exercise, direct observation of procedural skills, and case based discussion) lead to improvement in performance, although subjective reports on their educational impact are positive.
Medical Education 2010: 44: 449–458 Context Medical education in the UK has recently undergone radical reform. Tomorrow's Doctors has prescribed undergraduate curriculum change and the Foundation Programme has overhauled postgraduate education. Objectives This study explored the experiences of junior doctors during their first year of clinical practice. In particular, the study sought to gain an understanding of how junior doctors experienced the transition from the role of student to that of practising doctor and how well their medical school education had prepared them for this. Methods The study used qualitative methods comprising semi-structured interviews and audio diary recordings with newly qualified doctors based at the Peninsula Foundation School in the UK. Purposive sampling was used and 31 of 186 newly qualified doctors self-selected from five hospital sites. All 31 participants were interviewed once and 17 were interviewed twice during the year. Ten of the participants also kept audio diaries. Interview and audio diary data were transcribed verbatim and thematically analysed with the aid of a qualitative data analysis software package. Results The findings show that, despite recent curriculum reforms, most participants still found the transition stressful. Dealing with their newly gained responsibility, managing uncertainty, working in multi-professional teams, experiencing the sudden death of patients and feeling unsupported were important themes. However, the stress of transition was reduced by the level of clinical experience gained in the undergraduate years. Conclusions Medical schools need to ensure that students are provided with early exposure to clinical environments which allow for continuing 'meaningful' contact with patients and increasing opportunities to 'act up' to the role of junior doctor, even as students.
Patient safety guidelines present a major challenge to achieving this, although with adequate supervision the two aims are not mutually exclusive. Further support and supervision should be made available to junior doctors in situations where they are dealing with the death of a patient and on surgical placements.
Objective To determine whether a multisource feedback questionnaire, SPRAT (Sheffield peer review assessment tool), is a feasible and reliable assessment method to inform the record of in-training assessment for paediatric senior house officers and specialist registrars. Design Trainees' clinical performance was evaluated using SPRAT sent to clinical colleagues of their choosing. Responses were analysed to determine variables that affected ratings and their measurement characteristics. Setting Three tertiary hospitals and five secondary hospitals across a UK deanery. Participants 112 paediatric senior house officers and middle grades. Main outcome measures 95% confidence intervals for mean ratings; linear and hierarchical regression to explore potential biasing factors; time needed for the process per doctor. Results 20 middle grades and 92 senior house officers were assessed using SPRAT to inform their record of in-training assessment; 921/1120 (82%) of their proposed raters completed a SPRAT form. As a group, specialist registrars (mean 5.22, SD 0.34) scored significantly higher (t = − 4.765) than did senior house officers (mean 4.81, SD 0.35) (P < 0.001). The grade of the doctor accounted for 7.6% of the variation in the mean ratings. The hierarchical regression showed that only 3.4% of the variation in the means could be additionally attributed to three main factors (occupation of rater, length of working relationship, and environment in which the relationship took place) when the doctor's grade was controlled for (significant F change < 0.001). 93 (83%) of the doctors in this study would have needed only four raters to achieve a reliable score if the intent was to determine if they were satisfactory. The mean time taken to complete the questionnaire by a rater was six minutes. Just over an hour of administrative time is needed for each doctor. 
Conclusions SPRAT seems to be a valid way of assessing large numbers of doctors to support quality assurance procedures for training programmes. The feedback from SPRAT can also be used to inform personal development planning and focus quality improvements.
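The finding that most doctors would have needed only four raters rests on a standard piece of reliability arithmetic: the 95% confidence interval around a doctor's mean rating narrows with the square root of the number of raters, and a score is decision-reliable once that interval excludes the satisfactory/unsatisfactory cut-off. The sketch below illustrates this logic only; the rater standard deviation (0.8) and the threshold (4.0 on SPRAT's 6-point scale) are illustrative assumptions, not figures from the study.

```python
import math

def ci95_half_width(rater_sd, n_raters):
    """Half-width of the 95% CI around a mean rating (normal approximation).

    rater_sd is the between-rater standard deviation for one doctor
    (an assumed value here, not reported in the SPRAT paper).
    """
    return 1.96 * rater_sd / math.sqrt(n_raters)

def raters_needed(mean_rating, rater_sd, threshold):
    """Smallest number of raters whose 95% CI excludes the pass threshold."""
    n = 1
    while abs(mean_rating - threshold) <= ci95_half_width(rater_sd, n):
        n += 1
    return n

# Hypothetical doctor: mean rating 4.8, assumed rater SD 0.8,
# assumed "satisfactory" cut-off of 4.0 on the 6-point scale.
print(raters_needed(4.8, 0.8, 4.0))  # -> 4
```

Under these assumed values, four raters suffice, consistent with the study's observation; a doctor whose mean rating sits closer to the threshold would need more raters before a confident judgement could be made.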
BACKGROUND The assessment of clinical procedural skills has traditionally focused on technical elements alone. However, in real practice, clinicians are expected to integrate technical with communication and other professional skills. We describe an integrated procedural performance instrument (IPPI), in which clinicians are assessed on 12 clinical procedures in a simulated clinical setting that combines simulated patients (SPs) with inanimate models or items of medical equipment. Candidates are observed remotely by assessors, whose ratings are fed back to the clinician within 24 hours of the assessment. This paper describes the feasibility of IPPI. RESULTS A full-scale IPPI and 2 pilot studies with trainee and qualified health care professionals have yielded an extensive data set, including 585 scenario evaluations from candidates, 60 from clinical assessors and 31 from simulated patients (SPs). Interview and questionnaire data showed that, for the majority of candidates, IPPI provided a powerful and valuable learning experience. Realism was rated highly. Remote and real-time assessment worked effectively, although for some procedures limited camera resolution affected observation of fine details. DISCUSSION IPPI offers an innovative approach to assessing clinical procedural skills. Although resource-intensive, it has the potential to provide insight into an individual's performance over a spectrum of clinical scenarios at no risk to patient safety. Additional benefits of IPPI include real-time assessment by experts (allowing remote rating by external examiners) as well as the provision of feedback from simulated patients.
As part of an assessment programme, mini-PAT appears to provide a valid way of collating colleague opinions to help reliably assess Foundation trainees.
Medical educators' social cohesion is threatened by their sense that educators are poor relations compared with scientists and clinicians. While medical educators' identities may be in crisis, they are also changing, and this change is needed for medical education, medical education research, the practice of medicine and, ultimately, patient care.
There is increasing evidence that multisource feedback (MSF) assesses two generic traits, clinical care and psychosocial skills. The validity of MSF is threatened by systematic bias, namely leniency bias and the seniority of assessors. Unregulated self-selection of assessors needs to end.