In this article, we outline criteria for good assessment that include: (1) validity or coherence, (2) reproducibility or consistency, (3) equivalence, (4) feasibility, (5) educational effect, (6) catalytic effect, and (7) acceptability. Many of the criteria have been described before and we continue to support their importance here. However, we place particular emphasis on the catalytic effect of the assessment, which is whether the assessment provides results and feedback in a fashion that creates, enhances, and supports education. These criteria do not apply equally well to all situations. Consequently, we discuss how the purpose of the test (summative versus formative) and the perspectives of stakeholders (examinees, patients, teachers-educational institutions, healthcare system, and regulators) influence the importance of the criteria. Finally, we offer a series of practice points as well as next steps that should be taken with the criteria. Specifically, we recommend that the criteria be expanded or modified to take account of: (1) the perspectives of patients and the public, (2) the intimate relationship between assessment, feedback, and continued learning, (3) systems of assessment, and (4) accreditation systems.
Abstract: Objectives: Doctors make many transitions while they are training and throughout their ensuing careers. Although studies have shown that transitions in other high-risk professions, such as aviation, are linked to increased risk in the form of adverse outcomes, the effects of such changes on doctors' performance, and the consequent implications for patient safety, have been under-researched. The purpose of this project was to investigate the effects of transitions upon medical performance. Methods: The project focused on the inter-relationships between doctors and the complex work settings into which they were transitioning. To this end, a 'collective' case study of doctors was designed. Key transitions for Foundation Year and Specialist Trainee doctors were studied. Four levels of the case were examined: the regulatory and policy context; employer requirements; the clinical teams in which doctors worked; and the doctors themselves. Data collection included interviews, observations and desk-based research. Results: We identified a number of problems with doctors' transitions, all of which can adversely affect performance. A) Transitions are regulated but not systematically monitored. B) Actual practice (as observed and reported) was determined much more by situational and contextual factors than by the formal (regulatory and management) frameworks. C) Trainees' and health professionals' accounts of their actual experience of work showed how performance depends on the local learning environment. D) We found that the increased regulation of clinical activity through protocols and care pathways helps trainees' performance, whilst the less regulated aspects of work, such as rotas, induction and multiple transitions within rotations, can impede the transition.
Conclusions: Transitions may be reframed as critically intensive learning periods (CILPs) in which doctors engage with the particularities of the setting and establish working relationships with doctors and other professionals. Institutions and wards have their own learning cultures, which may or may not recognise that transitions are CILPs. The extent to which these cultures take account of transitions as CILPs will contribute to the performance of new doctors. There are therefore implications for practice, and for policy, regulation and research.
Practice points: It is important always to evaluate the quality of a high-stakes assessment, such as an OSCE, using a range of appropriate metrics. When judging the quality of an OSCE, it is very important to employ more than one metric in order to gain an all-round view of assessment quality. Abstract: With the increasing use of criterion-based assessment techniques in both undergraduate and postgraduate healthcare programmes, there is a consequent need to ensure the quality and rigour of these assessments. The obvious question for those responsible for delivering assessment is how this 'quality' is measured, and what mechanisms might allow improvements in assessment quality over time to be demonstrated. Whilst a small base of literature exists, few papers give more than one or two metrics as measures of quality in OSCEs. In this guide, aimed at assessment practitioners, the authors review the metrics that are available for measuring quality, indicate how a rounded picture of OSCE assessment quality may be constructed by using a variety of such measures, and consider which characteristics of the OSCE are appropriately judged by which measure(s). The authors discuss quality issues both at the individual station level and across the complete clinical assessment as a whole, using a series of 'worked examples' drawn from OSCE data sets from the authors' institution.
Objectives There is increasing emphasis on encouraging more active involvement of patients in medical education. This is based on the recognition of patients as ‘experts’ in their own medical conditions and may help to enhance student experiences of real‐world medicine. This systematic review provides a summary of the evidence for the role and effectiveness of real patient involvement in medical education. Methods MEDLINE, EMBASE, ERIC, PsychINFO, Sociological Abstracts and CINAHL were searched from the start of the databases to July 2007. Three key journals and the reference lists of existing reviews were also searched. Articles published in English and reporting primary empirical research on the involvement of real patients in medical education were included. Findings were synthesised narratively and structured to address the research questions. Results A total of 47 articles were included in the review. The majority of studies reported patients in the role of teachers only; others described patient involvement in assessment or curriculum development, or in combined roles. Patient involvement was recommended in order to bring the patient voice into education. There were several examples of how to recruit and train patients to perform an educational role. The effectiveness of patient involvement was measured by evaluation studies, which reported improvements in skills. Conclusions There was limited evidence of the long‐term effectiveness of patient involvement, and issues of ethics, psychological impact and influence on education policy were poorly explored. Future studies should address these issues and should explore the practicalities of sustaining such educational programmes within medical schools.