The script concordance test (SCT) is used in health professions education to assess a specific facet of clinical reasoning competence: the ability to interpret medical information under conditions of uncertainty. Grounded in established theoretical models of knowledge organization and clinical reasoning, the SCT has three key design features: (1) respondents are faced with ill-defined clinical situations and must choose between several realistic options; (2) the response format reflects the way information is processed in challenging problem-solving situations; and (3) scoring takes into account the variability of responses of experts to clinical situations. SCT scores are meant to reflect how closely respondents' ability to interpret clinical data compares with that of experienced clinicians in a given knowledge domain. A substantial body of research supports the SCT's construct validity, reliability, and feasibility across a variety of health science disciplines, and across the spectrum of health professions education from pre-clinical training to continuing professional development. In practice, its performance as an assessment tool depends on careful item development and diligent panel selection. This guide, intended as a primer for the uninitiated in SCT, will cover the basic tenets, theoretical underpinnings, and construction principles governing script concordance testing.
Medical Education 2012; 46: 357–365

Context  Current debate in medical education focuses on the nature of ‘competency-based medical education’ (CBME) and whether or not it should be adopted. Many medical schools claim to run ‘competency-based’ curricula, but the structure of their programmes can differ radically. A review of the existing CBME literature reveals that little attention has been paid to defining the concept of competence. A straightforward examination of what is meant by the term ‘competence’ is noticeably missing from the literature, despite its impact on medical training.

Objectives  This paper aims to illustrate the varying conceptions of ‘competence’ by comparing and contrasting definitions provided in the health sciences education literature and discussing their respective impacts on medical education.

Methods  A systematic review of recent publications in medical education journals published in English and French was conducted to extract definitions of competence or, if definitions were not explicitly stated, to derive the authors’ implicit conception of competence. A sample of 14 definitions from articles in the health sciences education field was studied using thematic analysis.

Results  There is agreement that competence is composed of knowledge, skills and other components. Although agreement about the nature of these other components is lacking, attitudes and values are suggested to be essential ingredients of competence. Furthermore, a clear divergence in conceptions of how a competent person utilises these components is apparent. One view specifies that competence involves selecting components according to specific situations, as required. A second view places greater emphasis on the synergy that results from the use of a combination of components in a given situation.

Conclusions  These conceptual distinctions have many implications for the way CBME is implemented. A conception of competence as the selection of components may lead to a greater emphasis, in a training setting, on the mastery of each component separately. A conception of competence as the use of a combination of components leads to greater emphasis on the synergy that results as they are deployed in clinical situations.
CONTEXT  Programmes of assessment should measure the various components of clinical competence. Clinical reasoning has been traditionally assessed using written tests and performance-based tests. The script concordance test (SCT) was developed to assess clinical data interpretation skills. A recent review of the literature examined the validity argument concerning the SCT. Our aim was to provide potential users with evidence-based recommendations on how to construct and implement an SCT.

METHODS  Our literature search was broad and included references from medical education journals not indexed in the usual databases, conference abstracts and dissertations.

RESULTS  The search yielded 848 references, of which 80 were analysed. Studies suggest that tests with around 100 items (25–30 cases), of which 25% are discarded after item analysis, should provide reliable scores. Panels with 10–20 members are needed to reach adequate precision in terms of estimated reliability. Panellists' responses can be analysed by checking for moderate variability among responses. Studies of alternative scoring methods are inconclusive, but the traditional scoring method is satisfactory. There is little evidence on how best to determine a pass/fail threshold for high-stakes examinations.

CONCLUSIONS  There is good evidence on how to construct and implement an SCT for formative purposes or medium-stakes course evaluations. Further avenues for research include examining the impact of various aspects of SCT construction and implementation on issues such as educational impact, correlations with other assessments, and validity of pass/fail decisions, particularly for high-stakes examinations.
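The "traditional" SCT scoring method referred to above is usually described as aggregate (partial-credit) scoring: for each item, an examinee's answer earns credit proportional to the number of panellists who chose that same answer, normalised so that the modal panel answer is worth full credit. The sketch below illustrates that idea in Python; the function names and the −2..+2 Likert scale are illustrative assumptions, not part of the cited review.

```python
# Minimal sketch of SCT aggregate scoring (assumption: credit for an
# answer = count of panellists choosing it / count of the modal answer).
from collections import Counter

def sct_item_scores(panel_responses):
    """Map each Likert option to a partial credit in [0, 1]."""
    counts = Counter(panel_responses)
    modal = max(counts.values())           # size of the largest agreement
    return {option: n / modal for option, n in counts.items()}

def score_examinee(panel_responses, examinee_answer):
    """Credit for one item; options no panellist chose score 0."""
    return sct_item_scores(panel_responses).get(examinee_answer, 0.0)

# Example: 10 panellists answer one item on a -2..+2 scale.
panel = [-1, 0, 0, 0, 0, 1, 1, 1, 2, 2]
print(score_examinee(panel, 0))    # modal answer -> full credit 1.0
print(score_examinee(panel, 1))    # 3 of 4 modal votes -> 0.75
print(score_examinee(panel, -2))   # no panellist chose it -> 0.0
```

An examinee's total score is then the sum (or mean) of these per-item credits across the test, which is why the review's recommendations on panel size (10–20 members) matter: small panels make the per-option counts, and hence the credit weights, unstable.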