The script concordance test (SCT) is used in health professions education to assess a specific facet of clinical reasoning competence: the ability to interpret medical information under conditions of uncertainty. Grounded in established theoretical models of knowledge organization and clinical reasoning, the SCT has three key design features: (1) respondents are faced with ill-defined clinical situations and must choose between several realistic options; (2) the response format reflects the way information is processed in challenging problem-solving situations; and (3) scoring takes into account the variability of experts' responses to clinical situations. SCT scores are meant to reflect how closely respondents' ability to interpret clinical data compares with that of experienced clinicians in a given knowledge domain. A substantial body of research supports the SCT's construct validity, reliability, and feasibility across a variety of health science disciplines, and across the spectrum of health professions education from pre-clinical training to continuing professional development. In practice, its performance as an assessment tool depends on careful item development and diligent panel selection. This guide, intended as a primer for readers new to the SCT, covers its basic tenets, theoretical underpinnings, and the principles governing its construction.
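The scoring principle in feature (3) can be illustrated with a short sketch. Under the traditional aggregate scoring method, each response option earns credit equal to the number of panellists who chose it divided by the count of the modal (most frequent) answer, so the modal answer earns full credit and minority answers earn partial credit. The panel data and function names below are illustrative, not drawn from any published panel.

```python
from collections import Counter

def item_credits(panel_answers):
    """Derive per-option credit for one SCT item from panel responses.

    Each option's credit is its response count divided by the modal
    answer's count, so the modal answer scores 1.0 and minority answers
    score proportionally less.
    """
    counts = Counter(panel_answers)
    modal = max(counts.values())
    return {option: n / modal for option, n in counts.items()}

def score_response(panel_answers, answer):
    """Score a respondent's answer; options no panellist chose earn 0."""
    return item_credits(panel_answers).get(answer, 0.0)

# Hypothetical panel of 10 experts answering one item on a -2..+2 scale.
panel = [-1, 0, 0, 0, 0, 0, 1, 1, 1, 2]
credits = item_credits(panel)
# Modal answer (0, chosen by 5 of 10) earns full credit: credits[0] == 1.0
# A minority answer (1, chosen by 3) earns partial credit: credits[1] == 0.6
```

Because credit is anchored to the modal answer rather than to a single keyed response, the variability of expert opinion is built directly into the score, which is the defining contrast with single-best-answer formats.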
Medical Education 2012: 46: 357–365 Context Current debate in medical education focuses on the nature of ‘competency‐based medical education’ (CBME) and whether or not it should be adopted. Many medical schools claim to run ‘competency‐based’ curricula, but the structure of their programmes can differ radically. A review of the existing CBME literature reveals that little attention has been paid to defining the concept of competence. A straightforward examination of what is meant by the term ‘competence’ is noticeably missing from the literature, despite its impact on medical training. Objectives This paper aims to illustrate the varying conceptions of ‘competence’ by comparing and contrasting definitions provided in the health sciences education literature and discussing their respective impacts on medical education. Methods A systematic review of recent publications in medical education journals published in English and French was conducted to extract definitions of competence or, if definitions were not explicitly stated, to derive the authors’ implicit conception of competence. A sample of 14 definitions from articles in the health sciences education field was studied using thematic analysis. Results There is agreement that competence is composed of knowledge, skills and other components. Although agreement about the nature of these other components is lacking, attitudes and values are suggested to be essential ingredients of competence. Furthermore, a clear divergence in conceptions of how a competent person utilises these components is apparent. One view specifies that competence involves selecting components according to specific situations, as required. A second view places greater emphasis on the synergy that results from the use of a combination of components in a given situation. Conclusions These conceptual distinctions have many implications for the way CBME is implemented. 
A conception of competence as the selection of components may lead to a greater emphasis, in a training setting, on the mastery of each component separately. A conception of competence as the use of a combination of components leads to greater emphasis on the synergy that results as they are deployed in clinical situations.
CONTEXT Programmes of assessment should measure the various components of clinical competence. Clinical reasoning has traditionally been assessed using written tests and performance-based tests. The script concordance test (SCT) was developed to assess clinical data interpretation skills. A recent review of the literature examined the validity argument concerning the SCT. Our aim was to provide potential users with evidence-based recommendations on how to construct and implement an SCT. METHODS Our literature search was broad and included references from medical education journals not indexed in the usual databases, conference abstracts and dissertations. RESULTS The search yielded 848 references, of which 80 were analysed. Studies suggest that tests with around 100 items (25-30 cases), of which 25% are discarded after item analysis, should provide reliable scores. Panels with 10-20 members are needed to reach adequate precision in terms of estimated reliability. Panellists' responses can be analysed by checking for moderate variability among responses. Studies of alternative scoring methods are inconclusive, but the traditional scoring method is satisfactory. There is little evidence on how best to determine a pass/fail threshold for high-stakes examinations. CONCLUSIONS There is good evidence on how to construct and implement an SCT for formative purposes or medium-stakes course evaluations. Further avenues for research include examining the impact of various aspects of SCT construction and implementation on issues such as educational impact, correlations with other assessments, and validity of pass/fail decisions, particularly for high-stakes examinations.
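The recommendation to check panellists' responses for moderate variability can be sketched as a simple screening rule applied during item analysis. The thresholds and function name below are illustrative assumptions, not values prescribed by the literature: the idea is only that near-unanimous items behave like single-best-answer questions, while highly dispersed items suggest the panel found the vignette ambiguous.

```python
from collections import Counter

def screen_item(panel_answers, min_agreement=0.4, max_agreement=0.9):
    """Flag items whose panel responses show too little or too much spread.

    Agreement is the fraction of panellists choosing the modal answer.
    Thresholds here are illustrative, chosen only to demonstrate the
    'moderate variability' screen; real cut-offs would be set locally.
    """
    counts = Counter(panel_answers)
    agreement = max(counts.values()) / len(panel_answers)
    if agreement > max_agreement:
        return "discard: panel nearly unanimous"
    if agreement < min_agreement:
        return "discard: panel responses too dispersed"
    return "keep"
```

For example, an item where all 10 panellists answer 0 would be discarded as unanimous, one where responses spread evenly across the whole -2..+2 scale would be discarded as too dispersed, and an item with a clear but non-unanimous modal answer would be kept, consistent with the roughly 25% discard rate reported above.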
Background: Clinical reasoning is at the core of health professionals' practice. A mapping of what constitutes clinical reasoning could support the teaching, development, and assessment of clinical reasoning across the health professions. Methods: We conducted a scoping study to map the clinical reasoning literature across the health professions, in the context of a larger Best Evidence Medical Education (BEME) review on clinical reasoning assessment. Seven databases were searched using subheadings and terms relating to clinical reasoning, assessment, and the health professions. Data analysis focused on a comprehensive analysis of bibliometric characteristics and the varied terminology used to refer to clinical reasoning. Results: The literature identified comprised 625 papers spanning 47 years (1968-2014), in 155 journals, from 544 first authors, across 18 health professions. Thirty-seven percent of papers used the term clinical reasoning, and 110 other terms referring to the concept of clinical reasoning were identified. Consensus on the categorization of terms was reached for 65 terms across six categories: reasoning skills, reasoning performance, reasoning process, outcome of reasoning, context of reasoning, and purpose/goal of reasoning. The categories of terminology used differed across health professions and publication types. Discussion: Many diverse terms were present and were used differently across literature contexts. These terms likely reflect different operationalizations, or conceptualizations, of clinical reasoning, as well as the complex, multidimensional nature of this concept. We advise authors to make the intended meaning of 'clinical reasoning' and associated terms in their work explicit in order to facilitate teaching, assessment, and research communication.
Background: Script theory proposes an explanation for how information is stored in and retrieved from the human mind to influence individuals’ interpretation of events in the world. Applied to medicine, script theory focuses on knowledge organization as the foundation of clinical reasoning during patient encounters. According to script theory, medical knowledge is bundled into networks called ‘illness scripts’ that allow physicians to integrate new incoming information with existing knowledge, recognize patterns and irregularities in symptom complexes, identify similarities and differences between disease states, and make predictions about how diseases are likely to unfold. These knowledge networks are updated and refined through experience and learning. The implications of script theory for medical education are profound. Since clinician-teachers cannot simply transfer their customized collections of illness scripts into the minds of learners, they must create opportunities to help learners develop and fine-tune their own sets of scripts. In this essay, we provide a basic sketch of script theory, outline the role that illness scripts play in guiding reasoning during clinical encounters, and propose strategies for aligning teaching practices in the classroom and the clinical setting with the basic principles of script theory.
In order to improve the current state of affairs in the management of clinical reasoning difficulties, a collective paradigm shift is required: residency must come to be perceived not as an apprenticeship but as a structured educational programme. Faculty development programmes should be designed in an integrated way so that they not only develop clinical educators' skills, but also modify their beliefs.
There are many obstacles to the timely identification of clinical reasoning difficulties in health professions education. This guide aims to provide readers with a framework for supervising clinical reasoning and identifying potential difficulties as they occur at each step of the reasoning process.