The script concordance test (SCT) is used in health professions education to assess a specific facet of clinical reasoning competence: the ability to interpret medical information under conditions of uncertainty. Grounded in established theoretical models of knowledge organization and clinical reasoning, the SCT has three key design features: (1) respondents are faced with ill-defined clinical situations and must choose between several realistic options; (2) the response format reflects the way information is processed in challenging problem-solving situations; and (3) scoring takes into account the variability of responses of experts to clinical situations. SCT scores are meant to reflect how closely respondents' ability to interpret clinical data compares with that of experienced clinicians in a given knowledge domain. A substantial body of research supports the SCT's construct validity, reliability, and feasibility across a variety of health science disciplines, and across the spectrum of health professions education from pre-clinical training to continuing professional development. In practice, its performance as an assessment tool depends on careful item development and diligent panel selection. This guide, intended as a primer for the uninitiated in SCT, will cover the basic tenets, theoretical underpinnings, and construction principles governing script concordance testing.
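The third design feature, scoring that accounts for expert variability, is typically implemented with the aggregate scoring method: each response option earns credit proportional to the number of reference-panel experts who chose it, with the modal answer earning full credit. A minimal sketch, assuming a hypothetical 10-expert panel answering on the usual −2 to +2 Likert scale (the panel data are illustrative, not from the source):

```python
from collections import Counter

def sct_item_credits(panel_responses):
    """Credit for each Likert option (-2..+2) on one SCT item.

    Credit is the fraction of panelists choosing that option,
    normalized so the modal (most popular) answer earns 1.0 --
    the standard aggregate scoring method.
    """
    counts = Counter(panel_responses)
    modal = max(counts.values())
    return {option: counts.get(option, 0) / modal for option in range(-2, 3)}

# Hypothetical panel of 10 experts rating one item
panel = [-1, 0, 0, 0, 1, 1, 1, 1, 1, 2]
credits = sct_item_credits(panel)
# Modal response +1 (5 of 10 panelists) earns full credit;
# 0 (3 panelists) earns partial credit; -2 (no panelists) earns none.
```

An examinee's test score is then the sum of the credits earned across items, so agreement with a minority of experts still earns partial credit rather than being marked simply wrong.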
CONTEXT: Script concordance test (SCT) scores are intended to reflect respondents' competence in interpreting clinical data under conditions of uncertainty. The validity of inferences based on SCT scores has not been rigorously established. OBJECTIVES: This study was conducted to develop a structured validity argument for the interpretation of test scores derived through use of the script concordance method. METHODS: We searched the PubMed, EMBASE and PsycINFO databases for articles pertaining to script concordance testing. We then reviewed these articles to evaluate the construct validity of the script concordance method, following an established approach for analysing validity data from five categories: content; response process; internal structure; relations to other variables; and consequences. RESULTS: Content evidence derives from clear guidelines for the creation of authentic, ill-defined scenarios. High internal consistency reliability supports the internal structure of SCT scores. As might be expected, SCT scores correlate poorly with assessments of pure factual knowledge, and these correlations are lower for more advanced learners. The validity of SCT scores is only weakly supported by evidence pertaining to examinee response processes and educational consequences. CONCLUSIONS: Published research generally supports the use of SCT to assess the interpretation of clinical data under conditions of uncertainty, although specifics of the validity argument vary and require verification in different contexts and for particular SCTs. Our review identifies potential areas of further validity inquiry in all five categories of evidence. In particular, future SCT research might explore the impact of the script concordance method on teaching and learning, and examine how SCTs integrate with other assessment methods within comprehensive assessment programmes.
An unprecedented rise in health professions education (HPE) research has led to increasing attention and interest in knowledge syntheses. There are many different types of knowledge syntheses in common use, including systematic reviews, meta-ethnography, rapid reviews, narrative reviews, and realist reviews. In this Perspective, the authors examine the nature, purpose, value, and appropriate use of one particular method: scoping reviews. Scoping reviews are iterative and flexible and can serve multiple main purposes: to examine the extent, range, and nature of research activity in a given field; to determine the value and appropriateness of undertaking a full systematic review; to summarize and disseminate research findings; and to identify research gaps in the existing literature. Despite the advantages of this methodology, there are concerns that it is a less rigorous and defensible means to synthesize HPE literature. Drawing from published research and from their collective experience with this methodology, the authors present a brief description of scoping reviews, explore the advantages and disadvantages of scoping reviews in the context of HPE, and offer lessons learned and suggestions for colleagues who are considering conducting scoping reviews. Examples of published scoping reviews are provided to illustrate the steps involved in the methodology.
The MOT model of clinical reasoning processes has potentially important applications for use within undergraduate and graduate medical curricula to inform teaching, learning and assessment. Specifically, it could be used to support curricular development because it can help to identify opportune moments for learning specific elements of clinical reasoning. It could also be used to precisely identify and remediate reasoning errors in students, residents and practising doctors with persistent difficulties in clinical reasoning.
Background: Script theory proposes an explanation for how information is stored in and retrieved from the human mind to influence individuals' interpretation of events in the world. Applied to medicine, script theory focuses on knowledge organization as the foundation of clinical reasoning during patient encounters. According to script theory, medical knowledge is bundled into networks called 'illness scripts' that allow physicians to integrate new incoming information with existing knowledge, recognize patterns and irregularities in symptom complexes, identify similarities and differences between disease states, and make predictions about how diseases are likely to unfold. These knowledge networks become updated and refined through experience and learning. The implications of script theory for medical education are profound. Since clinician-teachers cannot simply transfer their customized collections of illness scripts into the minds of learners, they must create opportunities to help learners develop and fine-tune their own sets of scripts. In this essay, we provide a basic sketch of script theory, outline the role that illness scripts play in guiding reasoning during clinical encounters, and propose strategies for aligning teaching practices in the classroom and the clinical setting with the basic principles of script theory.
Background: Clinical reasoning is at the core of health professionals' practice. A mapping of what constitutes clinical reasoning could support the teaching, development, and assessment of clinical reasoning across the health professions. Methods: We conducted a scoping study to map the literature on clinical reasoning across the health professions in the context of a larger Best Evidence Medical Education (BEME) review on clinical reasoning assessment. Seven databases were searched using subheadings and terms relating to clinical reasoning, assessment, and the health professions. Data analysis focused on a comprehensive analysis of bibliometric characteristics and the varied terminology used to refer to clinical reasoning. Results: The literature identified comprised 625 papers spanning 47 years (1968-2014), in 155 journals, from 544 first authors, across 18 health professions. Thirty-seven percent of papers used the term clinical reasoning, and 110 other terms referring to the concept of clinical reasoning were identified. Consensus on the categorization of terms was reached for 65 terms across six different categories: reasoning skills, reasoning performance, reasoning process, outcome of reasoning, context of reasoning, and purpose/goal of reasoning. The categories of terminology used differed across health professions and publication types. Discussion: Many diverse terms were present and were used differently across literature contexts. These terms likely reflect different operationalisations, or conceptualizations, of clinical reasoning as well as the complex, multi-dimensional nature of this concept. We advise authors to make the intended meaning of 'clinical reasoning' and associated terms in their work explicit in order to facilitate teaching, assessment, and research communication.
This transformation method proposes a common metric for reporting SCT scores and provides examinees with clear, interpretable insight into their performance relative to that of physicians in the field. We recommend reporting SCT scores with the mean and standard deviation of panel scores set at standard scores of 80 and 5, respectively. Beyond SCT, our transformation method may be generalizable to the scoring of other test formats in which the performance of examinees is compared with that of a reference panel undertaking the same cognitive tasks.
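The recommended transformation, anchoring the panel mean at 80 and the panel standard deviation at 5, is a linear standard-score conversion. A minimal sketch of the arithmetic; the panel raw scores below are illustrative assumptions, not data from the source:

```python
import statistics

def to_standard_score(raw, panel_raw_scores, target_mean=80.0, target_sd=5.0):
    """Linearly rescale a raw SCT score so the reference panel's
    distribution has the target mean and standard deviation."""
    panel_mean = statistics.mean(panel_raw_scores)
    panel_sd = statistics.stdev(panel_raw_scores)
    z = (raw - panel_mean) / panel_sd  # position relative to the panel
    return target_mean + target_sd * z

# Hypothetical raw scores of a 5-member reference panel (mean = 78)
panel = [72.0, 75.0, 78.0, 81.0, 84.0]
print(to_standard_score(78.0, panel))  # an examinee at the panel mean maps to 80.0
```

On this metric an examinee scoring 75 is immediately interpretable as one panel standard deviation below the average reference physician, regardless of the test's raw-score scale.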