Since Dr. Howard Barrows introduced the human standardized patient in 1963 (Barrows, 1964), there have been attempts to create a computer-based simulacrum of the patient encounter, the first being a heart attack simulation on the online PLATO system (Bitzer, 1966). With the now-ubiquitous use of computers in medicine, considerable interest and effort have been expended on Virtual Patients (VPs). One problem in trying to understand VPs is that several quite distinct educational approaches are all called a ‘virtual patient.’ This article is not a general review of virtual patients, as current reviews of excellent quality exist (Poulton & Balasubramaniam, 2011; Cook & Triola, 2009), and ample research already demonstrates their efficacy (Triola et al., 2006). Instead, this article assesses the distinct, often mutually exclusive approaches that authors call “virtual patients,” analyzes the interaction structure or ‘game-play’ of each, and considers the best-use scenarios for each design strategy. It also explores dialogue-based conversational agents as virtual patients and the technology approaches to creating them. Finally, the authors offer a theoretical approach that synthesizes several educational approaches over the course of a medical encounter and recommend the optimal technology for each type of encounter desired.
Objective. Multi-patient care is an important skill for medical trainees in the emergency department (ED). While resident efficiency is the typically measured metric, multi-patient care involves both efficiency and diagnostic/treatment accuracy. Multi-patient care ability is difficult to assess directly; simulation is a potential alternative. Our objective was to generate validity evidence for a serious game in assessing multi-patient care skills among a variety of learners. Methods. This was a cross-sectional validation study using a digital serious game, VitalSigns™, that simulates multi-patient care within a pediatric ED. Subjects completed 5 virtual “shifts,” triaging, stabilizing, and discharging or admitting patients within a fixed time period; patients arrived at cascading intervals, with pre-programmed deterioration if neglected. Predictor variables included generic multi-tasking ability, video game experience, medical knowledge, and clinical efficiency with real patients. Outcome metrics in 3 domains measured diagnostic accuracy (i.e., critical orders, diagnoses), efficiency (i.e., number of patients, time-to-order), and critical thinking (number of differential diagnoses); MANOVA determined differences between novice learners and expected expert physicians, and Spearman rank correlation determined associations between game metrics and level of training. Results. Ninety-five subjects’ gameplays were analyzed. Diagnostic accuracy and efficiency distinguished skill level between residency-trained (residents, fellows, and attendings) and pre-residency (medical students and undergraduates) subjects, particularly for critical orders, patients seen, and correct diagnoses (p < 0.003). There were moderate to strong correlations between the game’s diagnostic accuracy and efficiency metrics and level of training, including patients seen (rho = 0.47, p < 0.001); critical orders (rho = 0.80, p < 0.001); time-to-order (rho = −0.24, p = 0.025); and correct diagnoses (rho = 0.69, p < 0.001). Video game experience also correlated with patients seen (rho = 0.24, p = 0.003). Conclusion. A digital serious game depicting a busy virtual ED can distinguish expected expertise in multi-patient care at the pre- vs. post-residency level. Further study can focus on whether the game appropriately assesses skill acquisition during residency.
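The correlation analysis reported above can be illustrated with a minimal sketch. The data, column names, and training-level coding below are hypothetical, and the use of scipy.stats.spearmanr is an assumption; the abstract does not specify the authors' software or variable coding.

    # Illustrative sketch only: hypothetical per-subject data relating an
    # ordinal training level to game metrics via Spearman rank correlation.
    import pandas as pd
    from scipy.stats import spearmanr

    # Hypothetical data: training level coded 0 = undergraduate,
    # 1 = medical student, 2 = resident, 3 = fellow, 4 = attending.
    df = pd.DataFrame({
        "training_level":    [0, 1, 2, 3, 4, 2, 3, 1],
        "patients_seen":     [4, 6, 9, 11, 12, 8, 10, 5],
        "critical_orders":   [1, 3, 6, 8, 9, 5, 7, 2],
        "time_to_order_min": [14, 11, 7, 5, 4, 8, 6, 12],
    })

    # One rank correlation per game metric, mirroring the reported rho values.
    for metric in ["patients_seen", "critical_orders", "time_to_order_min"]:
        rho, p = spearmanr(df["training_level"], df[metric])
        print(f"{metric}: rho = {rho:.2f}, p = {p:.3f}")

With real data, a metric such as time-to-order would be expected to correlate negatively with training level, as in the reported rho = −0.24.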
Introduction. High-value care (HVC) suggests that good history taking and physical examination should lead to risk stratification that drives the use or withholding of diagnostic testing. This study describes the development of a series of virtual standardized patient (VSP) cases and provides preliminary evidence supporting their ability to provide experiential learning in HVC. Methods. This pilot study used VSPs, or natural language processing–based patient avatars, within the USC Standard Patient platform. Faculty consensus was used to develop the cases, including the optimal diagnostic testing strategies, treatment options, and scored content areas. First-year resident physician learners attended two 90-minute didactic sessions before completing the cases in a computer laboratory, using typed text to interview the avatar for history taking and then completing physical examination, differential diagnosis, diagnostic testing, and treatment modules for each case. Learners chose a primary and 2 alternative “possible” diagnoses from a list of 6 to 7 choices, diagnostic testing options from an extensive list, and treatments from a brief list of 6 to 9 choices. For the history-taking module, both faculty and the platform scored the learners, and faculty assessed the appropriateness of avatar responses. Four randomly selected learner-avatar interview transcripts for each case were double-rated by faculty for interrater reliability calculations. Intraclass correlations were calculated for interrater reliability, and Spearman ρ was used to determine the correlation between the platform's and the faculty's rankings of learners' history-taking scores. Results. Eight VSP cases were experienced by 14 learners. Investigators reviewed 112 transcripts (4646 learner query-avatar responses). Interrater reliability means were 0.87 for learner query scoring and 0.83 for avatar response scoring. Mean learner success in history taking was scored at 57% by faculty and 51% by the platform (ρ correlation of learner rankings = 0.80, P = 0.02). The mean avatar appropriate-response rate was 85.6% across all cases. Learners chose the correct diagnosis within their 3 choices 82% of the time, ordered a median (interquartile range) of 2 (2) unnecessary tests, and completed 56% of optimal treatments. Conclusions. Our avatar appropriate-response rate was similar to past work using similar platforms. The simulations give detailed insights into the thoroughness of learner history taking and testing choices and, with further refinement, should support learning in HVC.
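A minimal sketch of the interrater-reliability calculation described above, assuming long-format ratings and the pingouin library's intraclass_corr function; the rater labels, scores, and choice of library are illustrative assumptions, as the abstract does not name the software or ICC variant used.

    # Illustrative sketch only: intraclass correlation for two faculty
    # raters double-rating the same learner-avatar transcripts.
    import pandas as pd
    import pingouin as pg

    # Hypothetical long-format data: two raters scoring four transcripts.
    ratings = pd.DataFrame({
        "transcript": [1, 2, 3, 4] * 2,
        "rater":      ["A"] * 4 + ["B"] * 4,
        "score":      [0.55, 0.62, 0.48, 0.71, 0.58, 0.60, 0.50, 0.69],
    })

    # pingouin returns all ICC variants; which one the authors used is
    # not stated, so inspect the full table.
    icc = pg.intraclass_corr(
        data=ratings, targets="transcript", raters="rater", ratings="score"
    )
    print(icc[["Type", "ICC", "CI95%"]])

With the study's actual ratings, the relevant ICC values would correspond to the reported means of 0.87 (learner query scoring) and 0.83 (avatar response scoring).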