We investigate the efficacy of incorporating real-time feedback of user performance within mixed-reality environments (MREs) for training real-world tasks with tightly coupled cognitive and psychomotor components. This paper presents an approach to providing real-time evaluation and visual feedback of learner performance in an MRE for training clinical breast examination (CBE). In a user study of experienced and novice CBE practitioners (n = 69), novices receiving real-time feedback performed as well as or better than more experienced practitioners in the completeness and correctness of the exam. A second user study (n = 8) followed novices through repeated practice of CBE in the MRE. Results indicate that skills improvement in the MRE transfers to the real-world task of CBE of human patients. This initial case study demonstrates the efficacy of MREs incorporating real-time feedback for training real-world cognitive-psychomotor tasks.
Virtual human (VH) experiences are receiving increased attention for training real-world interpersonal scenarios. Communication in interpersonal scenarios consists not only of speech and gestures, but also relies heavily on haptic interaction: interpersonal touch. By adding haptic interaction to VH experiences, the bandwidth of human-VH communication can be increased to approach that of human-human communication. To afford haptic interaction, a new species of embodied agent is proposed: mixed reality humans (MRHs). An MRH is a virtual human embodied by a tangible interface that shares the same registered space. The tangible interface affords the haptic interaction that is critical to effective simulation of interpersonal scenarios. We applied MRHs to simulate a virtual patient requiring a breast cancer screening (medical interview and physical exam). The design of the MRH patient is presented. This paper also presents the results of a pilot study in which eight (n = 8) physician-assistant students performed a clinical breast exam on the MRH patient. Results show that when afforded haptic interaction with an MRH patient, users demonstrated interpersonal touch and social engagement similar to interacting with a human patient.

INTRODUCTION

Virtual human (VH) experiences are increasingly being used for training real-world interpersonal scenarios, for example, military leadership [13] and doctor-patient interviews [15]. These human-VH interactions simulate a human-human interaction by providing two-way verbal and gestural communication. Prior research using these systems has shown that the efficacy of a VH experience would be significantly enhanced by integrating the haptic component of interpersonal communication [15]. This would, in effect, increase the bandwidth of human-VH communication. We expand on current VH experiences by affording haptic interaction with the VH.
This paper proposes a new species of embodied agent that affords haptic interaction by combining virtual and real spaces: mixed reality humans. A mixed reality human (MRH) is a virtual human with a physical embodiment in the form of a tangible interface. By merging virtual and real spaces, MRHs afford haptic interaction between human and VH (Figure 1). Mixed reality humans allow for:

1. Interpersonal touch between the human and VH. Interpersonal touch is a critical component of non-verbal communication that affects how people perceive those they communicate with, increases information flow, and aids in conveying empathy [11, 7]. Affording haptic interaction with a VH will allow VH experiences to more accurately and effectively simulate interpersonal communication.

2. VH experiences to train interpersonal scenarios that require interpersonal touch. Without affording touch, the domain of current VH experiences is limited. By affording haptic interaction, VH experiences can simulate a wider range of real-human interpersonal scenarios, such as medical physical exams.

This paper presents the design of an MRH breast exam patient and the results of a pilot...
This paper presents Mixed Reality Humans (MRHs), a new type of embodied agent enabling touch-driven communication. Affording touch between human and agent allows MRHs to simulate interpersonal scenarios in which touch is crucial. Two studies provide an initial evaluation of user behavior with an MRH patient and of the usability and acceptability of an MRH patient for practice and evaluation of medical students' clinical skills. In Study I (n = 8), students treated MRHs as social actors more than students in prior interactions with virtual human patients (n = 27) did, and used interpersonal touch to comfort and reassure the MRH patient similarly to prior interactions with human patients (n = 76). In the within-subjects Study II (n = 11), medical students performed a clinical breast exam on both an MRH and a human patient. Participants performed equivalent exams with the MRH and human patients, demonstrating the usability of MRHs for evaluating students' exam skills. The acceptability of the MRH patient for practicing exam skills was high, as students rated the experience as believable and educationally beneficial. Acceptability improved from Study I to Study II due to an increase in the MRH's visual realism, demonstrating that visual realism is critical for simulation of specific interpersonal scenarios.
This paper proposes an approach to mixed environment training of manual tasks requiring concurrent use of psychomotor and cognitive skills. To train concurrent use of both skill sets, the learner is provided real-time generated, in-situ presented visual feedback of her performance. This feedback provides reinforcement and correction of psychomotor skills concurrently with guidance in developing cognitive models of the task. The general approach is: 1) sensors placed in the physical environment detect in real time a learner's manipulation of physical objects; 2) sensor data is input to models of task performance, which output quantitative measures of the learner's performance; 3) pre-defined rules transform the learner's performance data into visual feedback presented in real time and in-situ with the physical objects being manipulated. With guidance from medical education experts, we have applied this approach to a mixed environment for learning clinical breast exams (CBEs). CBE belongs to a class of tasks that require learning multiple cognitive elements and task-specific psychomotor skills. Traditional approaches to learning CBEs and other joint psychomotor-cognitive tasks rely on extensive one-on-one training with an expert providing subjective feedback. By integrating real-time visual feedback of learners' quantitatively measured CBE performance, a mixed environment for learning CBEs provides on-demand learning opportunities with more objective, detailed feedback than is available from expert observation. The proposed approach applied to learning CBEs was informally evaluated by four expert medical educators and six novice medical students. This evaluation highlights that receiving real-time in-situ visual feedback of their performance gives students an advantage, over traditional approaches to learning CBEs, in developing correct psychomotor and cognitive skills.
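The three-step pipeline above (sensing, performance modeling, rule-based feedback) can be sketched in code. The following is a minimal illustration only: the sample structure, the grid-based coverage measure, and the pressure thresholds are hypothetical assumptions for exposition, not the authors' implementation.

```python
from dataclasses import dataclass

# Step 1: sensors report the learner's palpation as (position, pressure) samples.
@dataclass
class SensorSample:
    x: float          # normalized palpation position on the exam surface (0..1)
    y: float
    pressure: float   # normalized palpation pressure (0..1)

# Step 2: performance models turn raw samples into quantitative measures.
def score_coverage(samples, grid_size=4):
    """Fraction of a grid_size x grid_size exam area the learner has palpated."""
    touched = {(int(s.x * grid_size), int(s.y * grid_size))
               for s in samples if s.pressure > 0.1}
    return len(touched) / (grid_size * grid_size)

def score_pressure(samples, lo=0.3, hi=0.7):
    """Fraction of palpations falling inside an assumed target pressure band."""
    pressed = [s for s in samples if s.pressure > 0.1]
    if not pressed:
        return 0.0
    return sum(1 for s in pressed if lo <= s.pressure <= hi) / len(pressed)

# Step 3: pre-defined rules map the measures to in-situ visual feedback cues.
def feedback(samples):
    cues = []
    if score_coverage(samples) < 0.9:
        cues.append("highlight unexamined regions")
    if score_pressure(samples) < 0.8:
        cues.append("show pressure gauge: adjust palpation force")
    return cues
```

In a real system the feedback cues would drive in-situ rendering (e.g., coloring examined regions on the physical breast model) rather than returning strings; the sketch only shows how quantitative measures and rules decouple sensing from presentation.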
This paper proposes virtual social perspective-taking (VSP). In VSP, users are immersed in an experience of another person to aid in understanding that person's perspective. Users are immersed by 1) providing input to the user's senses from logs of the target person's senses, 2) instructing users to act and interact like the target, and 3) reminding users that they are playing the role of the target. These guidelines are applied to a scenario where taking the perspective of others is crucial: the medical interview. A pilot study (n = 16) using this scenario indicates that VSP elicits reflection on the perspectives of others and changes behavior in future, similar social interactions. By encouraging reflection and change, VSP advances the state-of-the-art in training social interactions with virtual experiences.