The link between effective basic life support (BLS) and survival following cardiac arrest is well known. Nurses are often first responders at in-hospital cardiac arrests and receive annual BLS training to ensure they have adequate skills, and student nurses are taught BLS in preparation for clinical practice. However, it is clear that some nurses still lack the confidence and skills to perform BLS in an emergency. This innovative mixed-methods study included 209 participants and compared confidence and skills in BLS training across three environments: non-immersive (a basic skills room), immersive (a room with video technology), and the Octave (a mixed reality facility). Skills were measured using a Laerdal QCPR training manikin, with data recorded on a wireless Laerdal SimPad, and pre- and post-training confidence levels were measured using a questionnaire. The non-immersive and immersive rooms were familiar environments in which the students felt more comfortable and relaxed, and thus more confident. The Octave offered the highest level of simulation, utilizing Virtual Reality (VR) technology; students felt less comfortable and less confident in the Octave, which we attribute to the unfamiliarity of the environment. The study identified that placing students in an unfamiliar environment influences the confidence and skills associated with BLS; this could be used as a way of preparing students and nurses with the emotional resilience needed to cope in stressful situations.
Supporting a wide set of linked non-verbal resources remains an evergreen challenge for communication technology, limiting its effectiveness in many applications. Interpersonal distance, gaze, posture and facial expression are interpreted together to manage and add meaning to most conversations, yet today's technologies favor some above others. This induces confusion in conversations and is believed to limit both feelings of togetherness and trust and the growth of empathy and rapport. Solving this problem would allow technologies to support most rather than a few interactional scenarios. It is likely to benefit teamwork and team cohesion, distributed decision-making, and health and wellbeing applications such as tele-therapy, tele-consultation, and support during isolation. We introduce withyou, our telepresence research platform. This paper describes the end-to-end system, including the psychology of human interaction and how this drives requirements throughout the design and implementation. Our technological approach is to combine the winning characteristics of video conferencing and immersive collaborative virtual environments, allowing, for example, people walking past each other to exchange a glance and a smile. A systematic explanation of the theory brings together the linked nature of non-verbal communication and how it is influenced by technology. This leads to functional requirements for telepresence in terms of the balance of visual, spatial and temporal qualities. The first end-to-end description of withyou covers all major processes and the display and capture environment. An unprecedented characterization of our approach is given in terms of the above qualities and what influences them. This leads to non-functional requirements in terms of the number and placement of cameras and the avoidance of resultant bottlenecks. Proposals are given for improved distribution of processes across networks, computers, and multi-core CPUs and GPUs.
Simple conservative estimation shows that both approaches should meet our requirements. One approach is implemented and shown to meet the minimum requirements and come close to the desirable ones.
This paper presents a Mixed Reality system that results from the integration of a telepresence system and an application to improve collaborative space exploration. The system combines free-viewpoint video with immersive projection technology to support non-verbal communication, including eye gaze, interpersonal distance and facial expression. Importantly, these can be interpreted together as people move around the simulation, maintaining natural social distance. The application is a simulation of Mars, within which the collaborators must come to agreement over, for example, where the Rover should land and go. The first contribution is the creation of a Mixed Reality system supporting contextualization of non-verbal communication. Two technological contributions are the prototyping of a technique to subtract a person from a background that may contain physical objects and/or moving images, and a lightweight texturing method for multi-view rendering that provides balance between visual and temporal quality. A practical contribution is the demonstration of pragmatic approaches to sharing space between display systems of distinct levels of immersion. A research tool contribution is a system that allows comparison of conventionally authored and video-based reconstructed avatars, within an environment that encourages exploration and social interaction. Aspects of system quality, including the communication of facial expression and end-to-end latency, are reported.
Removing the mask - do people over trust avatars reconstructed from video? Abstract. This experiment compared the detection of deceit across video conferencing and a fixed-viewpoint, 3D video based computer graphic medium. The purpose was to determine whether the process of 3D reconstruction influenced trust by reducing the detail of facial expression. Comparison with the literature investigates the impact of facial expression on trust. Inspiration comes from previous studies in the natural and virtual world that suggest a stronger tendency to over trust a person when their facial expression is hidden. A virtual avatar that copies head and eye movement but not facial movement could be argued to be akin to a person wearing a mask. Thus, our opening research question is: would a 3D medium that removed this mask result in a truth bias similar to video and therefore the real world? Two confederates each gave a set of accounts, of which half were true. These were captured and transmitted simultaneously in real time using 2D and full 3D video based communication mediums. Recordings of these sessions were later examined by two sets of participants. Twenty-one participants were asked to determine which accounts were true. Measures included accuracy at detecting truth and deceit, the resulting tendency to over trust, and the cognitive effort in determining truthfulness. Results show that participants performed and worked to a similar degree in both mediums. Findings are of interest to those developing 3D telepresence technologies and virtual humans, and to those concerned with the trustworthiness of a medium.