Background
Basic life support (BLS) is crucial in the emergency response system, as sudden cardiac arrest remains a major cause of death worldwide. Unfortunately, only a minority of victims receive cardiopulmonary resuscitation (CPR) from bystanders. In this context, wider BLS training could help save more lives, and technology-enhanced BLS simulation is one possible way to deliver it.
Objective
The aim of this study is to assess the feasibility and acceptability of our augmented reality (AR) prototype as a tool for BLS training.
Methods
Holo-BLSD is an AR self-instruction training system in which a standard CPR manikin is "augmented" with an interactive virtual environment that reproduces realistic scenarios. Learners can use natural gestures, body movements, and spoken commands to perform their tasks, with virtual 3D objects anchored to the manikin and the environment. During the study, participants were trained to use the device while being guided through an emergency simulation and, at the end, were asked to complete a survey assessing the feasibility and acceptability of the proposed tool (5-point Likert scale; 1=Strongly Disagree, 5=Strongly Agree).
Results
The system was rated easy to use (mean 4.00, SD 0.94), and the trainees stated that most people would learn to use it very quickly (mean 4.00, SD 0.89). Voice (mean 4.48, SD 0.87), gaze (mean 4.12, SD 0.97), and gesture interaction (mean 3.84, SD 1.14) were judged positively, although some hand gesture recognition errors reduced the feeling of having the right level of control over the system (mean 3.40, SD 1.04).
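Summary statistics like those above are straightforward to reproduce from raw Likert responses. A minimal sketch using the Python standard library (the response vector is hypothetical, and we assume the population standard deviation; the study's actual data and SD convention are not specified here):

```python
from statistics import mean, pstdev

# Hypothetical 5-point Likert responses (1=Strongly Disagree, 5=Strongly Agree)
ease_of_use = [5, 4, 4, 3, 4]

m = mean(ease_of_use)     # arithmetic mean of the ratings
sd = pstdev(ease_of_use)  # population standard deviation

print(f"mean {m:.2f}, SD {sd:.2f}")  # -> mean 4.00, SD 0.63
```

If the paper reported the sample standard deviation instead, `statistics.stdev` would be the appropriate call.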
Conclusions
We found the Holo-BLSD system to be a feasible and acceptable tool for AR BLS training.
Event cameras are novel bio-inspired sensors that asynchronously capture pixel-level intensity changes in the form of "events". The innovative way they acquire data offers several advantages over standard devices, especially under poor lighting and high-speed motion. However, because these sensors are so new, there is a shortage of the large-scale training data needed to fully unlock their potential. The most common approach researchers use to address this issue is to leverage simulated event data. Yet this approach raises an open research question: how well do simulated data generalize to real data? To answer it, we propose to exploit, in the event-based context, recent Domain Adaptation (DA) advances from traditional computer vision, showing that DA techniques applied to event data help reduce the sim-to-real gap. To this purpose, we propose a novel architecture, which we call Multi-View DA4E (MV-DA4E), that better exploits the peculiarities of frame-based event representations while also promoting domain-invariant characteristics in features. Through extensive experiments, we prove the effectiveness of DA methods and MV-DA4E on N-Caltech101. Moreover, we validate their soundness in a real-world scenario through a cross-domain analysis on the popular RGB-D Object Dataset (ROD), which we extended to the event modality (RGB-E).

This paper is currently under review. It was partially supported by the ERC project RoboExNovo. Computational resources were partially provided by IIT.
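Frame-based event representations, as mentioned above, typically accumulate the asynchronous event stream into 2D grids so that standard CNNs, and hence standard DA methods, can process them. A minimal sketch of one common variant, a per-polarity event-count histogram (the event tuples, sensor size, and function name are illustrative assumptions, not the paper's exact encoding):

```python
import numpy as np

def events_to_count_frame(events, height, width):
    """Accumulate (x, y, t, polarity) events into a 2-channel count histogram.

    Channel 0 counts positive-polarity events, channel 1 negative ones.
    """
    frame = np.zeros((2, height, width), dtype=np.float32)
    for x, y, t, p in events:
        channel = 0 if p > 0 else 1
        frame[channel, y, x] += 1.0
    return frame

# Illustrative events: (x, y, timestamp, polarity)
events = [(3, 2, 0.001, +1), (3, 2, 0.002, +1), (5, 1, 0.003, -1)]
frame = events_to_count_frame(events, height=4, width=8)
print(frame[0, 2, 3])  # 2.0 -- two positive events landed at pixel (3, 2)
```

Other common encodings (voxel grids, time surfaces) follow the same pattern of rasterizing events onto a fixed grid, which is what makes ordinary image-domain DA techniques applicable.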