Most human-drone interfaces, such as joysticks and remote controllers, demand sustained attention and trained skills during teleoperation. Wearable interfaces could enable more natural and intuitive control of drones, making this technology accessible to a larger population of users. In this letter, we describe a soft exoskeleton, called the FlyJacket, designed for naïve users who want to control a drone with upper-body gestures in an intuitive manner. The exoskeleton includes a motion-tracking device to monitor body movements and an arm-support system to prevent fatigue, and is coupled with goggles that provide a first-person view from the drone's perspective. Tests were performed with participants flying a simulated fixed-wing drone moving at constant speed; participants' performance was more consistent when using the FlyJacket with the arm support than when performing the same task with a remote controller. Furthermore, participants felt more immersed, had a stronger sensation of flying, and reported less fatigue when the arm support was enabled. The FlyJacket has also been demonstrated for the teleoperation of a real drone. Index Terms: Human-robot interaction, telerobotics and teleoperation, virtual reality and interfaces, wearable robots.
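As an illustration of how upper-body gestures can drive a fixed-wing drone flying at constant speed, the sketch below maps torso pitch and roll angles to normalized control commands. The linear mapping and the ±30° range are illustrative assumptions, not the FlyJacket's actual calibration.

```python
def gesture_to_command(torso_pitch_deg, torso_roll_deg, max_angle_deg=30.0):
    """Map torso pitch/roll angles (degrees) to normalized pitch/roll
    commands in [-1, 1] for a fixed-wing drone at constant speed.

    The linear mapping and the +/-30 degree full-scale range are
    assumptions for illustration only.
    """
    def clamp(angle):
        return max(-1.0, min(1.0, angle / max_angle_deg))

    return clamp(torso_pitch_deg), clamp(torso_roll_deg)
```

For example, leaning forward 15° with a 45° roll to the left would yield a half-scale pitch command and a saturated roll command: `gesture_to_command(15.0, -45.0)` returns `(0.5, -1.0)`.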
The use of drones in search and rescue (SAR) missions can be cognitively very demanding. Since high levels of cognitive workload can degrade human performance, there is a risk of compromising the mission and causing failures with catastrophic outcomes. Cognitive workload monitoring is therefore key to preventing rescuers from making dangerous decisions. Because gathering data during real SAR missions is difficult, we rely on virtual reality. In this work, we use a simulator to induce three levels of cognitive workload related to SAR missions with drones. To detect cognitive workload, we extract features from several physiological signals: electrocardiogram, respiration, skin temperature, and photoplethysmography. We propose a recursive feature elimination method that combines an eXtreme Gradient Boosting (XGBoost) algorithm with SHapley Additive exPlanations (SHAP) scores to select the most representative features. Moreover, we address both binary and three-class detection. To this end, we investigate several machine-learning algorithms: XGBoost, random forest, decision tree, k-nearest neighbors, logistic regression, linear discriminant analysis, Gaussian naïve Bayes, and support vector machine. Our results show that, on an unseen test set drawn from 24 volunteers, an XGBoost model with 24 features reaches accuracies of 80.2% and 62.9% when detecting two and three levels of cognitive workload, respectively. These results open the door to fine-grained cognitive workload detection in the field of SAR missions.
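The recursive feature elimination described above can be sketched as follows. To keep the example self-contained, the absolute Pearson correlation with the label stands in for the XGBoost + SHAP importance score used in the paper; only the elimination loop itself mirrors the method.

```python
import numpy as np


def recursive_feature_elimination(X, y, n_keep, step=1):
    """Illustrative recursive feature elimination.

    Each round scores the surviving features and drops the `step`
    lowest-scoring ones until `n_keep` remain. The score here is the
    absolute Pearson correlation with the label -- a simple stand-in
    for the XGBoost + SHAP importances used in the paper. Returns the
    column indices of the retained features.
    """
    remaining = list(range(X.shape[1]))
    while len(remaining) > n_keep:
        scores = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in remaining]
        order = np.argsort(scores)          # weakest features first
        k = min(step, len(remaining) - n_keep)
        drop = {remaining[i] for i in order[:k]}
        remaining = [j for j in remaining if j not in drop]
    return remaining
```

On synthetic data where one feature tracks the label and the rest are noise, the loop retains the informative feature, which is the behavior the XGBoost + SHAP variant is designed to achieve at scale.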
High levels of cognitive workload decrease human performance and lead to failures with catastrophic outcomes in risky missions. Reliable cognitive workload detection remains a major challenge, since workload is not directly observable. However, cognitive workload affects several physiological signals that can be measured noninvasively. The main goal of this work is to develop a reliable machine-learning algorithm that identifies the cognitive workload induced during rescue missions, evaluated through drone-control simulation experiments. In addition, we aim to minimize computing resource usage while maximizing detection accuracy for reliable real-time operation. We performed an experiment in which 24 subjects played a rescue-mission simulator while respiration, electrocardiogram, photoplethysmogram, and skin-temperature signals were measured. State-of-the-art feature-based machine-learning algorithms are investigated for cognitive workload characterization using learning curves, data augmentation, and cross-validation techniques. The best classification algorithm is selected and optimized, and the most informative features are chosen. Finally, the generalization power of the optimized model is evaluated on an unseen test set. We obtain an accuracy of 86% on the new unseen data using the proposed and optimized eXtreme Gradient Boosting (XGB) algorithm. We then reduce the complexity of the model for future implementation on resource-constrained wearable embedded systems by optimizing the model and selecting the 26 most important features. Overall, a generalizable and low-complexity machine-learning model for cognitive workload detection based on physiological signals is presented for the first time in the literature.
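A minimal sketch of the cross-validation step used when comparing classifiers is shown below. Whether the paper's splits are random or subject-wise across the 24 subjects is not stated here, so the random shuffle is an assumption; for physiological data a subject-wise split is often preferred so the same person never appears in both training and validation folds.

```python
import numpy as np


def kfold_indices(n_samples, k, seed=0):
    """Split sample indices into k shuffled, near-equal folds for
    cross-validation. Generic sketch: a subject-wise variant would
    group indices by subject ID instead of shuffling samples freely.
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    return np.array_split(idx, k)
```

Each fold serves once as the validation set while the remaining folds train the model; the held-out test set mentioned in the abstract stays untouched until the final evaluation.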
Monitoring stress and, more generally, emotions has attracted considerable attention over the past few decades. Stress monitoring has many applications, including high-risk missions and surgical procedures as well as mental and emotional health monitoring. In this paper, we evaluate the feasibility of stress and emotion monitoring using off-the-shelf wearable sensors. To this end, we propose a multi-modal machine-learning technique for detecting acute stress episodes by fusing the information carried in several biosignals and wearable sensors. Furthermore, we investigate the contribution of each wearable sensor to stress detection and demonstrate the possibility of acute stress recognition using wearable devices. In particular, we acquire the physiological signals using the Shimmer3 ECG Unit and the Empatica E4 wristband. Our experimental evaluation shows that acute stress episodes can be detected with an accuracy of 84.13% on an unseen test set using multi-modal machine-learning and sensor-fusion techniques.
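A common way to fuse information from several wearables, and a plausible reading of the sensor-fusion step here, is feature-level fusion: normalize each sensor's feature block and concatenate them so a single classifier sees all modalities. The sketch below is a generic illustration under that assumption, not the paper's exact pipeline.

```python
import numpy as np


def fuse_features(feature_blocks):
    """Feature-level fusion across sensors (e.g. Shimmer3 ECG features
    and Empatica E4 wristband features).

    Each block is z-normalized per feature so no single sensor's scale
    dominates, then the blocks are concatenated along the feature axis.
    Generic sketch; the normalization choice is an assumption.
    """
    normed = []
    for X in feature_blocks:
        mu = X.mean(axis=0)
        sd = X.std(axis=0)
        sd[sd == 0] = 1.0  # guard against constant features
        normed.append((X - mu) / sd)
    return np.hstack(normed)
```

The fused matrix then feeds one classifier, which is also what makes per-sensor ablation straightforward: refit with one block left out and compare accuracies to estimate that sensor's contribution.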
The use of drones is gaining particular interest in the field of search and rescue. However, considerable skill is still required to operate a drone in a mission without crashing it, which limits their effective and efficient employment in real missions. Thus, to assist rescuers operating in stressful conditions, there is a need to detect increases in workload that could compromise the outcome of the mission. In this work, a simulator is designed and used to induce different levels of cognitive workload related to search and rescue missions. Physiological signals are recorded, and features are extracted from them to estimate cognitive workload. The NASA Task Load Index is used as a subjective, self-reported workload reference, and task performance is recorded as an objective measure of execution. Finally, analysis of variance (ANOVA) is used to verify intra- and inter-subject variability. Results show a statistically significant decrease of the mean normal-to-normal (NN) interval as cognitive workload increases. Moreover, performance decreases as cognitive workload increases. This information can be used to detect the need for assistance.
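The mean NN interval reported above is simply the average time between consecutive normal heartbeats, so a shorter interval means a faster heart rate. A minimal sketch, assuming the ECG R-peaks have already been detected and cleaned of ectopic beats:

```python
import numpy as np


def mean_nn_interval(r_peak_times_s):
    """Mean normal-to-normal (NN) interval in milliseconds, computed
    from ECG R-peak timestamps given in seconds. Assumes the peak
    series is already free of ectopic beats and detection artifacts.
    """
    nn = np.diff(np.asarray(r_peak_times_s, dtype=float))
    return 1000.0 * nn.mean()
```

For a steady 75 bpm rhythm (one beat every 0.8 s), the function returns 800 ms; under higher cognitive workload the experiment observed this value decreasing.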