Stress remains a significant social problem for individuals in modern societies. This paper presents a machine learning approach for the automatic detection of stress in people in a social situation by combining two sensor systems that capture physiological and social responses. We compare the performance of different classifiers, including support vector machines, AdaBoost, and k-nearest neighbours. Our experimental results show that, by combining the measurements from both sensor systems, we can accurately discriminate between stressful and neutral situations during a controlled Trier social stress test (TSST). Moreover, this paper assesses the discriminative ability of each sensor modality individually and considers their suitability for real-time stress detection. Finally, we present a study of the most discriminative features for stress detection.
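The classifier comparison described above can be sketched as follows. This is a minimal illustration using scikit-learn with synthetic stand-in data, not the authors' dataset or feature set; the feature dimensions and fusion scheme are assumptions.

```python
# Hypothetical sketch: feature-level fusion of two sensor modalities and a
# comparison of SVM, AdaBoost, and kNN classifiers. All data are synthetic.
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import AdaBoostClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
physio = rng.normal(size=(100, 8))   # stand-in physiological features
social = rng.normal(size=(100, 4))   # stand-in social-response features
X = np.hstack([physio, social])      # simple feature-level fusion
y = rng.integers(0, 2, size=100)     # 1 = stressful, 0 = neutral

for name, clf in [("SVM", SVC()),
                  ("AdaBoost", AdaBoostClassifier()),
                  ("kNN", KNeighborsClassifier())]:
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean CV accuracy {scores.mean():.2f}")
```

With random labels the accuracies hover near chance; the point is only the fusion-then-compare structure.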
Artificial intelligence and all its supporting tools, e.g. machine and deep learning in computational intelligence-based systems, are rebuilding our society (economy, education, lifestyle, etc.) and promising a new era for the social welfare state. In this paper we summarize recent advances in data science and artificial intelligence within the interplay between natural and artificial computation. A review of recent works published in the latter field and the state of the art are summarized in a comprehensive and self-contained way to provide a baseline framework for the international community in artificial intelligence. Moreover, this paper aims to provide a complete analysis and some relevant discussions of the current trends and insights within several theoretical and application fields covered in the essay, from theoretical models in
The anticipatory recognition of braking is essential to prevent traffic accidents. For instance, driving assistance systems can be useful to properly respond to emergency braking situations. Moreover, the response time to emergency braking situations can be affected, and even increased, by different cognitive states of the driver caused by stress, fatigue, and extra workload. This work investigates the detection of emergency braking from the driver's electroencephalographic (EEG) signals that precede the brake pedal actuation. Bioelectrical signals were recorded while participants were driving in a car simulator and avoiding potential collisions by performing emergency braking. In addition, participants were subjected to stress, workload, and fatigue. EEG signals were classified using support vector machines (SVM) and convolutional neural networks (CNN) in order to discriminate between braking intention and normal driving. Results showed significant recognition of emergency braking intention, which was on average 71.1% for SVM and 71.8% for CNN. In addition, the classification accuracy for the best participant was 80.1% and 88.1% for SVM and CNN, respectively. These results show the feasibility of incorporating recognizable bioelectrical responses of the driver into advanced driver-assistance systems to carry out early detection of emergency braking situations, which could be useful to reduce car accidents.
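The SVM branch of the braking-intention pipeline above can be sketched as follows. The epoch dimensions, the mean-amplitude feature, and the train/test split are illustrative assumptions on synthetic data, not the authors' recording setup or feature extraction.

```python
# Hypothetical sketch: classifying pre-pedal EEG epochs as braking intention
# vs. normal driving with an SVM. All signals here are synthetic noise.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
n_epochs, n_channels, n_samples = 200, 8, 128
epochs = rng.normal(size=(n_epochs, n_channels, n_samples))  # EEG windows
labels = rng.integers(0, 2, size=n_epochs)  # 1 = braking intention, 0 = normal

# A toy feature: per-channel mean amplitude over the window
# (a stand-in for whatever features the original study used).
features = epochs.mean(axis=2)

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.25, random_state=0)
clf = SVC(kernel="rbf").fit(X_train, y_train)
acc = accuracy_score(y_test, clf.predict(X_test))
print(f"test accuracy: {acc:.2f}")
```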
Aim: The research described is intended to give a description of articulation dynamics as a correlate of the kinematic behavior of the jaw-tongue biomechanical system, encoded as a probability distribution of an absolute joint velocity. This distribution may be used in detecting and grading speech from patients affected by neurodegenerative illnesses such as Parkinson's Disease.
Hypothesis: The working hypothesis is that the probability density function of the absolute joint velocity includes information on the stability of phonation when applied to sustained vowels, as well as on fluency when applied to connected speech.
Methods: A dataset of sustained vowels recorded from Parkinson's Disease patients is contrasted with similar recordings from normative subjects. The probability distribution of the absolute kinematic velocity of the jaw-tongue system is extracted from each utterance. A Random Least Squares Feed-Forward Network (RLSFN) has been used as a binary classifier working on the pathological and normative datasets in a leave-one-out strategy. Monte Carlo simulations have been conducted to estimate the influence of the stochastic nature of the classifier. Two datasets, one for each gender, were tested, including 26 normative and 53 pathological subjects in the male set, and 25 normative and 38 pathological subjects in the female set.
Results: Male and female data subsets were tested in single runs, yielding equal error rates under 0.6% (accuracy over 99.4%). Due to the stochastic nature of each experiment, Monte Carlo runs were conducted to test the reliability of the methodology. The average detection results after 200 Monte Carlo runs of a 200-hyperplane hidden-layer RLSFN are given in terms of sensitivity (males: 0.9946, females: 0.9942), specificity (males: 0.9944, females: 0.9941) and accuracy (males: 0.9945, females: 0.9942). The area under the ROC curve is 0.9947 (males) and 0.9945 (females). The equal error rate is 0.0054 (males) and 0.0057 (females).
Conclusions: The proposed methodology shows that the use of highly normalized descriptors, such as the probability distribution of kinematic variables of vowel articulation stability, which has some interesting properties in terms of information theory, boosts the potential of simple yet powerful classifiers in producing quite acceptable detection results in Parkinson's Disease.
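A random least-squares feed-forward network of the kind named above (a fixed random hidden layer with a least-squares output readout, in the style of extreme learning machines) can be sketched as follows. This is a generic illustration on toy data; the hidden-layer size follows the abstract, but everything else is assumed.

```python
# Hypothetical sketch of an RLSFN-style classifier: random hidden hyperplanes,
# output weights fitted by least squares. Toy data, not the authors' code.
import numpy as np

rng = np.random.default_rng(0)

def fit_rlsfn(X, y, n_hidden=200):
    W = rng.normal(size=(X.shape[1], n_hidden))   # random hidden hyperplanes
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                        # hidden activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # least-squares readout
    return W, b, beta

def predict_rlsfn(X, W, b, beta):
    return (np.tanh(X @ W + b) @ beta > 0.5).astype(int)

X = rng.normal(size=(80, 10))     # synthetic feature vectors
y = (X[:, 0] > 0).astype(int)     # toy binary labels
W, b, beta = fit_rlsfn(X, y)
preds = predict_rlsfn(X, W, b, beta)
print("train accuracy:", (preds == y).mean())
```

Only the output weights are learned; the random hidden layer stays fixed, which is what makes each training run stochastic and motivates the Monte Carlo averaging the abstract describes.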
Emotion estimation systems based on brain and physiological signals such as electroencephalography (EEG), blood-volume pressure (BVP), and galvanic skin response (GSR) have gained special attention in recent years due to the possibilities they offer. The field of human–robot interaction (HRI) could benefit from a broadened understanding of brain and physiological emotion encoding, together with the use of lightweight software and cheap wearable devices, and thus improve the capability of robots to fully engage with users' emotional reactions. In this paper, a previously developed methodology for real-time emotion estimation intended for use in the field of HRI is tested under realistic circumstances using a self-generated database created with dynamically evoked emotions. Other state-of-the-art, real-time approaches address emotion estimation using constant stimuli to facilitate the analysis of the evoked responses, remaining far from real scenarios, in which emotions are dynamically evoked. The proposed approach studies the feasibility of the previously developed emotion estimation methodology under an experimental paradigm that imitates a more realistic scenario, using a dramatic film to dynamically evoke emotions. The emotion estimation methodology has proved to perform under real-time constraints while maintaining high accuracy on emotion estimation when using the self-produced multi-signal database of dynamically evoked emotions.
Affective human-robot interaction requires lightweight software and cheap wearable devices in order to advance the field. However, estimating emotions in real time remains an optimization problem that has not yet been solved. An optimization is proposed for the emotion estimation methodology, covering artifact removal, feature extraction, feature smoothing, and brain pattern classification. This work addresses the challenge of filtering artifacts and extracting features while reducing processing time and maintaining high accuracy. First, two different approaches for real-time electro-oculographic artifact removal are tested and compared in terms of loss of information and processing time. Second, an emotion estimation methodology is proposed based on a set of stable and meaningful features, a carefully chosen set of electrodes, and the smoothing of the feature space. The methodology has proved to perform under real-time constraints while maintaining high accuracy on emotion estimation on the SEED database, under both subject-dependent and subject-independent paradigms, to test the methodology on a discrete emotional model with three affective states.
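The feature-smoothing step mentioned above can be illustrated with one common choice, a causal moving average over the feature time series; the abstract does not specify which smoothing method was used, so the window size and averaging scheme here are assumptions.

```python
# Hypothetical sketch of feature smoothing for real-time emotion estimation:
# a causal moving average over a sliding window (an assumed method; the
# original work may smooth the feature space differently).
import numpy as np

def smooth_features(feats, window=5):
    """Causal moving average along the time axis (rows = time steps)."""
    out = np.empty_like(feats, dtype=float)
    for t in range(len(feats)):
        out[t] = feats[max(0, t - window + 1):t + 1].mean(axis=0)
    return out

rng = np.random.default_rng(1)
stream = rng.normal(size=(50, 6))   # 50 time steps of 6 EEG-band features
smoothed = smooth_features(stream)
print(smoothed.shape)               # same shape as the input stream
```

Because the window is causal (it only looks backwards), the filter adds no look-ahead latency, which matters under real-time constraints.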
Visual neuroprostheses, which provide electrical stimulation at several sites along the human visual system, constitute a potential tool for vision restoration in the blind. Scientific and technological progress in the fields of neural engineering and artificial vision comes with new theories and tools that, along with the dawn of modern artificial intelligence, constitute a promising framework for the further development of neurotechnology. In the framework of the development of a Cortical Visual Neuroprosthesis for the blind (CORTIVIS), we now face the challenge of developing computationally powerful tools and flexible approaches that will allow us to provide some degree of functional vision to individuals who are profoundly blind. In this work, we propose a general neuroprosthesis framework composed of several task-oriented and visual encoding modules. We address the development and implementation of computational models of the firing rates of retinal ganglion cells, and design a tool, Neurolight, that allows these models to be interfaced with intracortical microelectrodes in order to create electrical stimulation patterns that can evoke useful perceptions. In addition, the developed framework allows the deployment of a diverse array of state-of-the-art deep-learning techniques for task-oriented and general image pre-processing, such as semantic segmentation and object detection, in our system's pipeline. To the best of our knowledge, this constitutes the first deep-learning-based system designed to directly interface with the visual brain through an intracortical microelectrode array. We implement the complete pipeline, from obtaining a video stream to developing and deploying task-oriented deep-learning models and predictive models of retinal ganglion cells' encoding of visual inputs, under the control of a neurostimulation device able to send electrical pulse trains to a microelectrode array implanted in the visual cortex.
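A firing-rate model of a retinal ganglion cell, as mentioned in the abstract, can take many forms; one standard family is the linear-nonlinear (LN) model, sketched below. The receptive field, nonlinearity, and gain here are illustrative assumptions, not the model actually used in CORTIVIS or Neurolight.

```python
# Hypothetical sketch of a linear-nonlinear (LN) firing-rate model of a
# retinal ganglion cell: linear filtering of an image patch followed by a
# rectifying nonlinearity. All parameters are toy values.
import numpy as np

rng = np.random.default_rng(3)

def ln_firing_rate(frame, receptive_field, gain=5.0):
    """Linear stage (filter) then a softplus nonlinearity; returns a rate in Hz."""
    drive = np.sum(frame * receptive_field)   # linear projection of the patch
    return gain * np.log1p(np.exp(drive))     # softplus keeps the rate positive

frame = rng.uniform(size=(8, 8))              # toy image patch
rf = rng.normal(size=(8, 8)) * 0.1            # toy receptive field
rate = ln_firing_rate(frame, rf)
print(f"predicted firing rate: {rate:.1f} Hz")
```

In a full pipeline, a rate like this would be converted into a pulse train delivered through the microelectrode array, but that mapping is device-specific and is not sketched here.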