Recently, brain-computer interface (BCI) research has extended to investigate its possible use in motor rehabilitation. Most of these investigations have focused on the upper body. Only a few studies have considered gait, because of the difficulty of recording EEG during gross movements. However, for stroke patients the rehabilitation of gait is of crucial importance. Therefore, this study investigates whether a BCI can be based on walking-related desynchronization features. Furthermore, the influence of the complexity of the walking movement on classification performance is investigated. Two BCI experiments were conducted in which healthy subjects performed a cued walking task, a more complex walking task (backward or adaptive walking), and imagination of the same tasks. EEG data recorded during these tasks were classified into walking and no-walking. The results from both experiments show that, despite the automaticity of walking and the recording difficulties, brain signals related to walking could be classified rapidly and reliably. Classification performance was higher for actual walking movements than for imagined walking movements. There was no significant increase in classification performance for either the backward or the adaptive walking task compared with the cued walking task. These results are promising for developing a BCI for the rehabilitation of gait.
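The walking/no-walking classification described above rests on event-related desynchronization (ERD): sensorimotor band power drops during movement relative to rest. The following is a minimal illustrative sketch of that idea on synthetic data, not the study's actual pipeline; the signal parameters, the mean-squared-amplitude power estimate, and the midpoint threshold classifier are all assumptions chosen for simplicity.

```python
import math
import random

random.seed(0)

def band_power(signal):
    # Mean squared amplitude as a crude band-power estimate
    # (a real pipeline would band-pass filter to mu/beta first).
    return sum(x * x for x in signal) / len(signal)

def make_trial(walking):
    # Synthetic 1 s trial at 256 Hz: walking trials have attenuated
    # 10 Hz (mu-band) amplitude, mimicking desynchronization.
    amp = 0.5 if walking else 1.0
    return [amp * math.sin(2 * math.pi * 10 * t / 256) + random.gauss(0, 0.1)
            for t in range(256)]

# "Train": estimate the class means of band power, threshold at the midpoint.
train = [(band_power(make_trial(w)), w) for w in [True, False] * 20]
mean_walk = sum(p for p, w in train if w) / 20
mean_rest = sum(p for p, w in train if not w) / 20
threshold = (mean_walk + mean_rest) / 2

def classify(trial):
    # Low mu-band power -> walking (ERD); high power -> rest.
    return band_power(trial) < threshold

test_trials = [(make_trial(w), w) for w in [True, False] * 10]
accuracy = sum(classify(t) == w for t, w in test_trials) / len(test_trials)
```

On this cleanly separated synthetic data the threshold classifier is near-perfect; real EEG during gait is far noisier, which is exactly the recording difficulty the abstract notes.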
Facial expressions are behavioural cues that represent an affective state. Because of this, they are an unobtrusive alternative to affective self-report. The perceptual identification of facial expressions can be performed automatically with technological assistance. Once the facial expressions have been identified, the interpretation is usually left to a field expert. However, facial expressions do not always represent the felt affect; they can also be a communication tool. Therefore, facial expression measurements are prone to the same biases as self-report. Hence, the automatic measurement of human affect should also make inferences about the nature of the facial expressions instead of describing facial movements only. We present two experiments designed to assess whether such automated inferential judgment could be advantageous. In particular, we investigated the differences between posed and spontaneous smiles. The aim of the first experiment was to elicit both types of expressions. In contrast to other studies, the temporal dynamics of the elicited posed expression were not constrained by the eliciting instruction. Electromyography (EMG) was used to automatically discriminate between them. Spontaneous smiles were found to differ from posed smiles in magnitude, onset time, and onset and offset speed, independently of the producer's ethnicity. Agreement between the expression type and EMG-based automatic detection reached 94% accuracy. Finally, measurements of the agreement between human video coders showed that although agreement on perceptual labels is fairly good, the agreement worsens for inferential labels. A second experiment confirmed that laypersons' accuracy at distinguishing posed from spontaneous smiles is poor. Therefore, the automatic identification of inferential labels would benefit affective assessment and further research on this topic.
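The discriminating features named above (magnitude, onset time, onset and offset speed) are temporal-dynamics measures of the smile. Below is a hedged sketch of how such features could be extracted from a rectified, smoothed EMG envelope sampled at a fixed frame rate; the 10%-of-peak onset/offset criterion and the function names are illustrative assumptions, not the paper's actual definitions.

```python
def smile_features(envelope, fps=100):
    """Extract temporal-dynamics features from an EMG envelope.

    envelope: list of non-negative amplitude values, one per frame.
    fps: frames per second (assumed sampling rate of the envelope).
    """
    peak = max(envelope)
    i_peak = envelope.index(peak)
    # Onset: first frame reaching 10% of peak amplitude (assumed criterion).
    i_on = next(i for i, v in enumerate(envelope) if v >= 0.1 * peak)
    # Offset: last frame still at or above 10% of peak.
    i_off = max(i for i, v in enumerate(envelope) if v >= 0.1 * peak)
    onset_time = i_on / fps                       # seconds from trial start
    onset_speed = (peak - envelope[i_on]) / max((i_peak - i_on) / fps, 1e-9)
    offset_speed = (peak - envelope[i_off]) / max((i_off - i_peak) / fps, 1e-9)
    return {"magnitude": peak, "onset_time": onset_time,
            "onset_speed": onset_speed, "offset_speed": offset_speed}

# Example: a symmetric triangular envelope rising for 1 s, falling for 1 s.
env = [i / 100 for i in range(101)] + [(100 - i) / 100 for i in range(1, 101)]
features = smile_features(env)
```

A classifier comparing posed against spontaneous smiles would then operate on these feature vectors; for the symmetric example above, onset and offset speed come out equal, whereas spontaneous smiles would typically show slower, smoother dynamics.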
Distal facial electromyography (EMG) can be used to detect smiles and frowns with reasonable accuracy. It capitalizes on volume conduction to detect relevant muscle activity even when the electrodes are not placed directly on the source muscle. The main advantage of this method is that it avoids occluding the face and obstructing the production of facial expressions while still allowing EMG measurement. However, measuring EMG distally means that the exact source of the facial movement is unknown. We propose a novel method to estimate specific facial Action Units (AUs) from distal facial EMG and Computer Vision (CV). This method is based on Independent Component Analysis (ICA), Non-Negative Matrix Factorization (NNMF), and sorting of the resulting components to determine which is the most likely to correspond to each CV-labeled AU. Performance on the detection of AU06 (orbicularis oculi) and AU12 (zygomaticus major) was estimated by calculating the agreement with human coders. Our proposed algorithm achieved an accuracy of 81% and a Cohen's kappa of 0.49 for AU06, and an accuracy of 82% and a Cohen's kappa of 0.53 for AU12. This demonstrates the potential of distal EMG to detect individual facial movements. Using this multimodal method, several AU synergies were identified. We quantified the co-occurrence and timing of AU06 and AU12 in posed and spontaneous smiles using the human-coded labels and, for comparison, using the continuous CV labels. The co-occurrence analysis was also performed on the EMG-based labels to uncover the relationship between muscle synergies and the kinematics of visible facial movement.
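The component-sorting step described above can be pictured as a matching problem: after ICA/NNMF has decomposed the distal EMG into component activations, the component whose activation time course best tracks the CV-derived AU intensity trace is assigned to that AU. The sketch below, which uses Pearson correlation as the (assumed) matching criterion and omits the decomposition itself, illustrates only that sorting step.

```python
import math
import statistics

def pearson(x, y):
    # Pearson correlation between two equal-length sequences.
    mx, my = statistics.fmean(x), statistics.fmean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def assign_component(components, cv_trace):
    """Return the index of the ICA/NNMF component whose activation
    correlates best with a CV-labeled AU intensity trace."""
    return max(range(len(components)),
               key=lambda i: pearson(components[i], cv_trace))

# Toy example: component 1 rises together with the CV trace, component 0
# just alternates, so component 1 should be assigned to this AU.
comp0 = [0, 1, 0, 1, 0, 1]
comp1 = [0, 0, 1, 1, 2, 2]
cv_au12 = [0.0, 0.1, 0.9, 1.1, 1.9, 2.1]
best = assign_component([comp0, comp1], cv_au12)
```

In practice one would compute this over whole recordings and possibly enforce a one-to-one assignment across AUs (e.g. with the Hungarian algorithm); the greedy per-AU argmax shown here is the simplest variant.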
Facial neuromuscular electrical stimulation (fNMES), which allows for the non-invasive and physiologically sound activation of facial muscles, has great potential for investigating fundamental questions in psychology and neuroscience, such as the role of proprioceptive facial feedback in emotion induction and emotion recognition, as well as for clinical applications, such as alleviating depression symptoms. However, despite its illustrious origins in the 19th-century work of Duchenne de Boulogne, the practical application of fNMES remains largely unknown to researchers in psychology and human physiology. In addition, published studies vary dramatically in their use and reporting of parameters such as stimulation frequency, amplitude, duration, and electrode size. Because fNMES parameters impact the comfort and safety of volunteers, as well as its physiological (and psychological) effects, it is of paramount importance to establish recommendations for good practice. Here, we provide an introduction to fNMES and a systematic review of the existing literature focusing on the stimulation parameters used, and we offer recommendations on how to safely and reliably deliver fNMES. In addition, we provide a free webpage that allows researchers to easily verify and compare the safety of fNMES parameters based on current density. As an example of a potential application, we focus on the use of fNMES for the investigation of the facial feedback hypothesis.
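The current-density check that the webpage performs reduces to a simple ratio: current density is stimulation amplitude divided by electrode contact area, so the same amplitude is far safer through a large electrode than a small one. The sketch below illustrates that arithmetic; the 2.0 mA/cm² limit is a hypothetical placeholder, not a clinical recommendation, and actual limits should be taken from the safety literature and the paper's webpage.

```python
def current_density(amplitude_ma, electrode_area_cm2):
    """Current density in mA/cm^2 for a given stimulation amplitude (mA)
    and electrode contact area (cm^2)."""
    return amplitude_ma / electrode_area_cm2

def within_limit(amplitude_ma, electrode_area_cm2, limit_ma_per_cm2=2.0):
    # limit_ma_per_cm2 is a hypothetical placeholder threshold.
    return current_density(amplitude_ma, electrode_area_cm2) <= limit_ma_per_cm2

# Example: 10 mA through a 2x2 cm electrode (4 cm^2) gives 2.5 mA/cm^2,
# exceeding the placeholder limit; halving the amplitude brings it within.
too_high = within_limit(10, 4)
acceptable = within_limit(5, 4)
```

This is why reporting electrode size alongside amplitude, as the review recommends, is essential: amplitude alone does not determine the dose delivered to the tissue.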