Abstract. Research activities in the field of human-computer interaction have increasingly addressed the integration of some form of emotional intelligence. Human emotions are expressed through different modalities such as speech, facial expressions, and hand or body gestures; the classification of human emotions should therefore be treated as a multimodal pattern recognition problem. The aim of our paper is to investigate multiple classifier systems that utilize audio and visual features to classify human emotional states. To this end, a variety of features has been derived: from the audio signal, the fundamental frequency, LPC and MFCC coefficients, and RASTA-PLP features have been used; in addition, two types of visual features have been computed, namely form and motion features of intermediate complexity. The numerical evaluation has been performed on the four emotional labels Arousal, Expectancy, Power, and Valence as defined in the AVEC data set. Multiple classifier systems are applied as classifier architectures; these have been shown to be accurate and robust against missing and noisy data.
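To make the described pipeline concrete, the following is a minimal sketch, assuming Python with librosa and scikit-learn (neither library is named in the abstract), of how audio features such as MFCCs can be extracted and how several classifiers can be combined by averaging their posterior estimates, a common fusion rule in multiple classifier systems. All function names, feature settings, and classifier choices are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch, not the authors' implementation: MFCC extraction with
# librosa (the paper also uses F0, LPC coefficients, and RASTA-PLP, which
# are omitted here) and late fusion of several classifiers by averaging
# their posterior estimates. Library choices, feature settings, and the
# fusion rule are illustrative assumptions.
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

def mfcc_features(path, n_mfcc=13):
    """Utterance-level MFCC statistics (mean and std per coefficient)."""
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

def fused_prediction(clf_feature_pairs):
    """Sum-rule fusion: average class posteriors over (classifier, X) pairs."""
    probs = np.mean([clf.predict_proba(X) for clf, X in clf_feature_pairs],
                    axis=0)
    return probs.argmax(axis=1)

# Hypothetical usage with audio and visual feature matrices and a binary
# label y (e.g. high/low Arousal):
#   audio_clf = RandomForestClassifier().fit(X_audio_train, y_train)
#   video_clf = SVC(probability=True).fit(X_video_train, y_train)
#   y_pred = fused_prediction([(audio_clf, X_audio_test),
#                              (video_clf, X_video_test)])
```

Averaging posteriors across modality-specific classifiers is one simple way a multiple classifier system can remain robust when one modality is missing or noisy: the affected classifier's estimate can be down-weighted or dropped without retraining the others.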
…essential roles in my academic journey while completing my Ph.D. dissertation. Their support, mentorship, and presence have been invaluable, and I am deeply appreciative. I must begin by acknowledging the support of my family: Edalat, Parivash, and Milad. Their belief in my aspirations and constant encouragement has strengthened me throughout this demanding journey. I am profoundly grateful to my distinguished Ph.D. committee, particularly Professor Friedhelm Schwenker, to whom I owe special thanks for his mentorship.
In our modern industrial society, the group of older adults (generation 65+) is constantly growing. Many members of this group are severely affected by health problems and suffer from disability and pain. Chronic illness and pain lower a patient's quality of life, so accurate pain assessment is needed to facilitate effective pain management and treatment. In the future, automatic pain monitoring may enable health care professionals to assess and manage pain in an increasingly objective way. To this end, the goal of our SenseEmotion project is to develop automatic pain- and emotion-recognition systems for the successful assessment and effective personalized management of pain, particularly for the generation 65+. In this paper, the recently created SenseEmotion Database for pain versus emotion recognition is presented. Data from 45 healthy subjects were collected for this database; for each subject, approximately 30 min of multimodal sensory data have been recorded. For a comprehensive understanding of pain and affect, three rather different modalities are included in this study: biopotentials, camera images of the facial region, and, for the first time, audio signals. Heat stimulation is applied to elicit pain, and affective image stimuli accompanied by sound stimuli are used to elicit emotional states.
Within the past decade, many computational approaches have been developed to estimate persons' gaze directions based on their facial appearance. Most researchers used common face datasets with only a limited representation of different head poses to train and verify their algorithms. Moreover, in most datasets, faces neither have a defined gaze direction nor incorporate different combinations of eye gaze and head pose. Therefore, we recorded an extended dataset of 20 subjects including faces in various combinations of head pose and eye gaze, leading to a total of 2220 colour images (111 per subject). The images were produced under controlled conditions, i.e. we used a technique to ensure that the subjects adjusted their head and eyes appropriately. Furthermore, all images are manually labelled with landmarks indicating important facial features. Finally, we evaluate the dataset with two computational methods, for head pose and gaze estimation respectively.
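Since the abstract mentions both manually labelled facial landmarks and head-pose estimation, one standard way to connect the two is perspective-n-point (PnP) pose estimation against a generic 3D face model; the sketch below assumes Python with OpenCV, and the paper's actual evaluation methods are not specified here. The landmark selection, 3D model coordinates, and camera approximation are all illustrative assumptions.

```python
# A minimal sketch of landmark-based head-pose estimation via PnP; not the
# paper's method. The six 2D landmarks, the generic 3D model points, and
# the uncalibrated-camera approximation are illustrative assumptions.
import numpy as np
import cv2

# Generic 3D reference points (in mm) for nose tip, chin, outer eye
# corners, and mouth corners of an average face model.
MODEL_POINTS = np.array([
    (0.0, 0.0, 0.0),           # nose tip
    (0.0, -330.0, -65.0),      # chin
    (-225.0, 170.0, -135.0),   # left eye, outer corner
    (225.0, 170.0, -135.0),    # right eye, outer corner
    (-150.0, -150.0, -125.0),  # left mouth corner
    (150.0, -150.0, -125.0),   # right mouth corner
], dtype=np.float64)

def estimate_head_pose(image_points, image_size):
    """Estimate head rotation/translation from six 2D landmarks.

    image_points: (6, 2) float64 array of pixel coordinates in the same
    order as MODEL_POINTS; image_size: (height, width) of the image.
    """
    h, w = image_size
    focal = w  # crude focal-length approximation for an uncalibrated camera
    camera_matrix = np.array([[focal, 0, w / 2],
                              [0, focal, h / 2],
                              [0, 0, 1]], dtype=np.float64)
    dist_coeffs = np.zeros((4, 1))  # assume negligible lens distortion
    ok, rvec, tvec = cv2.solvePnP(MODEL_POINTS, image_points,
                                  camera_matrix, dist_coeffs,
                                  flags=cv2.SOLVEPNP_ITERATIVE)
    # rvec is a Rodrigues rotation vector encoding the head orientation.
    return rvec, tvec
```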