This paper introduces the accepted IJCNN challenge One-Minute Gradual-Emotion Recognition (OMG-Emotion), through which we hope to foster long-term emotion classification using neural models for the benefit of the IJCNN community. The novelty of the proposed corpus lies in its data collection and annotation strategy, which is based on emotion expressions that evolve over time within a specific context. In contrast to other corpora, we propose a novel multimodal corpus for emotion expression recognition that uses gradual annotations with a focus on contextual emotion expressions. Our dataset was collected from YouTube videos using a specific search strategy based on restricted keywords and filtering, which guaranteed that the data follow a gradual emotion expression transition, i.e. emotion expressions evolve over time in a natural and continuous fashion. We also provide an experimental protocol and a series of unimodal baseline experiments that can be used to evaluate deep and recurrent neural models in a fair and standard manner.
Current Facial Expression Recognition (FER) approaches tend to be insensitive to individual differences in expression and interaction contexts. They are unable to adapt to the dynamics of real-world environments, where data is only available incrementally, acquired by the system during interactions. In this paper, we propose a novel continual learning framework with imagination for FER (CLIFER) that (i) implements imagination to simulate expression data for particular subjects and integrates it with (ii) a complementary-learning-based dual-memory (episodic and semantic) model, to augment person-specific learning. The framework is evaluated on its ability to remember previously seen classes as well as to generalise to yet unseen classes, resulting in high F1-scores on multiple FER datasets.
Real-world applications require affect perception models to be sensitive to individual differences in expression. As each user is different and expresses emotions differently, these models need to be personalised towards each individual to adequately capture their expressions and thus model their affective state. Despite high performance on benchmarks, current approaches fall short in such adaptation. In this dissertation, we propose the use of continual learning for affective computing as a paradigm for developing personalised affect perception.