2018
DOI: 10.1109/taffc.2016.2643661

Leveraging the Bayesian Filtering Paradigm for Vision-Based Facial Affective State Estimation

Abstract: Estimating a person's affective state from facial information is an essential capability for social interaction. Automating such a capability has therefore increasingly driven multidisciplinary research over the past decades. At the heart of this issue are very challenging signal processing and artificial intelligence problems driven by the inherent complexity of human affect. We therefore propose a principled framework for designing automated systems capable of continuously estimating the human affective state…
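
The abstract frames continuous affective state estimation as a Bayesian filtering problem. As a rough illustration only, and not the authors' implementation, the sketch below tracks a valence-arousal state from noisy per-frame facial measurements with a linear-Gaussian (Kalman) filter; the dynamics, observation model, and noise levels are assumed values.

```python
# Minimal sketch (assumed formulation, not the paper's code): a constant-velocity
# Kalman filter tracking a continuous valence-arousal state from noisy per-frame
# facial measurements, illustrating the Bayesian filtering paradigm.
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a linear-Gaussian Bayesian filter."""
    # Predict: propagate state and covariance through the dynamics model.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update: correct the prediction with the facial measurement z.
    S = H @ P_pred @ H.T + R              # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# State: [valence, arousal, d_valence, d_arousal]; constant-velocity dynamics.
dt = 1.0 / 25.0                                  # assumed 25 fps video
F = np.block([[np.eye(2), dt * np.eye(2)],
              [np.zeros((2, 2)), np.eye(2)]])
H = np.hstack([np.eye(2), np.zeros((2, 2))])     # only valence/arousal observed
Q = 1e-3 * np.eye(4)                             # process noise (assumed)
R = 5e-2 * np.eye(2)                             # measurement noise (assumed)

x, P = np.zeros(4), np.eye(4)
rng = np.random.default_rng(0)
for t in range(100):
    # Synthetic per-frame affect measurement standing in for a facial-feature regressor.
    z = np.array([np.sin(t * dt), np.cos(t * dt)]) + 0.2 * rng.standard_normal(2)
    x, P = kalman_step(x, P, z, F, H, Q, R)
print("filtered valence/arousal:", x[:2])
```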

Cited by 6 publications (15 citation statements)
References 48 publications (56 reference statements)

“…The aim of this section is to explore system design and maximize system performance for a robust ECP, based on empirical experience in dealing with the emotion prediction problem. One may be interested in using emotion changes to facilitate absolute emotion predictions, as in Huang and Epps (2017) and Oveneke et al. (2017), in which Kalman filtering was used. However, an important premise would be accurate predictions of emotion change, which is the only focus of this study.…”
Section: Emotion Change Prediction (ECP)
confidence: 99%
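
The excerpt above points to Kalman filtering as a way to let predicted emotion changes support absolute emotion predictions. The sketch below is one plain reading of that idea under my own assumptions, not the cited authors' code: the predicted change drives the state transition, and the noisier absolute prediction serves as the measurement.

```python
# Hedged sketch (assumed formulation): fold per-frame emotion-change predictions
# into absolute emotion estimates with a 1-D Kalman filter.
import numpy as np

def fuse_absolute_and_change(abs_preds, change_preds, var_change=0.02, var_abs=0.2):
    """Combine absolute predictions and predicted deltas into a smoothed trace."""
    x, P = abs_preds[0], 1.0              # initial state and variance
    fused = [x]
    for z_abs, dz in zip(abs_preds[1:], change_preds[1:]):
        # Predict: move the state by the predicted emotion change.
        x_pred = x + dz
        P_pred = P + var_change
        # Update: correct with the absolute prediction for this frame.
        K = P_pred / (P_pred + var_abs)
        x = x_pred + K * (z_abs - x_pred)
        P = (1.0 - K) * P_pred
        fused.append(x)
    return np.array(fused)

# Toy usage with synthetic arousal predictions.
rng = np.random.default_rng(0)
true_arousal = np.sin(np.linspace(0, 3, 100))
abs_preds = true_arousal + 0.3 * rng.standard_normal(100)
change_preds = np.diff(true_arousal, prepend=true_arousal[0]) + 0.02 * rng.standard_normal(100)
print(fuse_absolute_and_change(abs_preds, change_preds)[:5])
```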
“…A second concern was deriving delta emotion ground truth from the absolute ratings, as raised in Nicolaou et al. (2011). Although the delta emotion ground truth achieved acceptable inter-rater agreement, it would ideally be annotated in a relative manner to preserve more characteristics of emotion dynamics, as discussed in Nicolaou et al. (2011) and Oveneke et al. (2017). However, where annotations of emotion changes are not available, and re-annotating data in a relative way could be labor-intensive and time-consuming, deriving relative labels from the absolute ratings as we proposed herein could be a reasonable compromise.…”
Section: Limitations
confidence: 99%
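
The excerpt describes deriving relative (delta) emotion labels from absolute time-continuous ratings when relative annotations are unavailable. The sketch below shows one straightforward reading of that compromise via first-order differencing; the function name and smoothing window are assumptions, not the cited authors' procedure.

```python
# Minimal sketch (assumed procedure): derive relative "delta emotion" labels
# from an absolute time-continuous rating trace by first-order differencing.
import numpy as np

def delta_labels(absolute_ratings, smooth_window=5):
    """Return frame-to-frame emotion changes from an absolute rating trace."""
    r = np.asarray(absolute_ratings, dtype=float)
    if smooth_window > 1:
        # Light moving-average smoothing so the differences are not dominated
        # by annotation jitter.
        kernel = np.ones(smooth_window) / smooth_window
        r = np.convolve(r, kernel, mode="same")
    # First-order difference: positive = emotion rising, negative = falling.
    return np.diff(r, prepend=r[0])

# Example: a slowly rising arousal trace with annotator noise.
t = np.linspace(0, 10, 200)
arousal = 0.5 * np.tanh(t - 5) + 0.05 * np.random.default_rng(1).standard_normal(200)
print(delta_labels(arousal)[:5])
```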
“…Research efforts on data such as this [10,11,16] often try to "detect" what emotions subjects were feeling during these videos (are these "Fear" responses? "Nervous" responses?…”
Section: Design
confidence: 99%
“…There is much existing research in the fields of affective computing and psychology which aims to predict what people are thinking and feeling from their facial expressions and other physiological data [10,11,17]. Much of it is based on the highly problematic "Classical Theory of Emotions", in which emotions are believed to be essential, discrete reactions of our bodies to changes in our environment [4], and therefore should be detectable in individuals by a machine once it learns what that emotion looks like for a general population.…”
Section: Introduction
confidence: 99%
“…Their approaches to this end involve utilizing dynamic features [15], applying dynamic models [10], and analyzing dynamic patterns from classifiers [7], [8] or regressors [11]. In continuous emotion prediction, some recent work has derived emotion dynamic estimates from arousal and valence ratings using first-order differences [12], [15], [16].…”
Section: Introduction
confidence: 99%