2016 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
DOI: 10.1109/cvprw.2016.183
A Framework for Joint Estimation and Guided Annotation of Facial Action Unit Intensity

Abstract: Manual annotation of facial action units (AUs) is highly tedious and time-consuming. Various methods for automatic AU coding have been proposed; however, their performance is still far below that attained by expert human coders. Several attempts have been made to leverage these methods to reduce the burden of manually coding AU activations (presence/absence). Nevertheless, this has not been exploited in the context of AU intensity coding, which is a far more difficult task. To this end, we propose an e…

Cited by 4 publications (2 citation statements). References 21 publications.
“…In this study, we have explored the development and application of an emotion detection system using a convolutional neural network (CNN) approach. [14] Our investigation yielded promising results, demonstrating the efficacy of CNN architectures in accurately classifying seven basic emotions: anger, disgust, fear, happiness, sadness, surprise, and neutral. Through rigorous experimentation and validation, we have showcased the potential of deep learning techniques in capturing complex facial expressions and discerning subtle emotional cues.…”
Section: Discussion
confidence: 70%
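The cited study's CNN is not reproduced here. As a rough illustration of the pipeline such an emotion classifier follows (convolution, nonlinearity, pooling, then a softmax over the seven basic emotion classes), below is a minimal NumPy forward pass with random, untrained weights; the image size, filter count, and layer shapes are illustrative assumptions, not the study's architecture.

```python
import numpy as np

# The seven basic emotion classes named in the citing study.
EMOTIONS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise", "neutral"]

rng = np.random.default_rng(0)

def conv2d(img, kernels):
    """Valid 2-D convolution of a single-channel image with a bank of kernels."""
    kh, kw = kernels.shape[1:]
    h, w = img.shape
    out = np.empty((kernels.shape[0], h - kh + 1, w - kw + 1))
    for k, ker in enumerate(kernels):
        for i in range(h - kh + 1):
            for j in range(w - kw + 1):
                out[k, i, j] = np.sum(img[i:i + kh, j:j + kw] * ker)
    return out

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Random (untrained) parameters: 4 filters of 3x3, then a linear head to 7 classes.
kernels = rng.standard_normal((4, 3, 3))
face = rng.standard_normal((8, 8))           # stand-in for a cropped face image
feat = np.maximum(conv2d(face, kernels), 0)  # ReLU feature maps, shape (4, 6, 6)
pooled = feat.reshape(4, -1).mean(axis=1)    # global average pooling -> (4,)
W = rng.standard_normal((7, 4))
probs = softmax(W @ pooled)                  # class probabilities over 7 emotions

print(EMOTIONS[int(probs.argmax())])
```

With trained weights, the argmax of `probs` would be the predicted emotion; here the output is arbitrary since the parameters are random.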
“…To model relationships among multiple AUs, Sandbach et al. [40] proposed a tree-structured Markov Random Field model to capture relationships among intensities. Walecki et al. introduced copula functions to jointly estimate the intensities of multiple AUs in [49] and [48]. Kaltwang et al. [13] proposed a latent tree model and learned the tree structure from input features and labels.…”
Section: Related Work (A. Shallow Models for AU Intensity Estimation)
confidence: 99%
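The structured models above (MRFs, copulas, latent trees) capture dependencies among AU intensities that independent per-AU regressors miss. As a baseline sketch of what "joint estimation" means in its simplest form, here is a multi-output ridge regression on synthetic data, where one solve maps shared features to all AU intensities at once; the data, dimensions, and regularizer are illustrative assumptions and this is not any of the cited methods.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in data: 200 faces, 10-dim appearance features, 3 AUs.
n, d, n_aus = 200, 10, 3
X = rng.standard_normal((n, d))
W_true = rng.standard_normal((d, n_aus))
Y = X @ W_true + 0.1 * rng.standard_normal((n, n_aus))  # continuous intensity labels

# Joint ridge regression: a single closed-form solve produces the weight matrix
# for all AUs, so every output shares the same regularized feature covariance.
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)  # shape (d, n_aus)
pred = X @ W

mse = float(np.mean((pred - Y) ** 2))
print(pred.shape, round(mse, 4))
```

Unlike the copula and MRF models cited above, this baseline still treats the residuals of different AUs as independent; the structured approaches exist precisely to model the remaining cross-AU dependencies.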