2020
DOI: 10.1007/s11263-020-01304-3

Deep Neural Network Augmentation: Generating Faces for Affect Analysis

Abstract: This paper presents a novel approach for synthesizing facial affect, either in terms of the six basic expressions (i.e., anger, disgust, fear, joy, sadness and surprise), or in terms of valence (i.e., how positive or negative an emotion is) and arousal (i.e., the power of the emotion activation). The proposed approach accepts the following inputs: i) a neutral 2D image of a person; ii) a basic facial expression or a pair of valence-arousal (VA) emotional state descriptors to be generated, or a path of affect in the…


Cited by 88 publications (40 citation statements)
References 80 publications
“…The histogram of oriented gradients (HOG) and landmark displacement was used for the feature extraction phase. Kollias et al. [33] proposed a novel technique for synthesising facial expressions and the degree of positive/negative emotion. Based on the valence-arousal (VA) technique, 600K frames were annotated from the 4DFAB dataset [34].…”
Section: 3D Face Reconstruction Techniques (mentioning, confidence: 99%)
“…Commonly, attempts of training stronger facial expression recognition models are based on aspects irrelevant to annotations per se. Panda et al. propose training on large-scale data crawled from the internet [22], while Kollias et al. artificially generate facial images [11] to be included in the training set. Studying the temporal dynamics of emotions [1] or applying state-of-the-art deep learning architectures such as Transformers [15] is another line of research.…”
Section: B. Affect Recognition (mentioning, confidence: 99%)
“…Lindt et al. [25] report experiments using VGGFace, a variant of the VGG16 network pre-trained for face identification. Kollias et al. [40] proposed a novel training mechanism that augments the AffectNet training set using a generative adversarial network (GAN), and obtained the best reported accuracy on this corpus, achieving 0.54 CCC for arousal and 0.62 CCC for valence. Our FaceChannel provides improved performance compared to most of these results, achieving a CCC of 0.46 for arousal and 0.61 for valence.…”
Section: AffectNet (mentioning, confidence: 99%)
“…Our FaceChannel provides improved performance compared to most of these results, achieving a CCC of 0.46 for arousal and 0.61 for valence. Unlike the work of Kollias et al. [40], we train our model using only the available training set portion, and expect these results to improve when training on an augmented training set.…”
Section: AffectNet (mentioning, confidence: 99%)
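The CCC scores quoted in the statements above refer to the concordance correlation coefficient, the standard agreement measure for continuous valence/arousal prediction. As a minimal sketch (the function name `ccc` and the use of population statistics are illustrative choices, not taken from the cited papers), it can be computed as:

```python
import numpy as np

def ccc(predictions, labels):
    """Concordance correlation coefficient between two 1D sequences.

    Returns 1.0 for perfect agreement, 0.0 for no agreement,
    and negative values for inverse agreement.
    """
    x = np.asarray(predictions, dtype=float)
    y = np.asarray(labels, dtype=float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()            # population variances
    cov = ((x - mx) * (y - my)).mean()   # population covariance
    # CCC = 2*cov(x,y) / (var(x) + var(y) + (mean(x) - mean(y))^2)
    return 2.0 * cov / (vx + vy + (mx - my) ** 2)
```

Unlike Pearson correlation, CCC also penalizes shifts in mean and scale between predictions and labels, which is why it is preferred for valence/arousal regression benchmarks such as AffectNet.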