2020 15th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2020) 2020
DOI: 10.1109/fg47880.2020.00093
Affective Expression Analysis in-the-wild using Multi-Task Temporal Statistical Deep Learning Model

Cited by 15 publications (7 citation statements)
References 8 publications
“…On the one hand, self-reporting emotions and objectively measured emotions are essentially different emotional attributes. On the other hand, we can interpret such differences as an inability of objective assessment to recognize all subjective characteristics of the emotions expressed on faces [40]. However, the method of self-reporting is simple.…”
Section: The Differences Between the 2 Emotional Measurement Methods
Confidence: 99%
“…Then, we captured the facial movements of the participants with camera videos. To recognize facial expressions in the videos we captured, we used the face recognition model proposed by Do et al., 2020 [40]. After we input the videos to the model, the valence-arousal emotion values were output for the subsequent analysis.…”
Section: Physiological Measures
Confidence: 99%
“…Several multi-task learning models [7,9,26] effectively leveraged the availability of Aff-wild2 data jointly annotated with the labels of dimensional affect, categorical expressions, and AUs. A holistic multi-task, multi-domain network for facial emotion analysis named FaceBehavior-Net was developed on Aff-wild2 and validated in a crosscorpus setting in [19,20,23].…”
Section: Related Work
Confidence: 99%
“…The third ABAW Competition, to be held in conjunction with the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2022, is a continuation of the first [24] and second [32] ABAW Competitions, held in conjunction with the IEEE Conference on Face and Gesture Recognition (IEEE FG) 2020 and with the International Conference on Computer Vision (ICCV) 2021, respectively, which targeted dimensional (in terms of valence and arousal) [2-4, 8, 9, 11, 21, 35, 39, 47, 48, 50, 54-56], categorical (in terms of the basic expressions) [12, 15, 16, 33, 36, 37, 51] and facial action unit analysis and recognition [7, 19, 20, 25, 26, 40, 44, 47]. The third ABAW Competition contains four Challenges, which are based on the same in-the-wild database: (i) the uni-task Valence-Arousal Estimation Challenge; (ii) the uni-task Expression Classification Challenge (for the 6 basic expressions, plus the neutral state, plus the 'other' category that denotes expressions/affective states other than the 6 basic ones); (iii) the uni-task Action Unit Detection Challenge (for 12 action units); (iv) the Multi-Task Learning Challenge (for joint learning and prediction of valence-arousal, 8 expressions -- 6 basic plus neutral plus 'other' -- and 12 action units).…”
Section: Introduction
Confidence: 99%