2022
DOI: 10.48550/arxiv.2202.10659
Preprint
ABAW: Valence-Arousal Estimation, Expression Recognition, Action Unit Detection & Multi-Task Learning Challenges

Abstract: This paper describes the third Affective Behavior Analysis in-the-wild (ABAW) Competition, held in conjunction with the IEEE International Conference on Computer Vision and Pattern Recognition (CVPR), 2022. The 3rd ABAW Competition is a continuation of the Competitions held at ICCV 2021, IEEE FG 2020 and IEEE CVPR 2017, and aims at automatically analyzing affect. This year the Competition encompasses four Challenges: i) uni-task Valence-Arousal Estimation, ii) uni-task Expression Classification, iii) un…

Cited by 16 publications (29 citation statements)
References 43 publications
“…Baseline (ResNet50) [5]: CCC-Valence 0.31, CCC-Arousal 0.17, PVA 0.24; Ours: CCC-Valence 0.26, CCC-Arousal 0.19, PVA 0.225. …(CCC) [13]. Through evaluating the proposed approach on the validation set of the challenge data, we achieved CCC scores of 0.26 and 0.19 for valence and arousal, respectively.…”
Section: Methods | mentioning | confidence: 99%
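The quoted result is reported in terms of the concordance correlation coefficient (CCC), the standard metric for valence-arousal estimation in the ABAW challenges. A minimal sketch of how such a score could be computed (the function name and array inputs are illustrative, not taken from any cited paper):

```python
import numpy as np

def ccc(y_true, y_pred):
    """Concordance Correlation Coefficient between two 1-D arrays.

    CCC = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2)
    It equals 1 for perfect agreement and penalizes both scale and
    location differences, unlike plain Pearson correlation.
    """
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    mean_t, mean_p = y_true.mean(), y_pred.mean()
    cov = np.mean((y_true - mean_t) * (y_pred - mean_p))
    return 2.0 * cov / (y_true.var() + y_pred.var() + (mean_t - mean_p) ** 2)
```

A prediction identical to the ground truth scores 1.0; a constant offset lowers the score even though the correlation is perfect, which is why CCC is preferred for this task.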
“…Srivastava et al. [14] used facial action units and gaze vectors for recognizing affect in the wild. Motivated by these methods, we adopt a similar approach for the arousal and valence track of the 3rd Workshop and Competition on Affective Behavior Analysis in-the-wild (ABAW) [5][6][7][8][9][10][11][12]15]. We propose to use hand-crafted features from the cropped frames in the dataset to train a random forest regressor to predict valence and arousal.…”
Section: Introduction | mentioning | confidence: 99%
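The pipeline described in this statement, hand-crafted frame features feeding a random forest regressor that predicts valence and arousal jointly, can be sketched roughly as follows (the feature dimension and data here are placeholders, not the authors' actual features or dataset):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Placeholder stand-ins for hand-crafted features extracted from cropped
# frames (N frames x D features) and their valence/arousal labels in [-1, 1].
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 32))
y = rng.uniform(-1.0, 1.0, size=(200, 2))  # columns: valence, arousal

# A single multi-output random forest predicts both targets at once.
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X, y)
preds = model.predict(X[:5])  # shape (5, 2): one (valence, arousal) pair per frame
```

Because forest predictions are averages of training targets, the outputs stay within the label range, which suits bounded valence/arousal scales.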
“…We have further compared our fusion model with that of other valid submissions for the third ABAW challenge [12] on the test set, as shown in Table 3. The winner of the challenge [28] also uses A-V fusion and showed outstanding performance for both valence and arousal.…”
Section: Comparison to State-of-the-art | mentioning | confidence: 99%
“…Several approaches have been proposed for previous challenges in the framework of multi-task learning [14,15,17,18]. In continuation with the previous challenges, the third competition was held in conjunction with CVPR 2022 [12], with an exclusive challenge track for valence and arousal estimation.…”
Section: Introduction | mentioning | confidence: 99%
“…As one can notice, most successful previous solutions use MTL [10,24] to boost their performance. As a result, the authors of the third ABAW contest [7] decided to encourage researchers to study not only MTL but also uni-task models. The baseline uses the very large VGG16 convolutional neural network (CNN), pre-trained on the VGGFACE dataset, to make a decision in each task independently [7].…”
Section: Introduction | mentioning | confidence: 99%