2023
DOI: 10.1007/978-3-031-25075-0_3
Facial Affect Recognition Using Semi-supervised Learning with Adaptive Threshold

Cited by 1 publication (2 citation statements)
References 23 publications
“…As one can see, smoothing does not work for AU detection but can significantly improve the results for other tasks: up to 0.06 difference in F1-score for EXPR classification and up to 0.06 difference in mean CCC for VA prediction. Moreover, the smoothing works nicely even for blending the best models (Fig. 5).…”

The table quoted inside this statement is flattened in the source; the recoverable rows are reconstructed below (columns: CCC for VA, F1 for EXPR, F1 for AU, and their sum):

    Method                                   VA      EXPR    AU      Total
    SMMEmotionNet [23]                       0.3648  0.2617  0.4737  1.1002
    Two-Aspect Information Interaction [31]  0.515   0.207   0.385   1.107
    SS-MFAR [4]                              0.397   0.235   0.493   1.125
    EfficientNet-B2 [27]                     0.384   0.302   0.461   1.147
    MAE+ViT [20]                             0.4588  0.3028  0.5054  1.2671
    Cross-attentive module [23]              0.499   0.333   0.456   1.288
    MT-EmotiEffNet + OpenFace [29]           0.447   0.357   0.496   1.300
    MT-EmotiDDAMFN, frame-level              0.4847  0.3578  0.5194  1.3619
    + smoothing, tAU = 0.5                   0.5578  0.4168  0.5194  1.4939

Section: Multi-task Learning Challenge
confidence: 99%
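The smoothing the citing authors describe operates on frame-level model outputs. The paper's exact procedure is not reproduced here; the following is a minimal sketch assuming simple moving-average smoothing over a centred window of neighbouring frames (the function name and window size are illustrative, not taken from the paper):

```python
import numpy as np

def smooth_predictions(frame_preds: np.ndarray, window: int = 5) -> np.ndarray:
    """Moving-average smoothing of per-frame scores.

    frame_preds: (num_frames, num_outputs) array of frame-level model
    outputs (e.g. valence/arousal values or class probabilities).
    Returns an array of the same shape where each frame is replaced by
    the mean of a centred window of neighbouring frames; the window is
    truncated at the start and end of the sequence.
    """
    num_frames, _ = frame_preds.shape
    smoothed = np.empty_like(frame_preds, dtype=float)
    half = window // 2
    for t in range(num_frames):
        lo, hi = max(0, t - half), min(num_frames, t + half + 1)
        smoothed[t] = frame_preds[lo:hi].mean(axis=0)
    return smoothed
```

This matches the behaviour reported in the citation statement: smoothing a noisy continuous signal (VA) or class scores (EXPR) suppresses frame-to-frame jitter, while thresholded binary AU decisions (e.g. tAU = 0.5) are less affected.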
“…The two-aspect information interaction model [31] represents interactions between sign vehicles and messages. SS-MFAR [4] extracts facial features with a ResNet and applies an adaptive threshold for every facial-expression class; the thresholds are estimated via semi-supervised learning.…”

Section: Related Work
confidence: 99%
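The citation statement describes class-wise adaptive thresholds estimated during semi-supervised training. SS-MFAR's exact rule is not given here; this is a minimal, hypothetical sketch of per-class adaptive thresholding for pseudo-labelling in that spirit (the function names and the scaling rule are illustrative assumptions, not the paper's method):

```python
import numpy as np

def adaptive_thresholds(confidences, pseudo_labels, num_classes, base_tau=0.95):
    """Per-class confidence thresholds for pseudo-labelling.

    confidences:   max softmax probability per unlabelled sample
    pseudo_labels: argmax class per unlabelled sample
    Classes the model already predicts confidently keep the full base
    threshold; rarely-confident (harder) classes get a lower one, so
    more of their unlabelled samples are admitted as pseudo-labels.
    """
    counts = np.zeros(num_classes)
    for c, p in zip(pseudo_labels, confidences):
        if p >= base_tau:
            counts[int(c)] += 1
    learning_status = counts / max(counts.max(), 1.0)
    return base_tau * learning_status

def select_pseudo_labeled(confidences, pseudo_labels, taus):
    """Boolean mask of samples whose confidence clears their class threshold."""
    return confidences >= taus[pseudo_labels]
```

The design point is that a single global threshold starves under-represented expression classes of pseudo-labels; scaling the threshold per class trades label purity for coverage on the harder classes.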