2019
DOI: 10.1016/j.jvcir.2019.05.009
Two-level attention with two-stage multi-task learning for facial emotion recognition

Abstract: As one of the most powerful and natural signals for expressing emotional states [1], facial expressions account for about 55% of emotional information [2]. Owing to the influence of many factors, such as different subjects, races, illumination, and complex backgrounds, facial emotion analysis is undoubtedly a challenging task. Most previous studies [3,4,5] were based on data collected in laboratory-controlled environments, which avoids many of the factors mentioned above but is limited by the number of d…

Cited by 49 publications (22 citation statements). References 48 publications.
“…CCC is used to compare how well the predicted curve fits the ground truth, while RMSE is sensitive to outliers. Our CCC performance is ordinary but our RMSE performance is better, indicating that the generalization ability of our model exceeds that of Reference [17]. Barros et al. [31] use a neural model based on a conditional adversarial auto-encoder to perform continuous emotion estimation.…”
Section: Framework Performance Experimental Results Arementioning
confidence: 85%
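The excerpt above compares models on CCC (concordance correlation coefficient) and RMSE. As a minimal sketch of what those two metrics measure, the following computes both on hypothetical valence annotations; the sample arrays are illustrative, not data from the cited work:

```python
import numpy as np

def ccc(x, y):
    """Concordance correlation coefficient: rewards both correlation and
    agreement in scale/location, i.e. how well y tracks x along the 45-degree line."""
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    return 2 * cov / (x.var() + y.var() + (mx - my) ** 2)

def rmse(x, y):
    """Root-mean-square error: squares deviations, hence sensitive to outliers."""
    return np.sqrt(((x - y) ** 2).mean())

# Toy per-frame valence labels vs. predictions (hypothetical values)
true_vals = np.array([0.1, 0.4, 0.35, 0.8, 0.6])
pred_vals = np.array([0.15, 0.38, 0.4, 0.7, 0.55])
print(round(ccc(true_vals, pred_vals), 3), round(rmse(true_vals, pred_vals), 3))
```

A model can thus score well on RMSE (small average deviations) while its CCC remains ordinary if the predicted curve does not track the shape of the annotation curve closely, which is consistent with the trade-off the excerpt describes.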
“…Following Ref. [17], we use the entire neuronal layer of each model as the feature, rather than just two scalar values, and optimize their weights simultaneously, thereby making better use of the representational relations among the different models. We then predict the two values, valence and arousal.…”
Section: System Structurementioning
confidence: 99%
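The excerpt describes fusing whole feature layers from several models (rather than their two scalar outputs) and jointly optimizing a head that regresses valence and arousal. A minimal sketch of that idea, using random stand-in features and a closed-form least-squares fit in place of gradient training (all shapes and names are illustrative assumptions, not the cited system):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical penultimate-layer features from two pretrained models,
# one row per video frame
feat_a = rng.normal(size=(32, 128))   # model A: 128-dim features for 32 frames
feat_b = rng.normal(size=(32, 64))    # model B: 64-dim features

# Fuse the entire feature layers instead of each model's two scalar outputs
fused = np.concatenate([feat_a, feat_b], axis=1)       # shape (32, 192)

# A jointly optimized linear head predicting [valence, arousal] per frame;
# here least squares stands in for end-to-end weight optimization.
targets = rng.uniform(-1, 1, size=(32, 2))             # toy annotations
W, *_ = np.linalg.lstsq(fused, targets, rcond=None)
preds = fused @ W                                      # shape (32, 2)
```

Keeping the full layers lets the head weight individual feature dimensions across models, which is the "representational relations" benefit the excerpt points to.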
“…Ngo et al. [20] apply deep transfer learning using a squeeze-and-excitation network (SENet) model, SE-ResNet-50, pretrained on VGGFace2, the largest human-face dataset, and propose a new loss function named weighted-cluster loss. Similarly, Xiaohua et al. [21] propose a two-level attention network for facial expression recognition in static images: the first level extracts the positions of salient features, while the second level uses a bidirectional recurrent neural network to exploit the relations among features across all layers.…”
Section: Related Workmentioning
confidence: 99%
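The first attention level described above weights spatial positions of a CNN feature map by their salience. As a conceptual sketch of that position-attention step only (the scoring vector, feature-map shape, and single-image setting are illustrative assumptions; the second-level BiRNN over layer-wise features is omitted):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical CNN feature map for one image: 8x8 spatial grid, 32 channels,
# flattened to 64 positions x 32 features
fmap = rng.normal(size=(8, 8, 32)).reshape(-1, 32)

# First level: score each spatial position, then softmax so the weights
# form a distribution over "where the informative facial regions are"
w_pos = rng.normal(size=(32,))                     # toy learned scoring vector
scores = fmap @ w_pos
alpha = np.exp(scores - scores.max())
alpha /= alpha.sum()                               # attention over 64 positions

# Position-weighted pooling yields one attended feature vector per image
attended = (alpha[:, None] * fmap).sum(axis=0)     # shape (32,)
```

In the cited design, such attended features from multiple layers would then be fed to a bidirectional RNN so that relations among layers, not just positions, are modeled.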
“…The conventional facial recognition system with PCA is a simple face-recognition approach and data-compression method, but it is sensitive to lighting conditions [12]. LDA is one of the most commonly utilized projection techniques and is effective in mapping high-dimensional measurements into a low-dimensional space.…”
Section: Introductionmentioning
confidence: 99%