2020 IEEE International Conference on Multimedia and Expo (ICME)
DOI: 10.1109/icme46284.2020.9102833
Joint Facial Action Unit Intensity Prediction And Region Localisation

Cited by 4 publications (3 citation statements); References 43 publications.
“…Conventional methods [15], [14] mainly design handcrafted features as the input of a classifier for AU recognition. With the development of deep learning, methods [16], [12], [17] have raised the performance of AU analysis to a new height. Since AUs are dynamic processes, methods such as [12] employ LSTMs to model local temporal relations, but fail to extend to longer temporal ranges.…”
Section: Related Work
confidence: 99%
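The citing work above notes that methods such as [12] use an LSTM to model local temporal relations between frames. A minimal sketch of the idea, using a scalar toy LSTM cell in pure Python (all weights and the per-frame AU signal below are illustrative placeholders, not values from any cited method):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h, c, w):
    """One scalar LSTM step: gates modulate how much temporal context is kept."""
    i = sigmoid(w["wi"] * x + w["ui"] * h + w["bi"])   # input gate
    f = sigmoid(w["wf"] * x + w["uf"] * h + w["bf"])   # forget gate
    o = sigmoid(w["wo"] * x + w["uo"] * h + w["bo"])   # output gate
    g = math.tanh(w["wg"] * x + w["ug"] * h + w["bg"])  # candidate state
    c = f * c + i * g          # cell state carries local temporal memory
    h = o * math.tanh(c)       # hidden state summarises recent frames
    return h, c

# Toy shared weights; a real model learns vector-valued parameters per gate.
weights = {k: 0.5 for k in
           ["wi", "ui", "bi", "wf", "uf", "bf", "wo", "uo", "bo", "wg", "ug", "bg"]}

# A short per-frame AU activation signal (hypothetical values).
h, c = 0.0, 0.0
for x in [0.1, 0.4, 0.9, 0.7]:
    h, c = lstm_step(x, h, c, weights)

intensity = h  # the hidden state could feed a per-frame intensity regressor
```

Because the cell state is updated recurrently, context decays over many steps, which is the "longer temporal range" limitation the citation statement points at.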
“…Therefore, a one-way ANOVA was conducted to compare the effect of face coverings on the achieved categorization accuracy for the visor, sunglasses, fully covering, and partial mask conditions. We found that although the proposed method achieved better accuracy than end-to-end deep networks and other attention-based methods, face coverings still have a significant effect on the accuracy of the method at α < .05 for the four covering conditions, F(3, 116). Hence, the use of fully transparent visors is beneficial if those are considered protective. They will not only provide protection against infectious disease but also help to improve the interaction between artificial systems and humans, as people prefer to interact with social robots that can identify emotion [42, 284] rather than with robots that cannot.…”
Section: Attention-based Methods
confidence: 87%
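The one-way ANOVA reported above compares four covering conditions with degrees of freedom (3, 116). A minimal pure-Python sketch of how that F statistic is computed; the accuracy samples below are synthetic placeholders (30 per condition, giving df = 3 and 116), not the study's measurements:

```python
def one_way_anova_f(groups):
    """Return (F, df_between, df_within) for a list of sample groups."""
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n_total
    # Between-group sum of squares: group size times squared mean offset.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    # Within-group sum of squares: squared deviations from each group mean.
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                    for g in groups)
    df_between, df_within = k - 1, n_total - k
    f_stat = (ss_between / df_between) / (ss_within / df_within)
    return f_stat, df_between, df_within

# Four hypothetical conditions: visor, sunglasses, fully covering, partial mask.
groups = [
    [0.80 + 0.001 * i for i in range(30)],
    [0.75 + 0.001 * i for i in range(30)],
    [0.60 + 0.001 * i for i in range(30)],
    [0.70 + 0.001 * i for i in range(30)],
]
f_stat, df1, df2 = one_way_anova_f(groups)
print(df1, df2)  # degrees of freedom: 3 and 116, matching F(3, 116)
```

With 4 groups of 30 samples, df_between = 4 − 1 = 3 and df_within = 120 − 4 = 116, which is where the reported F(3, 116) shape comes from.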
“…Much research has been carried out to categorize emotion from images and video databases [12, 116, 440, 338] in recent years. However, the task…”
Section: Emotion Categorization From Image and Video-frame Databases
confidence: 99%