2021
DOI: 10.48550/arxiv.2111.07224
Preprint

Local Multi-Head Channel Self-Attention for Facial Expression Recognition

Abstract: Since the Transformer architecture was introduced in 2017, there have been many attempts to bring the self-attention paradigm into the field of computer vision. In this paper we propose LHC: Local (multi-)Head Channel (self-attention), a novel self-attention module that is specifically designed for computer vision and can be easily integrated into virtually every convolutional neural network. LHC is based on two main ideas: first, we think that in computer vision the best way to leverage the self-attention p…
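The abstract describes channel self-attention with local heads: channels (rather than spatial positions) act as the attention tokens, and splitting the channels into per-head groups keeps each attention matrix small. The sketch below is an illustrative reading of that idea, not the paper's implementation; the random projections stand in for the learned mappings a real model would use, and all shapes and head counts are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def local_channel_self_attention(feat, num_heads=4, seed=0):
    """Channel self-attention with local heads: the C channels are split
    into num_heads groups and each head attends only within its own group,
    so every attention matrix is (C/h, C/h) instead of (C, C)."""
    C, H, W = feat.shape
    assert C % num_heads == 0
    x = feat.reshape(C, H * W)              # each channel becomes one token
    d = H * W
    rng = np.random.default_rng(seed)
    # illustrative random projections (learned mappings in a real model)
    Wq, Wk, Wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
    out = np.empty_like(x)
    g = C // num_heads                      # channels per local head
    for h in range(num_heads):
        xs = x[h * g:(h + 1) * g]           # this head's channel group
        q, k, v = xs @ Wq, xs @ Wk, xs @ Wv
        attn = softmax(q @ k.T / np.sqrt(d), axis=-1)   # (g, g) matrix
        out[h * g:(h + 1) * g] = attn @ v
    return out.reshape(C, H, W)

feat = np.random.default_rng(1).standard_normal((8, 4, 4))
y = local_channel_self_attention(feat, num_heads=4)
print(y.shape)  # (8, 4, 4)
```

The output keeps the input's shape, which is what lets a module like this be dropped into an existing CNN between convolutional stages.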

Cited by 2 publications (2 citation statements)
References: 23 publications
“…The author obtains the best accuracy of 76.40% on RAF-DB with the Tokens-to-Tokens ViT model [23] and 77.33% on the balanced FER2013 dataset with Mobile ViT [24]. The author in [25] used Local Head Channel (LHC) self-attention in combination with a pre-trained ResNet34v2, trained on FER2013, as a backbone. The idea behind using local heads is that they work at a much lower dimension and are efficient.…”
Section: Related Work
confidence: 99%
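The efficiency claim about local heads can be made concrete with a quick cost count: a single global channel-attention matrix over C channels costs C², while h local heads each attend over only C/h channels. The figures below (C = 512, 8 heads) are illustrative choices, not numbers from the paper.

```python
# Assumed illustration: attention-matrix entries for C channels.
C, heads = 512, 8
full = C * C                       # one global (C, C) channel-attention matrix
local = heads * (C // heads) ** 2  # h matrices, each (C/h, C/h)
print(full, local, full // local)  # 262144 32768 8
```

In general the saving is a factor of h, which is why attending within small local channel groups scales better than one global channel-attention map.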
“…
Model                Accuracy rate (%)
CNN [38]             72.16
ResNet [39]          72.4
VGGNet [40]          73.28
DeepEmotion [41]     70.02
SVM [42]             71.16
SE-Net50 [43]        72.7
LHC-Net [44]         74.42
ResNet18             64.8
The proposed model   72.7
…”
Section: Model
confidence: 99%