2023
DOI: 10.3390/e25030460

Dual-ATME: Dual-Branch Attention Network for Micro-Expression Recognition

Abstract: Micro-expression recognition (MER) is challenging due to the difficulty of capturing the instantaneous and subtle motion changes of micro-expressions (MEs). Early works based on hand-crafted features extracted from prior knowledge showed some promising results, but have recently been replaced by deep learning methods based on the attention mechanism. However, with limited ME sample sizes, features extracted by these methods lack discriminative ME representations, resulting in yet-to-be-improved MER performance. This pap…

Cited by 11 publications (3 citation statements); References: 56 publications
“…The framework of the proposed MER model is illustrated in Figure 1. Specifically, this scheme adopts a dual-branch framework [28], where one branch sends the face image to a Swin Transformer after optical flow processing to extract the temporal-spatial information of ME, and the other branch sends the apex frame image to MobileViT to acquire the local-global information. More importantly, the multiple-mode features from the two branches interact through the CAB module for adaptive learning fusion.…”
Section: Network Architecture (mentioning)
confidence: 99%
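To make the dual-branch scheme described in this citing work concrete, below is a minimal PyTorch-style sketch, not the cited implementation: the Swin Transformer and MobileViT backbones are passed in as placeholder modules, and CrossAttentionBlock is an assumed stand-in for the CAB fusion module with hypothetical dimensions and pooling choices.

```python
import torch
import torch.nn as nn

class CrossAttentionBlock(nn.Module):
    """Hypothetical stand-in for the CAB module: each branch's tokens
    attend to the other branch before the pooled results are fused."""
    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.attn_a = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.attn_b = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.fuse = nn.Linear(2 * dim, dim)

    def forward(self, feat_a, feat_b):
        # feat_a, feat_b: (batch, tokens, dim) token sequences from each branch
        a2b, _ = self.attn_a(feat_a, feat_b, feat_b)   # branch A attends to branch B
        b2a, _ = self.attn_b(feat_b, feat_a, feat_a)   # branch B attends to branch A
        fused = torch.cat([a2b.mean(dim=1), b2a.mean(dim=1)], dim=-1)
        return self.fuse(fused)                        # (batch, dim) fused descriptor

class DualBranchMER(nn.Module):
    """Skeleton of the dual-branch scheme: an optical-flow branch and an
    apex-frame branch, fused by cross-attention, then classified."""
    def __init__(self, flow_backbone: nn.Module, apex_backbone: nn.Module,
                 dim: int = 256, num_classes: int = 3):
        super().__init__()
        self.flow_backbone = flow_backbone   # e.g. a Swin Transformer on optical flow
        self.apex_backbone = apex_backbone   # e.g. MobileViT on the apex frame
        self.cab = CrossAttentionBlock(dim)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, flow, apex):
        f_flow = self.flow_backbone(flow)    # (batch, tokens, dim)
        f_apex = self.apex_backbone(apex)    # (batch, tokens, dim)
        return self.head(self.cab(f_flow, f_apex))
```

In this sketch each branch yields a token sequence, the two sequences cross-attend, and the pooled outputs are concatenated and projected before classification; the actual CAB module may fuse the features differently.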
“…In addition to handling discriminative representations of micro-expression sequences from a dynamic feature perspective, deep learning methods based on attention mechanisms have also demonstrated success [31–34]. For instance, Zhou et al. [32] proposed the dual-branch attention network (dual-ATME), comprising hand-crafted attention region selection (HARS) and automated attention region selection (AARS). The HARS manually extracted features from the ROI using prior knowledge, while AARS automatically extracted hidden information from the sequence based on attention mechanisms.…”
Section: Introduction (mentioning)
confidence: 99%
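As a rough illustration of the two-branch idea attributed to Dual-ATME, the following sketch is assumed rather than taken from the authors' code: roi_encoder, face_encoder, and the AARS spatial-attention mask are hypothetical placeholders showing how a hand-crafted ROI branch (HARS) and an attention-based branch (AARS) could be combined.

```python
import torch
import torch.nn as nn

class AARS(nn.Module):
    """Automated attention region selection (sketch): a learned spatial
    attention mask re-weights the feature map before pooling."""
    def __init__(self, channels: int):
        super().__init__()
        self.mask = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=1),
            nn.Sigmoid(),                            # per-location weights in [0, 1]
        )
        self.pool = nn.AdaptiveAvgPool2d(1)

    def forward(self, fmap):
        # fmap: (batch, channels, H, W) feature map of the full face
        weighted = fmap * self.mask(fmap)            # emphasise salient regions
        return self.pool(weighted).flatten(1)        # (batch, channels)

class DualATMESketch(nn.Module):
    """Two-branch sketch of the Dual-ATME idea: HARS encodes pre-cropped
    ROIs chosen from prior knowledge, AARS attends over the whole face,
    and their features are concatenated for classification."""
    def __init__(self, roi_encoder: nn.Module, face_encoder: nn.Module,
                 channels: int = 128, num_classes: int = 3):
        super().__init__()
        self.roi_encoder = roi_encoder     # HARS branch: encodes hand-picked ROI crops
        self.face_encoder = face_encoder   # backbone feeding the AARS branch
        self.aars = AARS(channels)
        self.roi_pool = nn.AdaptiveAvgPool2d(1)
        self.head = nn.Linear(2 * channels, num_classes)

    def forward(self, roi_crops, face):
        h = self.roi_pool(self.roi_encoder(roi_crops)).flatten(1)   # (batch, channels)
        a = self.aars(self.face_encoder(face))                      # (batch, channels)
        return self.head(torch.cat([h, a], dim=-1))
```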