2023
DOI: 10.3389/fnins.2023.1216181
Multi-scale fusion visual attention network for facial micro-expression recognition

Abstract: Introduction: Micro-expressions are subtle facial muscle movements that betray genuine, concealed emotions. To address the low intensity of micro-expressions, recent studies have attempted to localize areas of facial muscle movement. However, this approach ignores the feature redundancy caused by inaccurate localization of the regions of interest. Methods: This paper proposes a novel multi-scale fusion visual attention network (MFVAN), which learns multi-scale local attention weights to mask regions of redundant features.…

Cited by 2 publications (4 citation statements); References 61 publications.
“…While the MCNet method was less effective on the SMIC dataset than the optimal MFVAN method [33], it achieved significantly higher recognition scores than the MFVAN method on the CASME II dataset. Additionally, on the CASME dataset, the SVM+LCBP-STGCN method [36] outperformed the proposed method in accuracy and F1-score by 10.99% and 0.07, respectively.…”
Section: Results and Analysis
confidence: 91%
“…Specifically, the prediction accuracy reached 65.63% and the F1-score 0.65 on the SMIC dataset, a 1.03% improvement over the 64.60% accuracy achieved by the latest dual-ATME method [33]. On the CASME dataset, the prediction accuracy was 70.27% and the F1-score 0.70, an improvement of 0.68% in accuracy over the Meta-MMFNet method [24] and of 0.10 in F1-score over the LGCconD method [47]. Moreover, on the CASME II dataset, the prediction accuracy reached 81.63% and the F1-score 0.81, an accuracy improvement of 0.68% over Meta-MMFNet.…”
Section: Results and Analysis
confidence: 97%
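The 1.03% SMIC improvement quoted above follows directly from the two stated accuracies; a minimal arithmetic check (variable names are ours, not from the cited paper):

```python
# Accuracies (%) quoted in the citation statement above.
mcnet_smic_acc = 65.63     # proposed method on SMIC
dual_atme_acc = 64.60      # dual-ATME baseline on SMIC

# Improvement, rounded to two decimals as in the quote.
improvement = round(mcnet_smic_acc - dual_atme_acc, 2)
print(improvement)  # 1.03
```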
“…The prototype network is a few-shot learning method. Few-shot learning allows a model to learn a new task or category from only a few samples [21][22][23]. Prototype networks classify or compare between categories by learning to compute a prototype for each category.…”
Section: B. Micro-expression Recognition Based on FAU-GCN Prototype Ne...
confidence: 99%
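The prototype-per-category idea described in the citation above can be sketched in a few lines: each class prototype is the mean of that class's support embeddings, and a query is assigned to the nearest prototype. This is a generic prototypical-network sketch under our own assumptions (NumPy arrays, Euclidean distance), not the cited FAU-GCN implementation:

```python
import numpy as np

def prototypes(support_emb, support_lbl, n_classes):
    """One prototype per class: the mean of that class's support embeddings."""
    return np.stack([support_emb[support_lbl == c].mean(axis=0)
                     for c in range(n_classes)])

def classify(query_emb, protos):
    """Assign each query embedding to the class of its nearest prototype."""
    # Pairwise Euclidean distances: (n_queries, n_classes)
    dists = np.linalg.norm(query_emb[:, None, :] - protos[None, :, :], axis=-1)
    return dists.argmin(axis=1)

# Toy usage: two well-separated classes with two support samples each.
support = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.0, 5.1]])
labels = np.array([0, 0, 1, 1])
protos = prototypes(support, labels, n_classes=2)
print(classify(np.array([[0.05, 0.05], [4.9, 5.0]]), protos))  # [0 1]
```

In a real few-shot setup, the embeddings would come from a trained feature extractor; only the prototype computation and nearest-prototype rule are shown here.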