2023
DOI: 10.3390/math11071694

LCAM: Low-Complexity Attention Module for Lightweight Face Recognition Networks

Abstract: Inspired by the human visual system's ability to concentrate on the important regions of a scene, attention modules recalibrate the weights of either the channel features alone or together with spatial features to prioritize informative regions while suppressing unimportant information. However, the floating-point operations (FLOPs) and parameter counts increase considerably when these modules are incorporated into a baseline model, especially for modules with both channel and spatial attention. Despite the success of atte…
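
As a concrete illustration of the channel-recalibration idea the abstract describes, here is a minimal sketch in the style of a squeeze-and-excitation block. It is a generic example, assuming a standard pooling-plus-MLP design; the module name, reduction ratio, and layer choices are assumptions, and this is not the paper's LCAM.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Generic channel-attention sketch (SE-style), not the paper's LCAM."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # squeeze: global spatial context per channel
        self.fc = nn.Sequential(             # excitation: per-channel gating weights
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w  # recalibrate: scale each channel by its learned weight
```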

Cited by 1 publication (3 citation statements)
References 44 publications
“…Table 2 displays the superior performance of our model. Our proposed method improves accuracy by 1.95% over IEFP [18] (i.e., 95.82% → 97.77%) and 4.07% over LCAM [19] (i.e., 93.70% → 97.77%), demonstrating its effectiveness. Based on experimental results, it can be observed that while the Low-Complexity Attention Module (LCAM) approach has a slight advantage over past modules that employed channel and spatial attention, our proposed hybrid channel-spatial mechanism offers higher flexibility.…”
Section: Experiments on AgeDB-30 Dataset
confidence: 72%
“…This method helps to remove age information from facial features and obtain a purer representation of the same. Another method, called Low-Complexity Attention Module (LCAM) [19], uses three parallel branches of the attention mechanism with only one convolutional operation in each branch. It is a lightweight method and shows better performance in face recognition tasks.…”
Section: General AIFR Methods
confidence: 99%
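
The citation statement above specifies only that LCAM uses three parallel attention branches with a single convolutional operation in each. The sketch below shows one plausible shape for such a module; the branch roles (one channel branch, two spatial branches), kernel sizes, sigmoid gating, and multiplicative fusion are all illustrative assumptions, not the published LCAM architecture.

```python
import torch
import torch.nn as nn

class ThreeBranchAttention(nn.Module):
    """Hypothetical three-branch attention block, one convolution per branch."""

    def __init__(self, channels: int):
        super().__init__()
        # Branch 1 (assumed channel branch): global pooling + 1x1 convolution.
        self.ch_pool = nn.AdaptiveAvgPool2d(1)
        self.ch_conv = nn.Conv2d(channels, channels, kernel_size=1)
        # Branches 2 and 3 (assumed spatial branches): pooled channel maps,
        # each followed by a single 7x7 convolution.
        self.sp_conv_avg = nn.Conv2d(1, 1, kernel_size=7, padding=3)
        self.sp_conv_max = nn.Conv2d(1, 1, kernel_size=7, padding=3)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        ch = torch.sigmoid(self.ch_conv(self.ch_pool(x)))               # B x C x 1 x 1
        avg = torch.sigmoid(self.sp_conv_avg(x.mean(1, keepdim=True)))  # B x 1 x H x W
        mx = torch.sigmoid(self.sp_conv_max(x.amax(1, keepdim=True)))   # B x 1 x H x W
        return x * ch * avg * mx  # fuse all three attention maps multiplicatively
```

Each branch contains exactly one convolution, which is consistent with the low-FLOP, low-parameter motivation stated in the abstract; how the real LCAM composes and fuses its branches is not specified in this record.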