2022
DOI: 10.1007/978-3-031-19806-9_26

Improving Fine-Grained Visual Recognition in Low Data Regimes via Self-boosting Attention Mechanism

Cited by 11 publications (4 citation statements)
References 23 publications
“…2) Ambiguous decision boundary among AD subtypes: the decision boundary between subjects at different AD stages is more ambiguous than in natural-image classification problems. For example, according to the ADNI differentiation criterion [72], the Mini-Mental State Examination (MMSE) score ranges of NC, MCI, and AD are 24–30, 24–30, and 20–26, respectively, so the ranges of different stages clearly overlap, as shown in Fig. A1.…”
Section: Discussion
confidence: 99%
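The overlap noted above can be checked with a few lines of Python. This is a toy illustration, not part of the cited work: it encodes the MMSE ranges for NC, MCI, and AD (24–30, 24–30, and 20–26 per the ADNI criterion cited there) and computes their intersections, showing that the score alone cannot draw a clean decision boundary.

```python
# MMSE score ranges per diagnostic stage, per the cited ADNI criterion.
# range() upper bounds are exclusive, so range(24, 31) covers 24..30.
ranges = {"NC": range(24, 31), "MCI": range(24, 31), "AD": range(20, 27)}

def overlap(a, b):
    """Return the sorted list of MMSE scores shared by two stages."""
    return sorted(set(ranges[a]) & set(ranges[b]))

print(overlap("NC", "MCI"))  # NC and MCI share the full 24..30 range
print(overlap("MCI", "AD"))  # MCI and AD share 24..26
```

Since every NC score is also a valid MCI score, and 24–26 is valid for all three stages, an MMSE threshold alone is an ambiguous classifier — which is the difficulty the statement raises.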
“…Due to the small inter-class variations between fine-grained categories and the large intra-class variations among individual instances [24], numerous researchers have focused their attention on the field of fine-grained classification [25], [26], [27], [28]. However, existing state-of-the-art (SOTA) fine-grained classification methods mainly rely on fine-grained labeled data to mine discriminative information; they can be roughly divided into two general categories [24]: recognition by localization-classification subnetworks [25], [26], [27], [28], and recognition by end-to-end feature encoding [29].…”
Section: B. Fine-Grained Classification
confidence: 99%
“…It allows neural networks to concentrate on the essential parts of the information being processed while neglecting insignificant or irrelevant parts. To date, the attention mechanism has been widely applied across deep-learning domains such as natural language processing, computer vision, and speech recognition, including classic structures like SE-Net [28], ECA-Net [32], CBAM [33], and SAM [34]. Xu et al. [35] were the first to employ the attention mechanism for computer vision problems, in 2015, laying a solid foundation for the mechanism's development within computer vision.…”
Section: Attention Mechanism
confidence: 99%
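The channel-attention idea behind SE-Net mentioned in the statement above can be sketched in a few lines. This is a pure-Python toy, assuming a single scalar gate weight in place of SE-Net's two fully connected layers, and is not any cited author's implementation: each channel is "squeezed" to one descriptor by global average pooling, "excited" into a (0, 1) importance score, and then rescaled, so informative channels are emphasized and weak ones suppressed.

```python
import math

def sigmoid(x):
    """Logistic gate mapping a descriptor to a (0, 1) importance score."""
    return 1.0 / (1.0 + math.exp(-x))

def se_attention(feature_maps, gate_weight=1.0):
    """feature_maps: list of channels, each a flat list of activations.

    Toy squeeze-and-excitation step: pool, gate, and rescale per channel.
    """
    # Squeeze: one descriptor per channel via global average pooling.
    descriptors = [sum(ch) / len(ch) for ch in feature_maps]
    # Excitation: per-channel importance scores (a real SE block uses
    # two FC layers here; a single learned scalar stands in for them).
    scores = [sigmoid(gate_weight * d) for d in descriptors]
    # Reweight: scale each channel by its score.
    return [[a * s for a in ch] for ch, s in zip(feature_maps, scores)]

channels = [[4.0, 2.0], [-3.0, -1.0]]  # one strong, one weak channel
reweighted = se_attention(channels)
```

With these inputs the strong channel's score is sigmoid(3.0) ≈ 0.95, so it passes through nearly unchanged, while the weak channel's score is sigmoid(-2.0) ≈ 0.12, so it is heavily attenuated — the "concentrate on essential parts, neglect irrelevant parts" behavior the statement describes.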