2020
DOI: 10.48550/arxiv.2011.14479
Preprint
Multi-scale Adaptive Task Attention Network for Few-Shot Learning

Abstract: The goal of few-shot learning is to classify unseen categories with few labeled samples. Recently, metric-learning based methods built on low-level information have achieved satisfying performance, since local representations (LRs) are more consistent between seen and unseen classes. However, most of these methods deal with each category in the support set independently, which is not sufficient to measure the relation between features, especially within a given task. Moreover, the low-level information-based metric l…
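To make the few-shot setting in the abstract concrete, here is a minimal sketch of how an N-way K-shot episode is typically sampled: N unseen classes are drawn, K labeled support images per class form the "few labeled samples", and the remaining query images must be classified. The `sample_episode` helper and the dict-of-lists dataset layout are illustrative assumptions, not the paper's code.

```python
import random

def sample_episode(dataset, n_way=5, k_shot=1, q_query=15):
    """Sample one N-way K-shot episode from a dict {class_name: [images]}.

    Returns (support, query): lists of (image, label) pairs, with labels
    re-indexed to 0..n_way-1 for this episode only.
    """
    classes = random.sample(sorted(dataset.keys()), n_way)
    support, query = [], []
    for label, cls in enumerate(classes):
        images = random.sample(dataset[cls], k_shot + q_query)
        support += [(img, label) for img in images[:k_shot]]
        query += [(img, label) for img in images[k_shot:]]
    return support, query
```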

Cited by 4 publications (8 citation statements) · References 21 publications
“…The few-shot learning methods can be roughly classified into two categories: meta-learning based methods [26,7,31] and metric-learning based methods [30,32,20,29,3]. Metric-based few-shot learning methods have achieved remarkable success due to their fewer parameters and effectiveness.…”
Section: Input Image
confidence: 99%
“…However, due to the scarcity of data, it is not sufficient to measure the relation at the image-level [30,32]. Recently, CovaMNet [21], DN4 [20] and MATANet [3] introduce local representations (LRs) into few-shot learning and utilize these LRs to represent the image features, which can achieve better recognition results.…”
Section: Input Image
confidence: 99%
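The citation statement above points to local-representation-based metrics such as DN4, which scores a query image against each class by matching the query's local descriptors to their nearest neighbors among that class's support descriptors. The PyTorch sketch below illustrates that image-to-class measure under stated assumptions: the function name and tensor shapes are hypothetical, and each image's LRs are taken to be the h×w spatial positions of its conv feature map flattened into an (h·w, d) matrix.

```python
import torch
import torch.nn.functional as F

def image_to_class_similarity(query_lrs, support_lrs, k=3):
    """DN4-style image-to-class measure over local representations (LRs).

    query_lrs:   (m, d) local descriptors of one query image
    support_lrs: (n, d) pooled local descriptors of one support class
    For each query descriptor, take its k most similar support descriptors
    by cosine similarity, then sum those similarities into a class score.
    """
    q = F.normalize(query_lrs, dim=1)           # (m, d) unit-norm descriptors
    s = F.normalize(support_lrs, dim=1)         # (n, d)
    sim = q @ s.t()                             # (m, n) pairwise cosine sims
    topk = sim.topk(k=min(k, s.size(0)), dim=1).values  # (m, k) k-NN sims
    return topk.sum()                           # scalar image-to-class score
```

At query time, this score is computed once per support class and the query is assigned to the class with the highest score, which is what makes LR-based metrics usable with only a handful of labeled samples.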