2022
DOI: 10.1109/lsp.2022.3180934
Shaping Visual Representations With Attributes for Few-Shot Recognition

Cited by 6 publications (2 citation statements)
References 26 publications
“…Our experiments are conducted using the convolutional neural network ResNet12 (Chen et al. 2021b), a popular feature extractor in recent few-shot learning methods. We also report results using Conv4 (Vinyals et al. 2016) on SUN for fair comparison, since we find that almost all previous works (Huang et al. 2021; Ji et al. 2022; Chen et al. 2022; Xing et al. 2019) choose Conv4 as their feature extractor on SUN. Moreover, we use a simple fully-connected layer as the learnable weight generator G, and the temperature parameters τ1 and τ2 are both initialized to 10.…”
Section: Experiments, Experimental Setup
confidence: 99%
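The excerpt describes the setup only in prose: a single fully-connected layer serves as the weight generator G, and two temperature scalars τ1 and τ2 start at 10. The following PyTorch sketch illustrates one plausible reading of that setup; the class name, tensor shapes, and the use of temperature-scaled cosine-similarity logits are illustrative assumptions, not the citing paper's exact implementation.

```python
# Minimal sketch, assuming G maps class attribute vectors to classifier
# weights and tau1/tau2 scale two sets of similarity logits. All names
# and shapes are hypothetical; the excerpt does not specify them.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttributeWeightGenerator(nn.Module):
    def __init__(self, attr_dim: int, feat_dim: int):
        super().__init__()
        # G: one fully-connected layer, as stated in the excerpt.
        self.G = nn.Linear(attr_dim, feat_dim)
        # Temperature parameters tau_1 and tau_2, initialized to 10.
        self.tau1 = nn.Parameter(torch.tensor(10.0))
        self.tau2 = nn.Parameter(torch.tensor(10.0))

    def forward(self, features, attributes, prototypes):
        # features:   (B, feat_dim) query embeddings from the backbone
        # attributes: (C, attr_dim) one attribute score vector per class
        # prototypes: (C, feat_dim) visual prototypes of support classes
        attr_weights = self.G(attributes)  # (C, feat_dim)
        f = F.normalize(features, dim=-1)
        # Two temperature-scaled cosine-similarity logits: one against
        # visual prototypes, one against the generated attribute weights.
        logits_visual = self.tau1 * f @ F.normalize(prototypes, dim=-1).t()
        logits_attr = self.tau2 * f @ F.normalize(attr_weights, dim=-1).t()
        return logits_visual, logits_attr
```

How the two logit sets are combined during training is not specified in the excerpt, so the sketch simply returns both.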
“…Based on the above analysis, our motivation is to explicitly learn fine-grained, transferable meta-knowledge from base classes and then reuse it to recognize novel classes with only a few labeled examples. Like some previous works (Huang et al. 2021; Tokmakov, Wang, and Hebert 2019; Ji et al. 2022; Xing et al. 2019; Chen et al. 2022), our work also utilizes category-level attribute annotations, i.e., only one attribute score vector per class. AM3 (Xing et al. 2019) proposes a modality mixture mechanism that can adaptively combine visual and semantic information.…”
Section: Introduction
confidence: 99%
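The AM3 mechanism mentioned above mixes each class's visual prototype with a transformed semantic embedding via a learned per-class coefficient. A minimal sketch of that adaptive convex combination is below; the layer widths, module names, and hidden size are assumptions, not AM3's published configuration.

```python
# Hedged sketch of an AM3-style adaptive modality mixture
# (Xing et al. 2019): the mixed prototype is a convex combination of
# the visual prototype and a transformed semantic embedding, with the
# mixing coefficient predicted from the semantic side. Sizes are
# illustrative assumptions.
import torch
import torch.nn as nn

class AdaptiveModalityMixture(nn.Module):
    def __init__(self, sem_dim: int, feat_dim: int, hidden: int = 300):
        super().__init__()
        # g: maps the semantic embedding into the visual feature space.
        self.g = nn.Sequential(nn.Linear(sem_dim, hidden), nn.ReLU(),
                               nn.Linear(hidden, feat_dim))
        # h: predicts the per-class mixing coefficient lambda in (0, 1).
        self.h = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU(),
                               nn.Linear(hidden, 1))

    def forward(self, visual_protos, semantic_embs):
        # visual_protos: (C, feat_dim), semantic_embs: (C, sem_dim)
        sem = self.g(semantic_embs)       # (C, feat_dim)
        lam = torch.sigmoid(self.h(sem))  # (C, 1), broadcast over feat_dim
        # Adaptive convex combination of the two modalities.
        return lam * visual_protos + (1.0 - lam) * sem
```

When λ approaches 1 the mixed prototype reduces to the purely visual one, so the mechanism can fall back to standard prototype matching when the semantic side is uninformative.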