2019 IEEE International Conference on Multimedia & Expo Workshops (ICMEW)
DOI: 10.1109/icmew.2019.00041
Self-Attention Relation Network for Few-Shot Learning

Cited by 37 publications (14 citation statements) · References 4 publications
“…We evaluate the proposed MCRNet for few-shot classification tasks on the unseen meta-testing set and compare it with the state-of-the-art methods. [Flattened results table: the compared methods include R2D2 [21], Fine-tuning [32], TADAM [22], MTL [8], Baseline-RR [29], the self-attention relation network [18], DN4 [33], SNAIL [26], CAML [34], TPN [35], and wDAE-GNN [36], using 4CONV, ResNet-12, ResNet-101, or WRN-28-10 backbones, with 1-shot/5-shot accuracies; the excerpt is truncated.]…”
Section: Methods (mentioning)
confidence: 99%
“…Unfortunately, these methods choose a less expressive architecture as a feature extractor, like 4CONV [13], to avoid the representation deficiency in a high-dimensional parameter space. [29] tries to propose better base-learners and [18] introduces a self-attention relation network. These two extract features directly via more powerful architectures, ResNet-12 [22] or ResNet-101, but neither of them addresses the deficiency mentioned.…”
Section: Related Work (mentioning)
confidence: 99%
“…SoSN [131] chose to perform the relation computation on the second-order representation of feature maps. SARN [132] introduced a self-attention mechanism into Relation Net for capturing non-local features and enhancing representation.…”
Section: Relation Net and Its Variants (mentioning)
confidence: 99%
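As a rough illustration of the non-local self-attention idea that this excerpt attributes to SARN, the following is a minimal PyTorch-style sketch. The 1x1-convolution projections, the channel reduction, and the learnable residual weight gamma are illustrative assumptions, not the configuration published in the SARN paper.

# Minimal sketch (an assumption, not the authors' published design): a
# non-local self-attention block of the kind that could follow a CNN
# embedding, letting every spatial position attend to every other one.
import torch
import torch.nn as nn

class NonLocalSelfAttention(nn.Module):
    """Self-attention over the spatial positions of a (B, C, H, W) feature map."""

    def __init__(self, channels: int, reduction: int = 2):
        super().__init__()
        inner = channels // reduction
        self.query = nn.Conv2d(channels, inner, kernel_size=1)
        self.key = nn.Conv2d(channels, inner, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learnable residual weight

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)   # (B, HW, C')
        k = self.key(x).flatten(2)                     # (B, C', HW)
        attn = torch.softmax(q @ k, dim=-1)            # (B, HW, HW) affinities
        v = self.value(x).flatten(2).transpose(1, 2)   # (B, HW, C)
        out = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
        return x + self.gamma * out                    # residual connection

# Example: feature maps from four embedded images keep their shape.
feats = torch.randn(4, 64, 19, 19)
print(NonLocalSelfAttention(64)(feats).shape)          # torch.Size([4, 64, 19, 19])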
“…Zhang et al. [24] improve the relation network by using three-dimensional tensor features of different levels extracted by the embedded network. Hui et al. [25] propose a self-attention relation network including an embedding module, an attention module, and a relation module. Besides, similarity measurement is also an important part of the meta-metric learning model, and much work has appeared on this aspect.…”
Section: Related Work (mentioning)
confidence: 99%
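The three-module structure described in this excerpt (embedding, attention, relation) can be illustrated with a short, hedged sketch. The layer sizes, the single-head attention, and the pairwise feature-concatenation scheme below are illustrative assumptions for a PyTorch-style rendition, not the configuration reported by Hui et al.

# Hedged sketch of an embedding -> attention -> relation pipeline for few-shot
# classification; module widths and the concatenation scheme are assumptions.
import torch
import torch.nn as nn

def conv_block(cin: int, cout: int) -> nn.Sequential:
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1),
                         nn.BatchNorm2d(cout), nn.ReLU(), nn.MaxPool2d(2))

class SpatialSelfAttention(nn.Module):
    """Attention module: spatial positions of a feature map attend to one another."""
    def __init__(self, channels: int):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, num_heads=1, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        seq = x.flatten(2).transpose(1, 2)              # (B, HW, C)
        out, _ = self.attn(seq, seq, seq)
        return x + out.transpose(1, 2).reshape(b, c, h, w)

class RelationPipeline(nn.Module):
    """Embedding module -> attention module -> pairwise relation (similarity) module."""
    def __init__(self, channels: int = 64):
        super().__init__()
        self.embedding = nn.Sequential(conv_block(3, channels),
                                       conv_block(channels, channels))
        self.attention = SpatialSelfAttention(channels)
        self.relation = nn.Sequential(conv_block(2 * channels, channels),
                                      conv_block(channels, channels),
                                      nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                      nn.Linear(channels, 1), nn.Sigmoid())

    def forward(self, support: torch.Tensor, query: torch.Tensor) -> torch.Tensor:
        s = self.attention(self.embedding(support))     # (N, C, H, W) support features
        q = self.attention(self.embedding(query))       # (M, C, H, W) query features
        n, m = s.size(0), q.size(0)
        pairs = torch.cat([s.unsqueeze(0).expand(m, -1, -1, -1, -1),
                           q.unsqueeze(1).expand(-1, n, -1, -1, -1)], dim=2)
        # Learned similarity: one relation score in [0, 1] per (query, support) pair.
        return self.relation(pairs.flatten(0, 1)).view(m, n)

# Example 5-way 1-shot episode: 5 support images and 10 query images of size 84x84.
scores = RelationPipeline()(torch.randn(5, 3, 84, 84), torch.randn(10, 3, 84, 84))
print(scores.shape)                                     # torch.Size([10, 5])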