2020
DOI: 10.1007/978-3-030-58571-6_38

When Does Self-supervision Improve Few-Shot Learning?

Cited by 108 publications (74 citation statements); references 38 publications.
Citation statement types: 3 supporting, 71 mentioning, 0 contrasting.
“…nearest neighbor [88]. Overall, the community seems to be reaching a consensus [15], [34], [41], [73], [90], [95]: the key ingredient to high-performing few-shot classification is learning a general representation, rather than sophisticated algorithms for adapting to the new classes. In line with these works, we study what representation is suitable for solving a target task.…”
Section: Related Work (mentioning)
confidence: 99%
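
A minimal sketch of the simple-classifier-on-general-representation recipe this statement describes: a nearest-neighbor (prototype-style) classifier over embeddings from a pretrained feature extractor. The embedding function and the toy episode below are illustrative assumptions, not the cited papers' exact setup.

```python
import numpy as np

def nearest_neighbor_few_shot(embed, support_x, support_y, query_x):
    """Label each query with the class of its most similar support centroid."""
    s = embed(support_x)                          # (n_support, d) embeddings
    q = embed(query_x)                            # (n_query, d) embeddings
    classes = np.unique(support_y)
    # One centroid (prototype) per class, averaged over its support embeddings.
    protos = np.stack([s[support_y == c].mean(axis=0) for c in classes])
    # Cosine similarity between queries and prototypes.
    q = q / np.linalg.norm(q, axis=1, keepdims=True)
    p = protos / np.linalg.norm(protos, axis=1, keepdims=True)
    return classes[np.argmax(q @ p.T, axis=1)]

# Toy 5-way 2-shot episode with a random linear "pretrained" embedding.
rng = np.random.default_rng(0)
W = rng.normal(size=(32, 16))
embed = lambda x: x @ W
support_x = rng.normal(size=(10, 32))
support_y = np.repeat(np.arange(5), 2)
query_x = rng.normal(size=(4, 32))
print(nearest_neighbor_few_shot(embed, support_x, support_y, query_x))
```

Note that no adaptation of the backbone happens at test time; all the work lies in learning a representation general enough that class centroids separate the novel categories.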
“…Most of the prior works [61, 62] in computer vision weave self-supervision into few-shot learning by adding pretext task losses. Predicting the index of jigsaw puzzles and the angle of rotations are among the most effective pretext task choices.…”
Section: Methods (mentioning)
confidence: 99%
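
In the spirit of the rotation pretext task named above, here is a hedged PyTorch-style sketch of adding a self-supervised auxiliary loss to a supervised classifier. The backbone, head sizes, and loss weight `lam` are illustrative assumptions, not the cited works' exact architectures.

```python
import torch
import torch.nn as nn

class SelfSupFewShotNet(nn.Module):
    """Shared backbone with a class head and a rotation head (0/90/180/270 degrees)."""
    def __init__(self, backbone, feat_dim, n_classes):
        super().__init__()
        self.backbone = backbone
        self.cls_head = nn.Linear(feat_dim, n_classes)
        self.rot_head = nn.Linear(feat_dim, 4)  # 4 rotation angles as pretext labels

    def forward(self, x):
        f = self.backbone(x)
        return self.cls_head(f), self.rot_head(f)

def rotation_batch(x):
    """Rotate each (C, H, W) image by k*90 degrees; assumes square images."""
    ks = torch.randint(0, 4, (x.size(0),))
    rotated = torch.stack([torch.rot90(img, int(k), dims=(1, 2))
                           for img, k in zip(x, ks)])
    return rotated, ks

def combined_loss(model, x, y, lam=1.0):
    """Supervised cross-entropy plus lam-weighted rotation-prediction cross-entropy."""
    xr, yr = rotation_batch(x)
    logits_cls, _ = model(x)    # class head on the original images
    _, logits_rot = model(xr)   # rotation head on the rotated images
    ce = nn.functional.cross_entropy
    return ce(logits_cls, y) + lam * ce(logits_rot, yr)

# Toy usage with a tiny convolutional backbone (illustrative only).
backbone = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                         nn.AdaptiveAvgPool2d(1), nn.Flatten())
model = SelfSupFewShotNet(backbone, feat_dim=8, n_classes=5)
x, y = torch.randn(4, 3, 32, 32), torch.randint(0, 5, (4,))
print(combined_loss(model, x, y).item())
```

A single shared backbone serves both heads, so the pretext gradient regularizes the same features used for classification; `lam` trades off the supervised and self-supervised terms.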
“…They presented generative algorithms to generate new training examples, and proposed to learn better embeddings from more training examples with larger intra-class diversity. In [56], Su et al. studied the effectiveness of utilizing self-supervised learning (SSL) techniques in the few-shot setting. SSL utilizes the structure information already contained in an image to facilitate the learning of representations, and has been well studied in traditional unsupervised learning with large unlabeled datasets [9, 13, 27].…”
Section: Towards Improving Feature Embeddings (mentioning)
confidence: 99%