2021
DOI: 10.48550/arxiv.2101.11058
Preprint

Supervised Momentum Contrastive Learning for Few-Shot Classification

Abstract: Instance discrimination based contrastive learning has emerged as a leading approach for self-supervised learning of visual representations. Yet, its generalization to novel tasks remains elusive when compared to representations learned with supervision, especially in the few-shot setting. We demonstrate how one can incorporate supervision in the instance discrimination based contrastive self-supervised learning framework to learn representations that generalize better to novel tasks. We call our approach CIDS…
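The title and abstract suggest a MoCo-style momentum-encoder setup in which class labels supply extra positives for the contrastive objective. The sketch below illustrates that general idea only; it is not the paper's CIDS formulation, and the function names, EMA momentum, temperature, and queue layout are all assumptions for illustration.

import torch
import torch.nn.functional as F

@torch.no_grad()
def momentum_update(encoder_q, encoder_k, m=0.999):
    # MoCo-style EMA: the key encoder slowly tracks the query encoder.
    for p_q, p_k in zip(encoder_q.parameters(), encoder_k.parameters()):
        p_k.mul_(m).add_(p_q, alpha=1.0 - m)

def supervised_queue_loss(q, queue_feats, q_labels, queue_labels, t=0.07):
    # q: (N, D) query embeddings; queue_feats: (K, D) keys produced by the
    # momentum encoder. Queue entries sharing the query's class label are
    # treated as positives (the supervised twist on instance discrimination).
    q = F.normalize(q, dim=1)
    k = F.normalize(queue_feats, dim=1)
    logits = q @ k.T / t                                   # (N, K) similarity logits
    pos_mask = (q_labels.unsqueeze(1) == queue_labels.unsqueeze(0)).float()
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    pos_counts = pos_mask.sum(1).clamp(min=1)              # guard: query with no positives in queue
    return -((log_prob * pos_mask).sum(1) / pos_counts).mean()

In a MoCo-style pipeline, momentum_update would run after each optimizer step, and the queue would be refreshed with the latest key embeddings and their labels.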

Cited by 8 publications (15 citation statements)
References 32 publications
Citation types: 0 supporting, 15 mentioning, 0 contrasting
“…In the image classification domain, DNNs are known to be highly capable of learning invariant representations, enabling the construction of good classifiers [64]. However, it has been argued that DNNs are actually too eager to learn invariances [64]. This is because they often learn only the features necessary to discriminate between classes but then fail to generalise well to new unseen classes in a supervised setting, in what is known as "supervision collapse" [64,65].…”
Section: Contrastive Learning
Citation type: mentioning
confidence: 99%
“…For example, Khosla et al. [52] directly used class labels to define similarity, where samples from the same class are positives and samples from different classes are negatives. Majumder et al. [53] devised few-shot learning with instance-discrimination-based contrastive learning in a supervised setup. Inspired by the success of these methods, we first introduce a contrastive learning mechanism to ZSD and develop two contrastive learning subnets that use high-level semantic information as additional supervision signals.…”
Section: Related Work
Citation type: mentioning
confidence: 99%
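As context for the label-defined similarity described in the statement above, here is a minimal sketch of a SupCon-style loss in the spirit of Khosla et al.; it assumes plain PyTorch, (N, D) embeddings with integer class labels, and an illustrative temperature of 0.1, and is not a reproduction of any cited implementation.

import torch
import torch.nn.functional as F

def supcon_loss(features, labels, temperature=0.1):
    # features: (N, D) embeddings; labels: (N,) integer class ids.
    features = F.normalize(features, dim=1)
    sim = features @ features.T / temperature            # (N, N) cosine logits
    n = features.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=features.device)
    # Positives: other samples with the same label (anchor excluded).
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    # Softmax denominator runs over all samples except the anchor itself.
    sim = sim.masked_fill(self_mask, float("-inf"))
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    log_prob = log_prob.masked_fill(self_mask, 0.0)      # avoid -inf * 0 = NaN
    pos_counts = pos_mask.sum(1)
    valid = pos_counts > 0                               # anchors with at least one positive
    mean_log_prob_pos = (log_prob * pos_mask.float()).sum(1)[valid] / pos_counts[valid]
    return -mean_log_prob_pos.mean()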
“…The effectiveness of embeddings learned with contrastive self-supervision is generally evaluated by using the pretext feature model as the starting point for a downstream supervised task. However, more direct ways to incorporate supervision are currently attracting considerable attention [17,52] and show how view invariance and semantic knowledge can be combined to get the best of both worlds in challenging scenarios such as novelty detection [41], cross-domain generalization [61], or few-shot classification [27]. Current research is investigating ways to improve negative sampling [6], or presenting analyses to better understand the relation between contrastive learning and mutual information [53].…”
Section: Related Work
Citation type: mentioning
confidence: 99%