2020
DOI: 10.1016/j.patcog.2019.107107

Unsupervised representation learning by discovering reliable image relations

Abstract: Learning robust representations that allow us to reliably establish relations between images is of paramount importance for virtually all of computer vision. Annotating the quadratic number of pairwise relations between training images is simply not feasible, while unsupervised inference is prone to noise, thus leaving the vast majority of these relations to be unreliable. To nevertheless find those relations which can be reliably utilized for learning, we follow a divide-and-conquer strategy: We find reliable si…


Cited by 10 publications (5 citation statements)
References 29 publications
“…DML has become essential for many applications, especially in zero-shot image and video retrieval [64,75,55,24,1]. Proposed approaches most commonly rely on a surrogate ranking task over tuples during training [65], ranging from simple pairs [17] and triplets [60,40] to higher-order quadruplets [5] and more generic n-tuples [64,46,22,73]. These ranking tasks can also leverage additional context such as geometrical embedding structures [72,8].…”
Section: Related Work
confidence: 99%
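The statement above describes deep metric learning via surrogate ranking tasks over tuples, the simplest of which is the triplet. As a hedged illustration (not the cited papers' exact formulation; the margin value and toy embeddings are invented for this sketch), a hinge-style triplet ranking loss can be written as:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Triplet ranking loss: require the anchor to be closer to the
    positive than to the negative by at least `margin`."""
    d_pos = np.sum((anchor - positive) ** 2)  # squared distance to positive
    d_neg = np.sum((anchor - negative) ** 2)  # squared distance to negative
    return max(0.0, d_pos - d_neg + margin)

# Toy 2-D embeddings: the positive is much closer to the anchor than
# the negative, so the ranking constraint is satisfied and loss is 0.
a = np.array([1.0, 0.0])
p = np.array([0.9, 0.1])
n = np.array([-1.0, 0.0])
print(triplet_loss(a, p, n))  # 0.0
```

Pairs, quadruplets, and generic n-tuples generalize the same idea: the loss is zero once the desired ordering of distances holds with the required margin.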
“…Alternatively, we would like to implement the smooth losses to train a CNN or a BERT model for multi-label tasks from scratch (cf. (Lin et al., 2017)). If training from scratch, it might then be interesting to combine the proposed loss functions with representation learning (Milbich et al., 2020) or self-supervised learning, in order to model abstract relationships between the labels.…”
Section: Future Work
confidence: 99%
“…In recent years, many unsupervised representation learning methods have been introduced (Misra et al., 2016; Gidaris et al., 2018; Rao et al., 2019; Milbich et al., 2020). The main idea of these methods is to explore easily accessible information, such as temporal or spatial neighbourhood, to design a surrogate supervisory signal to empower the feature learning.…”
Section: Unsupervised Representation Learning
confidence: 99%
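A concrete instance of such a surrogate supervisory signal is rotation prediction (in the spirit of Gidaris et al., 2018). The sketch below is a minimal, hypothetical illustration of how pretext labels can be generated from the data transform itself, with no manual annotation:

```python
import numpy as np

def rotation_pretext(batch):
    """For each image, emit four rotated copies; the surrogate label is
    the rotation index (0 = 0°, 1 = 90°, 2 = 180°, 3 = 270°)."""
    images, labels = [], []
    for img in batch:
        for k in range(4):                  # k quarter-turn rotations
            images.append(np.rot90(img, k))
            labels.append(k)                # label comes from the transform
    return np.stack(images), np.array(labels)

batch = np.random.rand(2, 8, 8)             # two toy 8x8 "images"
x, y = rotation_pretext(batch)
print(x.shape, y)                           # (8, 8, 8) [0 1 2 3 0 1 2 3]
```

A network trained to predict `y` from `x` must learn image features that encode orientation, which is the sense in which the surrogate signal "empowers the feature learning" without labels.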
“…These powerful representations can be independent of the downstream task, leaving the need for labels in the background. In fact, there are fully unsupervised representation learning methods (Misra et al., 2016; Gidaris et al., 2018; Rao et al., 2019; Milbich et al., 2020) that automatically extract expressive feature representations from data without any manually labelled annotation. Due to this intrinsic capability, representation learning based on deep neural networks has become a widely used technique to empower other tasks (Caron et al., 2018; Oord et al., 2018; Chen et al., 2020).…”
Section: Introduction
confidence: 99%