2019 IEEE/CVF International Conference on Computer Vision (ICCV) 2019
DOI: 10.1109/iccv.2019.00305

Unsupervised Pre-Training of Image Features on Non-Curated Data

Cited by 397 publications (526 citation statements) · References 28 publications
“…However, when we do not have access to such ground-truth labels, we need to define a prior to obtain an estimate of which samples are likely to belong together, and which are not. End-to-end learning approaches have utilized the architecture of CNNs as a prior [53,6,51,16,4,5], or enforced consistency between images and their augmentations [23,20] to disentangle the clusters. In both cases, the cluster learning is known to be sensitive to the network initialization.…”
Section: Representation Learning for Semantic Clustering
Citation type: mentioning (confidence: 99%)
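The augmentation-consistency idea cited above ([23, 20]) can be made concrete with a small sketch. The following is a simplified, hypothetical illustration rather than the cited papers' exact objectives: a CNN with a K-way cluster head produces soft cluster assignments for an image and for an augmented view of it, and a divergence term pulls the two distributions together (the backbone, feature dimension, and loss choice here are assumptions).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ClusterHead(nn.Module):
    """A CNN backbone followed by a K-way soft cluster-assignment head."""
    def __init__(self, backbone, feat_dim, num_clusters):
        super().__init__()
        self.backbone = backbone              # placeholder: any CNN trunk returning (B, feat_dim)
        self.head = nn.Linear(feat_dim, num_clusters)

    def forward(self, x):
        return F.softmax(self.head(self.backbone(x)), dim=1)

def consistency_loss(p_img, p_aug, eps=1e-8):
    """Pull the cluster distributions of an image and its augmented view together
    (a simplified stand-in for the consistency objectives referenced as [23, 20])."""
    return F.kl_div((p_aug + eps).log(), p_img, reduction="batchmean")

# Usage sketch (images and augmented are two views of the same batch):
#   loss = consistency_loss(model(images), model(augmented))
#   loss.backward(); optimizer.step()
```

Used alone, such a consistency term can collapse to a trivial solution (all images in one cluster), so methods of this kind typically pair it with entropy or balancing terms; this sketch omits those, which also relates to the excerpt's point about sensitivity to initialization.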
“…DEC [51], DAC [6], DeepCluster [4], DeeperCluster [5], or others [1,16,53]) leverage the architecture of CNNs as a prior to cluster images. Starting from the initial feature representations, the clusters are iteratively refined by deriving the supervisory signal from the most confident samples [6,51], or through cluster re-assignments calculated offline [4,5]. A second group of methods (e.g.…”
Section: Introduction and Prior Work
Citation type: mentioning (confidence: 99%)
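The offline cluster re-assignment described for DeepCluster [4] and DeeperCluster [5] alternates between clustering the current features and training on the resulting pseudo-labels. Below is a minimal sketch of that alternation; the model, classifier, loader, and cluster count are placeholders, and the code is schematic rather than the authors' released implementation.

```python
import numpy as np
import torch
import torch.nn.functional as F
from sklearn.cluster import KMeans

def pseudo_label_round(model, classifier, loader, optimizer, num_clusters=1000):
    """One round of offline cluster re-assignment followed by supervised-style
    training on the resulting pseudo-labels (DeepCluster-style alternation).
    Assumes `loader` iterates in a fixed order (shuffle=False) so features
    and pseudo-labels stay aligned across the two passes."""
    # 1) Extract features for the whole dataset with the current network.
    model.eval()
    feats = []
    with torch.no_grad():
        for x, _ in loader:                      # ground-truth labels are ignored
            feats.append(model(x).cpu().numpy())
    feats = np.concatenate(feats)

    # 2) Offline cluster re-assignment with k-means.
    pseudo = torch.as_tensor(
        KMeans(n_clusters=num_clusters, n_init=10).fit_predict(feats),
        dtype=torch.long)

    # 3) Train the network to predict its own cluster assignments.
    model.train()
    seen = 0
    for x, _ in loader:
        y = pseudo[seen:seen + x.size(0)]
        seen += x.size(0)
        loss = F.cross_entropy(classifier(model(x)), y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

The original methods add machinery this sketch omits, such as re-initialising the classification head after each re-assignment and distributing or balancing the clustering so it scales to large non-curated datasets.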
“…As discussed in Section 2.2, standard deep learning models cannot be directly used for unsupervised tasks, where we do not have the labels of images. Self-supervised learning of image features has recently been proposed in the deep learning community (Yang et al., 2016; Gidaris et al., 2018; Caron et al., 2018, 2019). The intuition behind self-supervised learning is that we can perform data augmentation and create pseudo-categories (Gidaris et al., 2018). We then apply a predetermined clustering algorithm such as k-means on the initial low-dimensional vectors to group them into K clusters.…”
Section: Self-supervised Learning of Image Representations
Citation type: mentioning (confidence: 99%)
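To illustrate the step the excerpt describes, applying a predetermined clustering algorithm such as k-means to the learned low-dimensional vectors, here is a minimal sketch; the feature matrix is random stand-in data, and the PCA step and cluster count are assumptions for illustration only.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Stand-in for representations produced by a self-supervised encoder:
# N images, D-dimensional feature vectors (random here, purely illustrative).
rng = np.random.default_rng(0)
features = rng.normal(size=(1000, 512))

K = 10                                                    # desired number of clusters
reduced = PCA(n_components=50).fit_transform(features)    # optional dimensionality reduction
cluster_ids = KMeans(n_clusters=K, n_init=10, random_state=0).fit_predict(reduced)
print(cluster_ids[:20])                                   # pseudo-category per image
```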
“…This learning process uses a computer to group data automatically, without ground-truth labels. This is usually known as unsupervised learning [29][30]. The inputs are the data objects and the number of clusters.…”
Section: K-means Clustering
Citation type: mentioning (confidence: 99%)
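To make the "data plus number of clusters" input concrete, the sketch below implements plain Lloyd's-algorithm k-means from scratch on synthetic points; it is illustrative only and not tied to any of the cited works.

```python
import numpy as np

def kmeans(data, k, iters=100, seed=0):
    """Plain Lloyd's algorithm: the only inputs are the data points and k."""
    rng = np.random.default_rng(seed)
    centers = data[rng.choice(len(data), size=k, replace=False)]  # random initialization
    for _ in range(iters):
        # Assign each point to its nearest center.
        labels = np.argmin(((data[:, None, :] - centers) ** 2).sum(-1), axis=1)
        # Move each center to the mean of its assigned points.
        new_centers = np.array([data[labels == j].mean(axis=0) if np.any(labels == j)
                                else centers[j] for j in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels, centers

points = np.random.default_rng(1).normal(size=(300, 2))
labels, centers = kmeans(points, k=3)
```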