2020
DOI: 10.1007/978-3-030-58607-2_16

SCAN: Learning to Classify Images Without Labels

Abstract: Can we automatically group images into semantically meaningful clusters when ground-truth annotations are absent? The task of unsupervised image classification remains an important and open challenge in computer vision. Several recent approaches have tried to tackle this problem in an end-to-end fashion. In this paper, we deviate from recent works and advocate a two-step approach where feature learning and clustering are decoupled. First, a self-supervised task from representation learning is employed to obt…
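
The truncated abstract already states the key recipe: decouple self-supervised feature learning from clustering. Below is a minimal sketch of that two-step idea, assuming a generic backbone and using K-means as a stand-in second step; SCAN itself mines nearest neighbors of each image and optimizes a dedicated clustering loss, which this sketch does not reproduce.

```python
import torch
import torchvision.models as models
from sklearn.cluster import KMeans

# Stand-in backbone; in the decoupled recipe its weights would come from a
# self-supervised pretext task (e.g. instance discrimination), not from labels.
encoder = models.resnet18(weights=None)
encoder.fc = torch.nn.Identity()          # expose the 512-d feature vector
encoder.eval()

images = torch.randn(256, 3, 224, 224)    # placeholder for an unlabeled dataset
with torch.no_grad():
    feats = encoder(images).numpy()       # (256, 512) feature matrix

# Step 2: cluster the learned features; no ground-truth annotations involved.
kmeans = KMeans(n_clusters=10, n_init=10, random_state=0)
pseudo_labels = kmeans.fit_predict(feats)
```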

Cited by 310 publications (343 citation statements)
References 34 publications (68 reference statements)
Citation types: 3 supporting, 340 mentioning, 0 contrasting

“…Some methods combine several unsupervised steps, first learning a good representation and then clustering (e.g. [41]). In most cases, this unsupervised training generates its own labels, which is why the methods are called self-supervised.…”
Section: A Training Strategies (mentioning)
Confidence: 99%
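
As the quote says, self-supervision generates labels from the data itself. A classic illustration is rotation prediction, sketched below in the style of RotNet-type pretext tasks; `model` is assumed to be any network with four output logits (illustrative, not tied to [41]).

```python
import torch
import torch.nn.functional as F

def make_rotation_batch(images):
    """Rotate each image by 0/90/180/270 degrees; the rotation index is the label."""
    rotated, labels = [], []
    for k in range(4):                                    # k * 90 degrees
        rotated.append(torch.rot90(images, k, dims=(2, 3)))
        labels.append(torch.full((images.size(0),), k, dtype=torch.long))
    return torch.cat(rotated), torch.cat(labels)

def pretext_step(model, images, optimizer):
    """One self-supervised step: `model` must emit 4 rotation logits (assumption)."""
    x, y = make_rotation_batch(images)                    # labels come for free
    loss = F.cross_entropy(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```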
“…A method that uses different algorithms, losses, or datasets during training but only uses unsupervised data X_u has one stage (e.g. [41]). A method that uses X_u and X_l during the complete training has one stage (e.g.…”
Section: A Training Strategies (mentioning)
Confidence: 99%
“…Since our main contribution unravels the connection between Gaussian mixture models and autoencoders, models based on associative (Haeusser et al., 2018; Yang et al., 2019), spectral (Tian et al., 2014; Yang et al., 2019; Bianchi et al., 2020), or subspace (Zhang et al., 2019; Miklautz et al., 2020) clustering are outside the scope of this paper. Furthermore, unlike (Yang et al., 2016; Chang et al., 2017; Kampffmeyer et al., 2019; Ghasedi Dizaji et al., 2017; Caron et al., 2018; Ji et al., 2019; Van Gansbeke et al., 2020), our model falls within the category of general-purpose deep embedded clustering models, which build upon GMMs (Xie et al., 2016; Yang et al., 2017; Fard et al., 2020). Recent works on specific topics such as perturbation robustness and non-redundant subspace clustering also draw on ideas related to the general concept of deep embedded clustering (Yang et al., 2020; Miklautz et al., 2020).…”
Section: Related Work (mentioning)
Confidence: 99%
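
Since the quote centers on the connection between GMMs and autoencoders, a toy two-stage sketch may help: encode the data with the encoder half of an autoencoder, then fit a Gaussian mixture on the latent codes. The cited deep embedded clustering works typically train the encoder and the mixture jointly; the separation and dimensions below are only illustrative.

```python
import torch
import torch.nn as nn
from sklearn.mixture import GaussianMixture

# Encoder half of an autoencoder; assume it was already trained with a
# reconstruction loss (decoder omitted here).
encoder = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))

x = torch.randn(1000, 784)        # placeholder for flattened images
with torch.no_grad():
    z = encoder(x).numpy()        # latent codes, shape (1000, 10)

# Fit a Gaussian mixture in latent space; each component acts as one cluster.
gmm = GaussianMixture(n_components=10, covariance_type="diag", random_state=0)
cluster_ids = gmm.fit_predict(z)
```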
“…We show that the representations generated by MIX'EM yield high-accuracy semantic clustering simply by applying K-means to them. Existing works on unsupervised image classification using self-supervised representations [36,37,38] should benefit from adapting our proposed module, as it is an internal module that can be plugged in without altering the output mechanism.…”
Section: Related Work (mentioning)
Confidence: 99%
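
Claims like "high-accuracy semantic clustering via K-means" are conventionally scored by matching predicted cluster ids to ground-truth classes with the Hungarian algorithm, with labels used only for evaluation, never for training. A sketch of that standard protocol (the function name is illustrative):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def clustering_accuracy(y_true, y_pred):
    """Best one-to-one match between cluster ids and classes, then accuracy.
    Ground truth enters only at evaluation time."""
    n = int(max(y_true.max(), y_pred.max())) + 1
    hits = np.zeros((n, n), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        hits[p, t] += 1                                  # co-occurrence counts
    rows, cols = linear_sum_assignment(hits.max() - hits)  # maximize matches
    mapping = dict(zip(rows, cols))                      # cluster id -> class
    return float(np.mean([mapping[p] == t for t, p in zip(y_true, y_pred)]))
```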