2021
DOI: 10.48550/arxiv.2112.14971
Preprint
Contrastive Fine-grained Class Clustering via Generative Adversarial Networks

Abstract: Unsupervised fine-grained class clustering is a practical yet challenging task due to the difficulty of learning feature representations of subtle object details. We introduce C3-GAN, a method that leverages the categorical inference power of InfoGAN by applying contrastive learning. We aim to learn feature representations that encourage the data to form distinct cluster boundaries in the embedding space, while also maximizing the mutual information between the latent code and its observation. Our approach is to…
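
The abstract names two ingredients: an InfoGAN-style mutual-information term between a categorical latent code and the generated observation, and a contrastive term that sharpens cluster boundaries in the embedding space. The following is a minimal PyTorch sketch of how those two losses are typically instantiated; it is an illustration under stated assumptions, not the paper's implementation, and all function names here are hypothetical.

import torch
import torch.nn.functional as F

def info_loss(q_logits, c_idx):
    """InfoGAN-style MI lower bound: cross-entropy of an auxiliary head Q
    predicting the categorical code c_idx that was fed to the generator."""
    return F.cross_entropy(q_logits, c_idx)

def contrastive_loss(z, c_idx, temperature=0.1):
    """InfoNCE-style loss over an embedding batch z: samples sharing the
    same latent category c_idx are treated as positives."""
    z = F.normalize(z, dim=1)
    sim = z @ z.t() / temperature                       # pairwise similarities
    mask = c_idx.unsqueeze(0) == c_idx.unsqueeze(1)     # positive-pair mask
    mask.fill_diagonal_(False)                          # exclude self-pairs
    logits = sim - torch.eye(len(z), device=z.device) * 1e9  # mask self-similarity
    log_prob = F.log_softmax(logits, dim=1)
    pos_counts = mask.sum(1).clamp(min=1)
    return -(log_prob * mask.float()).sum(1).div(pos_counts).mean()

# Toy usage: 8 samples, 4 latent categories, 16-dim embeddings.
z = torch.randn(8, 16)                 # embeddings from some encoder
c = torch.randint(0, 4, (8,))          # categorical codes fed to the generator
q_logits = torch.randn(8, 4)           # auxiliary head's category predictions
total = info_loss(q_logits, c) + contrastive_loss(z, c)

In practice the two terms would be weighted and added to the usual adversarial losses; the weighting and architectures are left out here since the abstract does not specify them.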

Cited by 4 publications (3 citation statements)
References 16 publications
“…For a complete picture of the field, readers may refer to the survey by Min et al. (2018). We emphasize deep-clustering-based approaches, which attempt to learn the feature representation of the data while simultaneously discovering the underlying clusters: K-means (Caron et al., 2018), information maximization (Menapace et al., 2020; Ji et al., 2019; Kim and Ha, 2021; Do et al., 2021), transport alignment (Asano et al., 2019; Caron et al., 2020; Wang et al., 2022), neighborhood clustering (Xie et al., 2016; Huang et al., 2019; Dang et al., 2021), contrastive learning (Pan and Kang, 2021; Shen et al., 2021), probabilistic approaches (Monnier et al., 2020; Falck et al., 2021; Manduchi et al., 2021), and kernel density (Yang and Li, 2021). These works primarily focus on clustering data for downstream tasks for a single domain, whereas our clustering algorithm is designed to cluster the data from multiple domains.…”
Section: Related Work
confidence: 99%
“…SSL can be especially useful for tasks requiring heavy annotation costs, such as fine-grained image recognition [7], because it aims to learn discriminative representations without using human annotation. However, there are several limitations to the application of current SSL methods for fine-grained image recognition tasks [8][9][10]. In contrast with ordinary computer vision tasks, fine-grained images share many visual characteristics across their classes.…”
Section: Introduction
confidence: 99%
“…While the explicit function of these networks is to generate artificial examples of some set of input data, training these networks aims to learn the statistical distribution of the input data in a multidimensional parameter space (Creswell et al. 2018). As a result, GANs can be useful for any task where identifying this distribution is advantageous and can be exploited, such as data augmentation (Bousmalis et al. 2017), classification (Radford et al. 2015), clustering (Kim and Ha 2021), or the transfer of properties from one dataset onto another (i.e. style transfer; Karras et al. 2018).…”
Section: Introduction
confidence: 99%
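
The adversarial training idea described in the excerpt above, where the generator implicitly learns the data distribution by competing with a discriminator, can be made concrete with a minimal sketch. This is a toy illustration of the standard GAN objective, not the cited works' code; the tiny MLP architectures and the 2-D toy data are placeholders chosen only for brevity.

import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 2))   # noise -> sample
D = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))    # sample -> logit
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.randn(128, 2) * 0.5 + 2.0    # toy "data" distribution
z = torch.randn(128, 32)                  # latent noise

# Discriminator step: push real samples toward label 1, fakes toward 0.
fake = G(z).detach()                      # detach so G is not updated here
loss_d = bce(D(real), torch.ones(128, 1)) + bce(D(fake), torch.zeros(128, 1))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step: update G so the discriminator labels its samples as real.
loss_g = bce(D(G(z)), torch.ones(128, 1))
opt_g.zero_grad(); loss_g.backward(); opt_g.step()

At convergence the generator's samples are, ideally, indistinguishable from the data to the discriminator, which is why the learned distribution can then be reused for augmentation, clustering, or style transfer as the excerpt notes.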