2018
DOI: 10.1016/j.patcog.2018.05.019
Discriminatively boosted image clustering with fully convolutional auto-encoders

Cited by 223 publications (120 citation statements)
References 18 publications

“…The first term can be viewed as a reconstruction term, as it forces the inferred latent representation to recover its corresponding input, and the second KL term can be considered a regularization term that constrains the posterior of the learned representation to be a Gaussian distribution. We used ReLU activations. Instead of fully connected layers, a convolutional autoencoder (CAE) is equipped with convolutional layers in which each unit is connected only to local regions of the previous layer [22]. A convolutional layer consists of multiple filters (kernels), and each filter has a set of weights used to perform a convolution operation that computes dot products between the filter and a local region [23].…”
Section: Deep Representation Learning
confidence: 99%
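The two terms this excerpt describes match the standard variational autoencoder objective. As a point of reference (written in generic VAE notation, not necessarily the cited paper's symbols), the minimized loss is:

```latex
\mathcal{L}(\theta,\phi;x) =
  \underbrace{-\,\mathbb{E}_{q_\phi(z\mid x)}\!\left[\log p_\theta(x\mid z)\right]}_{\text{reconstruction term}}
  \;+\;
  \underbrace{D_{\mathrm{KL}}\!\left(q_\phi(z\mid x)\,\Vert\,p(z)\right)}_{\text{regularization term},\;\; p(z)=\mathcal{N}(0,I)}
```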
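The convolution operation the excerpt describes (dot products between a filter and each local region of the input) can be sketched minimally in NumPy; the input and filter values below are illustrative, not taken from the paper:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D convolution (cross-correlation): slide the kernel over the
    image and take the dot product with each local region."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            region = image[i:i + kh, j:j + kw]   # local region of the input
            out[i, j] = np.sum(region * kernel)  # dot product with the filter
    return out

image = np.arange(16, dtype=float).reshape(4, 4)
kernel = np.array([[1.0, 0.0], [0.0, -1.0]])     # one illustrative 2x2 filter
feature_map = conv2d(image, kernel)
print(feature_map.shape)  # (3, 3)
```

A convolutional layer in a CAE applies many such filters in parallel, producing one feature map per filter; each unit in a map depends only on the local region its dot product covered.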
“…Some proposals [25,56,60] jointly train an autoencoder neural network with a clustering algorithm, and use the internal representation provided by the autoencoder, i.e., the encoder output, as features for clustering. A different training method is used in [17,31,54], where autoencoders are initially pre-trained, and then fine-tuned using the cluster assignment loss. Finally, other techniques [24,57] combine clustering with standard convolutional neural networks (CNNs) for representation learning of images.…”
Section: Related Work
confidence: 99%
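The pretrain-then-fine-tune scheme the excerpt mentions can be sketched with a deliberately simplified stand-in: a linear (tied-weight) autoencoder instead of a deep network, and a hard k-means-style assignment loss. All data, hyperparameters, and the two-stage structure below are illustrative assumptions, not the cited papers' methods:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two well-separated blobs in 10-D (illustrative only).
X = np.vstack([rng.normal(0.0, 1.0, (50, 10)),
               rng.normal(5.0, 1.0, (50, 10))])
n, d = X.shape
k, n_clusters, lr, alpha = 2, 2, 1e-3, 0.1

# Stage 1: pre-train a linear autoencoder (decoder weights tied to encoder W)
# by gradient descent on the mean reconstruction loss ||X W W^T - X||^2 / n.
W = rng.normal(0.0, 0.1, (d, k))
for _ in range(300):
    R = X @ W @ W.T - X                       # reconstruction residual
    W -= lr * (2.0 / n) * (X.T @ R + R.T @ X) @ W

# Stage 2: fine-tune with a cluster-assignment loss, alternating between
# (a) assigning points to the nearest centroid in latent space and
# (b) a gradient step on reconstruction + distance-to-centroid.
Z = X @ W
centroids = Z[[0, 50]].copy()                 # init: one point from each blob
for _ in range(100):
    Z = X @ W
    dists = ((Z[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    assign = dists.argmin(1)                  # hard cluster assignment
    centroids = np.vstack([Z[assign == c].mean(0) for c in range(n_clusters)])
    R = X @ W @ W.T - X
    grad = (2.0 / n) * (X.T @ R + R.T @ X) @ W             # reconstruction
    grad += (2.0 * alpha / n) * X.T @ (Z - centroids[assign])  # cluster term
    W -= lr * grad

print(assign[:3], assign[-3:])
```

Methods such as [17,31,54] follow the same skeleton but with deep encoders and softer assignment losses; the key design choice the alternation captures is that the representation keeps adapting to the clustering objective rather than being fixed after pre-training.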
“…[23] applies a variant of approximate K-means to features extracted with a pretrained AlexNet, and [30] uses deep auto-encoders combined with ensemble clustering to generate feature representations suitable for clustering. More recently, a new family of methods that jointly learn the clusters and the representation via alternating optimization has established a new performance baseline for image-set clustering algorithms [31,26,32].…”
Section: Image-set Clustering
confidence: 99%