2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2019.00419

Deep Spectral Clustering Using Dual Autoencoder Network

Abstract: Clustering methods have recently attracted ever-increasing attention in learning and vision. Deep clustering combines embedding and clustering to obtain an optimal embedding subspace for clustering, which can be more effective than conventional clustering methods. In this paper, we propose a joint learning framework for discriminative embedding and spectral clustering. We first devise a dual autoencoder network, which enforces the reconstruction constraint for the latent representations and th…

Cited by 231 publications (112 citation statements)
References 38 publications (40 reference statements)
“…Such is the case for the confusion generated amongst algorithms caused by sedimentary onlaps, causing fossiliferous levels to lie closer to each other, at both Batallones-3 and Batallones-10. This may, however, be solved by more complex AIAs and the use of Deep MT systems, such as clustering AIAs using auto-encoders (Xie, Girshick & Farhadi, 2016; Guo et al., 2017; Mrabah et al., 2019; Yang et al., 2019), or those used for reinforcement learning tasks (Lake et al., 2014; Mnih et al., 2015; Holzinger, 2016; Simard et al., 2017). Efforts should therefore be made to investigate the effects of these numerous geological components on pattern detection algorithms.…”
Section: Discussion
confidence: 99%
“…where ℓ_clu is a clustering loss function, within which ϕ is the feature learner parameterized by Θ, f is a clustering assignment function parameterized by W, and y_x represents pseudo class labels yielded by the clustering; ℓ_aux is a non-clustering loss function used to enforce additional constraints on the learned representations; and α and β are two hyperparameters that control the importance of the two losses. ℓ_clu can be instantiated with a k-means loss [25,163], a spectral clustering loss [152,167], an agglomerative clustering loss [166], or a GMM loss [37], enabling representation learning for the specific targeted clustering algorithm. ℓ_aux is often instantiated with an autoencoder-based reconstruction loss [48,167] to learn robust and/or local-structure-preserving representations, or to prevent collapsing clusters.…”
Section: 2.3
confidence: 99%
“…ℓ_clu can be instantiated with a k-means loss [25,163], a spectral clustering loss [152,167], an agglomerative clustering loss [166], or a GMM loss [37], enabling representation learning for the specific targeted clustering algorithm. ℓ_aux is often instantiated with an autoencoder-based reconstruction loss [48,167] to learn robust and/or local-structure-preserving representations, or to prevent collapsing clusters. After the deep clustering, the cluster assignments in the resulting f function can then be utilized to compute anomaly scores based on [60,67,68,142].…”
Section: 2.3
confidence: 99%
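The excerpts above describe a joint deep-clustering objective of the form α·ℓ_clu + β·ℓ_aux. A minimal sketch of that structure, assuming a k-means-style clustering loss and an autoencoder reconstruction loss as the auxiliary term (illustrative names and shapes, not the cited papers' actual implementations):

```python
import numpy as np

def kmeans_loss(z, centers, assign):
    """Clustering loss l_clu: mean squared distance of each embedding
    z[i] to its assigned cluster center centers[assign[i]]."""
    return float(np.mean(np.sum((z - centers[assign]) ** 2, axis=1)))

def reconstruction_loss(x, x_hat):
    """Auxiliary loss l_aux: autoencoder mean squared reconstruction error."""
    return float(np.mean((x - x_hat) ** 2))

def joint_loss(x, x_hat, z, centers, assign, alpha=1.0, beta=1.0):
    """Weighted sum of the two losses, with alpha and beta controlling
    their relative importance, as in the excerpt."""
    return alpha * kmeans_loss(z, centers, assign) \
        + beta * reconstruction_loss(x, x_hat)
```

For example, an embedding at the origin assigned to a center at (1, 0) contributes a clustering loss of 1.0, and a reconstruction off by 1.0 contributes an auxiliary loss of 1.0; with α = β = 0.5 the joint loss is 1.0.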
“…In most cases, it achieves optimal parameter values, which directly influence any subsequent activity. If the capacity of the autoencoder, specifically of the encoder and decoder, is too large compared to that of the latent space, the network could return a good output but fail to extract any information from the data [211]. That is, it would have learned to copy the input to the output without learning any features of the data.…”
Section: Autoencoder Algorithms in Time Series Data Processing
confidence: 99%
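The capacity point above can be made concrete with a linear toy example (an illustrative sketch, not taken from the cited survey): when the latent dimension matches the input dimension, an "autoencoder" can copy the input exactly and extracts no structure, whereas a genuine bottleneck forces lossy compression.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))  # toy data: 100 samples, 8 features

# Over-capacity case: latent dim == input dim, identity encoder/decoder.
Z = X @ np.eye(8)          # "encoding" that merely copies the input
X_hat = Z @ np.eye(8)      # "decoding" that copies it back
err_copy = float(np.mean((X - X_hat) ** 2))  # 0: nothing was learned

# Bottleneck case: best linear autoencoder with latent dim 2 (via SVD/PCA).
X_c = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(X_c, full_matrices=False)
V2 = Vt[:2].T                                # top-2 principal directions
X_hat2 = X.mean(axis=0) + (X_c @ V2) @ V2.T  # reconstruct from 2-D codes
err_bottleneck = float(np.mean((X - X_hat2) ** 2))  # > 0: info discarded
```

The nonzero bottleneck error is the price of compression; it is precisely this pressure that forces the encoder to learn features rather than an identity map.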