2020 IEEE Winter Conference on Applications of Computer Vision (WACV)
DOI: 10.1109/wacv45572.2020.9093277

Multi-Level Representation Learning for Deep Subspace Clustering

Abstract: This paper proposes a novel deep subspace clustering approach which uses convolutional autoencoders to transform input images into new representations lying on a union of linear subspaces. The first contribution of our work is to insert multiple fully-connected linear layers between the encoder layers and their corresponding decoder layers to promote learning more favorable representations for subspace clustering. These connection layers facilitate the feature learning procedure by combining low-level and high-level …
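The abstract describes the architecture only at a high level. As a reading aid, here is a toy PyTorch sketch of that idea; the layer widths, the 32x32 input, the class name MultiLevelDSCNet, and the loss weighting are illustrative assumptions of ours, not the authors' released code. Each encoder level's features pass through a bias-free fully-connected layer, whose n x n weight acts as a per-level self-expressive coefficient matrix, before meeting the matching decoder level.

```python
# Toy sketch, not the authors' code: sizes and weights are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiLevelDSCNet(nn.Module):
    def __init__(self, n_samples):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(1, 16, 3, 2, 1), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv2d(16, 32, 3, 2, 1), nn.ReLU())
        # One linear "connection" layer per level; its weight is the
        # self-expressive coefficient matrix C_i over the whole batch.
        self.se1 = nn.Linear(n_samples, n_samples, bias=False)
        self.se2 = nn.Linear(n_samples, n_samples, bias=False)
        self.dec2 = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 3, 2, 1, output_padding=1), nn.ReLU())
        self.dec1 = nn.ConvTranspose2d(16, 1, 3, 2, 1, output_padding=1)

    def express(self, z, se):
        # Self-expression: rebuild each sample's features as a linear
        # combination of all n samples' features, i.e. C @ flatten(z).
        f = z.flatten(1)                        # (n, d)
        f_se = se(f.t()).t()                    # (n, d) = C @ f
        return f_se, f

    def forward(self, x):                       # x: (n, 1, 32, 32)
        z1 = self.enc1(x)                       # (n, 16, 16, 16)
        z2 = self.enc2(z1)                      # (n, 32, 8, 8)
        f1_se, f1 = self.express(z1, self.se1)
        f2_se, f2 = self.express(z2, self.se2)
        h = self.dec2(f2_se.reshape(z2.shape))          # high-level path
        out = self.dec1(h + f1_se.reshape(z1.shape))    # add low-level path
        # Self-expression losses plus a Frobenius penalty on each C_i.
        se_loss = (F.mse_loss(f1_se, f1) + F.mse_loss(f2_se, f2)
                   + self.se1.weight.pow(2).sum() + self.se2.weight.pow(2).sum())
        return out, se_loss

net = MultiLevelDSCNet(n_samples=64)
x = torch.randn(64, 1, 32, 32)
out, se_loss = net(x)
loss = F.mse_loss(out, x) + se_loss   # relative weights are placeholders
loss.backward()
```

Clustering would then proceed, as in other deep subspace clustering work, by building an affinity matrix from the learned weights of se1 and se2 and running spectral clustering on it.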

Cited by 31 publications (15 citation statements)
References 44 publications (76 reference statements)
“…In future work, we will extend our model to deep multi-view subspace clustering. We assume that there is consistent information among different views of multi-view data and that each view has view-specific information [21,24,45]. We impose a separation strategy on multi-view data, so that each view shares consistent structural information on the premise of maintaining its view-specific information.…”
Section: Discussion
confidence: 99%
“…According to Equation (2), the regularization term ‖C‖_κ means that the sum of the k smallest eigenvalues of L_C needs to be minimized, which is difficult to solve. However, according to the basic properties of the eigenvalues [20,45], the regularization term can be rewritten as:…”
Section: Model Optimization
confidence: 99%
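The rewriting this excerpt truncates is, we assume, the standard Ky Fan identity for symmetric matrices (inferred from the "sum of the k smallest eigenvalues" phrasing; the citing paper's exact form is cut off above). With λ_1(L_C) ≤ … ≤ λ_n(L_C) the eigenvalues of the Laplacian L_C:

```latex
\sum_{i=1}^{k} \lambda_i(L_C)
  \;=\; \min_{F \in \mathbb{R}^{n \times k},\; F^{\top} F = I}
        \operatorname{Tr}\!\left(F^{\top} L_C F\right)
```

This replaces the non-smooth eigenvalue sum with a trace minimization over an orthonormal F, which can be alternated with the updates of the coefficient matrix C.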
“…Our method differs significantly from [34] in that our proposed self-representation model is designed to minimize the difference from the original data points by combining the self-expressive property of multiple subsets. Lastly, in this paper, we limit our discussion to the non-deep learning approaches that are more mathematically straightforward to explain and rely less on parameter tuning, and not to mention that deep learning approaches [35], [36] usually have high computational and memory complexity.…”
Section: Related Work
confidence: 99%
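For context on the self-expressive property this excerpt leans on, below is a minimal non-deep sketch in the least-squares style: write each sample as a linear combination of the other samples, then spectrally cluster the coefficient magnitudes. The ridge parameter lam and the function names are illustrative assumptions of ours, not the cited paper's multi-subset model.

```python
# Minimal least-squares-style self-expressiveness sketch; `lam` is a
# placeholder regularization weight, not a value from the cited paper.
import numpy as np
from sklearn.cluster import SpectralClustering

def self_expressive_coeffs(X, lam=0.1):
    # X: (d, n) data matrix, columns are samples.
    d, n = X.shape
    C = np.zeros((n, n))
    for j in range(n):
        idx = np.arange(n) != j          # enforce diag(C) = 0: exclude x_j
        A = X[:, idx]
        # ridge least squares: c_j = argmin ||x_j - A c||^2 + lam ||c||^2
        c = np.linalg.solve(A.T @ A + lam * np.eye(n - 1), A.T @ X[:, j])
        C[idx, j] = c
    return C

def subspace_cluster(X, n_clusters, lam=0.1):
    C = self_expressive_coeffs(X, lam)
    W = np.abs(C) + np.abs(C).T          # symmetric affinity from coefficients
    return SpectralClustering(n_clusters=n_clusters,
                              affinity="precomputed").fit_predict(W)
```

Deep approaches swap the raw columns of X for learned autoencoder features but keep the same n x n coefficient matrix, which is one source of the computational and memory cost the excerpt mentions.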
“…The existing subspace clustering methods can be divided into five categories: statistical methods [2], iterative methods [3], algebraic algorithms [4], deep learning-based methods [5], [6] and spectral clustering-based methods [7]- [9]. Statistical methods require prior knowledge of the number and dimensions of subspaces.…”
Section: Introduction
confidence: 99%