2019
DOI: 10.1109/tip.2019.2916740

Essential Tensor Learning for Multi-View Spectral Clustering

Abstract: Multi-view clustering has attracted much attention recently; it aims to exploit multi-view information to improve clustering performance. However, most recent work focuses on self-representation-based subspace clustering, which has high computational complexity. In this paper, we focus on the Markov chain based spectral clustering method and propose a novel essential tensor learning method to explore the high-order correlations of the multi-view representation. We first construct a tensor base…
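As a rough illustration of the Markov chain construction the abstract describes, the sketch below builds a row-stochastic transition probability matrix for each view and stacks the results into a third-order tensor. This is a minimal sketch, not the paper's implementation: the Gaussian-kernel similarity, the `sigma` parameter, and all function names are illustrative assumptions.

```python
# Sketch: per-view transition probability matrices stacked into a
# 3rd-order tensor, in the spirit of Markov-chain-based multi-view
# spectral clustering. Names and the kernel choice are illustrative.
import numpy as np

def transition_matrix(X, sigma=1.0):
    """Row-stochastic transition matrix P = D^{-1} W for one view."""
    # Pairwise squared Euclidean distances between samples (rows of X).
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    W = np.exp(-np.maximum(d2, 0.0) / (2.0 * sigma**2))  # Gaussian kernel
    np.fill_diagonal(W, 0.0)
    D_inv = 1.0 / np.maximum(W.sum(axis=1, keepdims=True), 1e-12)
    return D_inv * W  # each row sums to 1

def build_transition_tensor(views, sigma=1.0):
    """Stack V per-view n-by-n transition matrices into an n x n x V tensor."""
    return np.stack([transition_matrix(X, sigma) for X in views], axis=2)

# Toy usage: two views of the same 5 samples, different feature dimensions.
views = [np.random.rand(5, 3), np.random.rand(5, 4)]
T = build_transition_tensor(views)  # shape (5, 5, 2)
```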

Cited by 258 publications (110 citation statements)
References 44 publications
“…Motivated by them, to reduce the effects of noise, the low-rank constraint is chosen in our method. On the basis of tensors, our work is close to studies 31, 35, and 36. However, the tensor nuclear norm used in our model is different from theirs.…”
Section: Related Work
Confidence: 69%
“…By stacking the subspace representation matrices of different views into a tensor and then rotating it, Xie et al. 35 refined the view-specific subspaces and explored the high-order correlations of multiview data. On the basis of the multiview transition probability matrices of the Markov chain, Wu et al. 36 proposed an essential tensor learning method for multiview clustering. Zhang et al. 37 explored high-order correlations of multiview data by regarding the subspace representation matrices as a low-rank tensor.…”
Section: Related Work
Confidence: 99%
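A minimal sketch of the stack-and-rotate step this excerpt describes: V view-specific n × n matrices are stacked into an n × n × V tensor and rotated to n × V × n, so that low-rank structure along the third mode couples the views rather than the samples alone. The axis convention is an assumption; implementations differ in the exact permutation.

```python
# Sketch of the stack-and-rotate step used by t-SVD-MSC-style methods.
import numpy as np

n, V = 6, 3
Z_views = [np.random.rand(n, n) for _ in range(V)]  # per-view representations

T = np.stack(Z_views, axis=2)       # stack: n x n x V
T_rot = np.transpose(T, (0, 2, 1))  # rotate: n x V x n
# After rotation, each frontal slice T_rot[:, :, k] (shape n x V) holds
# the k-th column of every view's matrix, so a low-rank prior along the
# 3rd mode mixes information across views.
```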
“…To handle this disadvantage, motivated by the t-SVD, which is an effective convex relaxation, Xie et al. (Yuan et al. 2018) proposed t-SVD based multi-view subspace clustering (t-SVD-MSC). Wu et al. (Wu, Lin, and Zha 2019) employed the transition probability matrices corresponding to different views as the tensor input and developed a new method for multi-view clustering (ETLMSC), which is an extension of robust multi-view spectral clustering (RMSC) (Xia et al. 2014). Although the aforementioned tensor nuclear norm based multi-view subspace methods have achieved impressive results, all of them leverage the soft-thresholding function to shrink each singular value with the same parameter.…”
Section: Multi-view Subspace Clustering
Confidence: 99%
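The uniform shrinkage this excerpt criticizes can be made concrete with a sketch of tensor singular value thresholding, the proximal step used by t-SVD-based tensor nuclear norm methods: every singular value of every frontal slice in the Fourier domain is shrunk by the same threshold tau. This is a generic sketch (the function name and the normalization convention are assumptions), not any one paper's code; a weighted variant would replace tau with a per-singular-value weight.

```python
# Sketch: tensor singular value thresholding (prox of the t-SVD tensor
# nuclear norm) with a single threshold tau for all singular values.
import numpy as np

def t_svt(T, tau):
    """Shrink singular values of each frontal slice in the Fourier domain."""
    Tf = np.fft.fft(T, axis=2)                 # FFT along the 3rd mode
    out = np.empty_like(Tf)
    for k in range(T.shape[2]):
        U, s, Vh = np.linalg.svd(Tf[:, :, k], full_matrices=False)
        s_shrunk = np.maximum(s - tau, 0.0)    # same tau everywhere
        out[:, :, k] = (U * s_shrunk) @ Vh
    return np.real(np.fft.ifft(out, axis=2))   # back to the original domain

X = np.random.rand(5, 5, 3)
X_lowrank = t_svt(X, tau=0.5)
```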
“…Inspired by the impressive results of LRR for subspace clustering, Zhang et al. (Zhang et al. 2015) adaptively learned a graph shared by different views by minimizing the nuclear norm of tensor-unfolding matrices constructed from the affinity matrices, and developed the low-rank tensor constrained multi-view subspace clustering (LT-MSC) method. Compared with the aforementioned nuclear norm, the tensor nuclear norm based on the t-SVD has been proven to be an effective convex relaxation of the ℓ1-norm of the tensor multi-rank (Zhang et al. 2014) and has achieved impressive performance on image denoising, video completion, and multiview subspace clustering (Lu et al. 2016; Yuan et al. 2018; Wu, Lin, and Zha 2019; Hu et al. 2017).…”
Section: Introduction
Confidence: 99%
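To make the contrast between the two norms concrete, here is a hedged sketch computing both the matrix nuclear norm of a mode unfolding (the LT-MSC-style relaxation) and the t-SVD tensor nuclear norm (sum of frontal-slice singular values in the Fourier domain). Papers differ by a normalization constant (some divide by the third dimension), so treat the values as illustrative.

```python
# Sketch: matrix nuclear norm of an unfolding vs. t-SVD tensor nuclear norm.
import numpy as np

def unfolding_nuclear_norm(T, mode=0):
    """Nuclear norm of the mode-`mode` matrix unfolding of T."""
    M = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
    return np.linalg.svd(M, compute_uv=False).sum()

def tsvd_nuclear_norm(T):
    """t-SVD tensor nuclear norm (unnormalized convention)."""
    Tf = np.fft.fft(T, axis=2)
    return sum(np.linalg.svd(Tf[:, :, k], compute_uv=False).sum()
               for k in range(T.shape[2]))

T = np.random.rand(4, 4, 3)
print(unfolding_nuclear_norm(T, 0), tsvd_nuclear_norm(T))
```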
“…The constrained optimization problem of a relaxation of RMSC can be written as:

$$\min_{\hat{P},\, E^{(i)}} \ \|\hat{P}\|_{*} + \lambda \sum_{i} \|E^{(i)}\|_{1} \quad \text{s.t.} \quad P^{(i)} = \hat{P} + E^{(i)}, \quad \hat{P} \geq 0, \quad \hat{P}\mathbf{1} = \mathbf{1},$$

where $\mathbf{1}$ denotes a column vector with all elements equal to 1, the $\ell_1$-norm regularization term encourages sparsity in $E^{(i)}$, $\lambda$ is a non-negative tradeoff parameter, $P^{(i)}$ is the graph for the $i$-th view, $\hat{P}$ is a low-rank shared graph matrix, and $E^{(i)}$ is the error matrix for the $i$-th view. The constraints ensure that $\hat{P}$ has the desired probability property, i.e., the optimal $\hat{P}_{ij}$ can be interpreted as the probability that the $i$-th and $j$-th samples are connected as neighboring nodes in the graph [46, 47].…”
Section: Related Work
Confidence: 99%
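As a toy illustration of the low-rank-plus-sparse split in the relaxed RMSC objective above, the sketch below alternates the two proximal steps: singular value thresholding for the shared low-rank graph and entrywise soft-thresholding for the per-view sparse errors. It deliberately ignores the probability-simplex constraints (the actual method handles them inside an ADMM/augmented-Lagrangian solver), so it conveys intuition only; all names are illustrative.

```python
# Toy sketch of the low-rank + sparse split in the relaxed RMSC objective.
# Not the paper's solver: simplex constraints on P_hat are omitted here.
import numpy as np

def svt(M, tau):
    """Singular value thresholding: prox of the nuclear norm."""
    U, s, Vh = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vh

def soft(M, tau):
    """Entrywise soft-thresholding: prox of the l1 norm."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def rmsc_sketch(P_views, lam=0.1, tau=0.1, iters=50):
    """Alternate P_hat (low-rank) and E_i (sparse) under P_i = P_hat + E_i."""
    P_hat = np.mean(P_views, axis=0)
    for _ in range(iters):
        E = [soft(P - P_hat, lam * tau) for P in P_views]        # sparse errors
        P_hat = svt(np.mean([P - Ei for P, Ei in zip(P_views, E)],
                            axis=0), tau)                        # shared graph
    return P_hat, E

P_views = [np.random.rand(5, 5) for _ in range(3)]
P_hat, E = rmsc_sketch(P_views)
```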