2015
DOI: 10.1016/j.neucom.2015.01.017

Multi-view clustering via pairwise sparse subspace representation


Cited by 111 publications (38 citation statements)
References 26 publications
“…From Eq. (27) we see that each cluster may have different centers and that the points in each cluster can have different dispersions in different feature groups. We can change the input standard deviation matrix B to generate subspace clusters in feature groups.…”
Section: Synthetic Data Generation
confidence: 85%
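As a rough illustration of this kind of generator (a sketch only, not the citing paper's Eq. (27)): the hypothetical matrix B below has one row per cluster and one column per feature group, and B[k, g] sets the standard deviation of cluster k within feature group g, so changing B changes where each cluster is tight or diffuse.

```python
import numpy as np

def generate_feature_group_clusters(n_per_cluster, centers, B, group_sizes, seed=0):
    """Draw Gaussian clusters whose spread differs across feature groups.

    centers     : (K, D) array, one center per cluster, D = sum(group_sizes).
    B           : (K, G) array, B[k, g] is the std. dev. of cluster k in group g.
    group_sizes : list of G ints, number of features in each feature group.
    """
    rng = np.random.default_rng(seed)
    K, D = centers.shape
    X, y = [], []
    for k in range(K):
        # Expand the per-group std. devs. of cluster k to per-feature std. devs.
        stds = np.concatenate([np.full(sz, B[k, g]) for g, sz in enumerate(group_sizes)])
        X.append(centers[k] + rng.normal(scale=stds, size=(n_per_cluster, D)))
        y.append(np.full(n_per_cluster, k))
    return np.vstack(X), np.concatenate(y)

# Example: 3 clusters over two feature groups of 2 features each;
# cluster 0 is tight in group 0 but diffuse in group 1, and vice versa for cluster 1.
centers = np.array([[0, 0, 0, 0], [5, 5, 5, 5], [-5, 5, 5, -5]], dtype=float)
B = np.array([[0.2, 2.0], [2.0, 0.2], [1.0, 1.0]])
X, y = generate_feature_group_clusters(100, centers, B, group_sizes=[2, 2])
```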
“…It is also worth pointing out that FG-k-means and AFG-k-means are related to multi-view clustering [26,27]. The feature groups are similar to the views in multi-view clustering.…”
Section: Discussion
confidence: 98%
“…Recently, Shao et al. [37] proposed an online multi-view clustering method for incomplete views by imposing lasso regularization on the representation of each view. Further references can be found in [16,33,40,50]. These methods are useful for nonnegative multi-view data analysis; however, they are not suited to the noisy and incomplete views that are often encountered in real applications.…”
Section: Multi-view
confidence: 99%
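As a loose illustration of lasso-regularized per-view representations (a sketch only, not Shao et al.'s online algorithm; the basis D_v and all sizes are hypothetical), the code below computes a sparse code for a single view with plain ISTA; in a multi-view setting the same step would be run for each view and the resulting codes fused.

```python
import numpy as np

def soft_threshold(Z, t):
    """Elementwise soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(Z) * np.maximum(np.abs(Z) - t, 0.0)

def sparse_view_code(X_v, D_v, lam=0.1, n_iter=300):
    """ISTA for  min_H  0.5 * ||X_v - H @ D_v||_F^2 + lam * ||H||_1.

    X_v : (n_samples, n_features) data matrix of one view.
    D_v : (n_components, n_features) fixed basis for that view (assumed given).
    """
    H = np.zeros((X_v.shape[0], D_v.shape[0]))
    step = 1.0 / (np.linalg.norm(D_v, 2) ** 2 + 1e-12)  # 1 / Lipschitz constant
    for _ in range(n_iter):
        grad = (H @ D_v - X_v) @ D_v.T      # gradient of the quadratic term
        H = soft_threshold(H - step * grad, step * lam)
    return H

# Toy usage: one 50-sample view with 20 features and a random 10-atom basis.
rng = np.random.default_rng(0)
X_v = rng.normal(size=(50, 20))
D_v = rng.normal(size=(10, 20))
H_v = sparse_view_code(X_v, D_v, lam=0.5)   # sparse per-view representation
```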
“…To address this problem, subspace learning based methods [23,27] are the most studied approaches. Classical subspace learning algorithms such as the Canonical Correlation Analysis (CCA) [10] and the Partial Least Squares (PLS) [19] have been adopted for learning a common representation for heterogeneous modalities.…”
Section: Introduction
confidence: 99%
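For context, a minimal example of using classical CCA to project two heterogeneous views into a shared space, here with scikit-learn on synthetic two-view data (the views, dimensions, and noise level are placeholders):

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n = 200
latent = rng.normal(size=(n, 2))                 # shared signal behind both views

# Two views of the same samples living in different feature spaces.
X = latent @ rng.normal(size=(2, 10)) + 0.1 * rng.normal(size=(n, 10))
Y = latent @ rng.normal(size=(2, 6)) + 0.1 * rng.normal(size=(n, 6))

cca = CCA(n_components=2)
X_c, Y_c = cca.fit_transform(X, Y)               # common 2-D representation per view

# The paired canonical components should be strongly correlated.
for i in range(2):
    print(np.corrcoef(X_c[:, i], Y_c[:, i])[0, 1])
```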