Weighted Multi-view Clustering with Feature Selection
Xu, Wang and Lai (2016)
DOI: 10.1016/j.patcog.2015.12.007

Cited by 173 publications (66 citation statements)
References 21 publications
“…We compare the performance of our method RRMVFS with several related methods: Single, CAT, RFS [Nie, Huang, Cai et al (2011)], SSMVFS, SMML, DSML-FS [Gui, Rao, Sun et al (2014)], and WMCFS [Xu, Wang and Lai (2016)]. Single refers to using single-view features to obtain the best classification performance.…”
Section: Methods (mentioning)
confidence: 99%
“…The exponential parameter ρ in the WMCFS method is set to 5 following Xu et al [Xu, Wang and Lai (2016)]. The publicly available data sets, including the image data set NUS-WIDE-OBJECT (NUS), the handwritten-numerals data set mfeat, and the Internet-pages data set Ads, are employed in the experiments.…”
Section: Methods (mentioning)
confidence: 99%
“…It is known that feature weights in one view lie in the interval [0,1], so the more influential a feature is, the greater its weight should be. Feature-weighted techniques have been used in multi-view k-means clustering algorithms, such as simultaneous weighting on views and features (SWVF) [12] and weighted multi-view clustering with feature selection (WMCFS) [14]. Although these feature-weighted clustering algorithms may improve the performance of k-means on multi-view data, they do not consider feature reduction.…”
Section: Introduction (mentioning)
confidence: 99%
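To make the feature-weighting idea in the excerpt above concrete, here is a minimal, hypothetical sketch of weighted multi-view k-means in the spirit of SWVF/WMCFS, not a reproduction of either published algorithm: each view's contribution is scaled by a weight raised to an exponent rho (the ρ parameter mentioned in the excerpts above), feature weights are kept uniform for brevity, and the view weights are updated with the standard closed form for objectives of this type. All function and variable names are illustrative.

```python
import numpy as np

def weighted_mv_kmeans(views, k, rho=5.0, n_iter=20, seed=0):
    """Hypothetical sketch: views is a list of (n, d_v) arrays, rho > 1."""
    rng = np.random.default_rng(seed)
    n = views[0].shape[0]
    labels = rng.integers(0, k, size=n)
    view_w = np.full(len(views), 1.0 / len(views))  # view weights, sum to 1
    # Feature weights held uniform here; WMCFS-style methods also learn these.
    feat_w = [np.full(X.shape[1], 1.0 / X.shape[1]) for X in views]

    for _ in range(n_iter):
        # Centroids per view from the current assignment (random row if a
        # cluster goes empty).
        cents = [np.stack([X[labels == c].mean(0) if np.any(labels == c)
                           else X[rng.integers(0, n)] for c in range(k)])
                 for X in views]
        # Assign each point to the cluster minimizing the view- and
        # feature-weighted squared distance summed over all views.
        dist = np.zeros((n, k))
        for v, X in enumerate(views):
            d2 = (X[:, None, :] - cents[v][None, :, :]) ** 2  # (n, k, d_v)
            dist += (view_w[v] ** rho) * (d2 * feat_w[v]).sum(-1)
        labels = dist.argmin(1)
        # Closed-form view-weight update for sum_v w_v^rho * D_v with
        # sum_v w_v = 1: views with smaller within-cluster error get more weight.
        err = np.array([((X - cents[v][labels]) ** 2 * feat_w[v]).sum()
                        for v, X in enumerate(views)])
        view_w = (err + 1e-12) ** (1.0 / (1.0 - rho))
        view_w /= view_w.sum()
    return labels, view_w

# Example usage on two synthetic views of the same 100 samples.
rng = np.random.default_rng(1)
views = [rng.normal(size=(100, 5)), rng.normal(size=(100, 8))]
labels, view_w = weighted_mv_kmeans(views, k=3)
```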
“…Next, a consensus function combines the ensemble into a consolidated solution or consensus partition, which has greater overall accuracy [39,21,19,54,56,42,10,18]. Given the ill-posed nature of clustering, the accuracy is typically measured by comparing the final solution with a known reference partition, which is generally based on the class labels associated with the data set [20,50,30,41,46]. Although this reference partition might not be the only valid structure for the data, many studies have tried to determine how ensembles should be built, and which characteristics they should have to obtain high accuracy.…”
Section: Introduction (mentioning)
confidence: 99%
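As a concrete illustration of the ensemble-then-consensus pipeline this excerpt describes, the sketch below accumulates evidence from several k-means runs into a co-association matrix, extracts a consensus partition with average-linkage hierarchical clustering, and scores it against a known reference partition with the adjusted Rand index. This is one common consensus construction (evidence accumulation), not the specific method of any paper cited above.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import adjusted_rand_score

# Synthetic data with a known reference partition (the class labels).
X, y_ref = make_blobs(n_samples=300, centers=3, random_state=0)

# Build the ensemble: repeated k-means runs with varying k and seeds.
ensemble = [KMeans(n_clusters=k, n_init=1, random_state=s).fit_predict(X)
            for s, k in enumerate([2, 3, 3, 4, 4, 5])]

# Co-association matrix: fraction of runs in which two points co-cluster.
n = X.shape[0]
coassoc = np.zeros((n, n))
for labels in ensemble:
    coassoc += labels[:, None] == labels[None, :]
coassoc /= len(ensemble)

# Consensus partition: hierarchical clustering on 1 - co-association.
np.fill_diagonal(coassoc, 1.0)
Z = linkage(squareform(1.0 - coassoc, checks=False), method="average")
consensus = fcluster(Z, t=3, criterion="maxclust")

# Accuracy measured against the reference partition, as in the excerpt.
print("ARI vs reference:", adjusted_rand_score(y_ref, consensus))
```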