2018 International Joint Conference on Neural Networks (IJCNN)
DOI: 10.1109/ijcnn.2018.8489758
Optimizing exchange confidence during collaborative clustering

Cited by 5 publications (6 citation statements)
References 21 publications
“…This is achieved using Karush-Kuhn-Tucker (KKT) optimization, and the result gives insight into the importance of diversity for achieving positive collaborations. This first contribution is an extension of two previous conference papers [14,15].…”
Section: Introduction (mentioning)
confidence: 76%
“…The two results in Equations (15) and (21) form a continuity, as both encourage, to different degrees, collaboration with algorithms that have similar partitions and thus a low diversity. When analyzing the expression of the parameter-free β*_ji from Equation (23), we can see that the interpretation is the same: the more similar the partitions, the stronger the collaboration weights.…”
Section: Results Interpretation (mentioning)
confidence: 99%
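To make the interpretation above concrete, here is a minimal Python sketch of diversity-driven collaboration weights: each algorithm weights its peers by the similarity of their partitions and then normalizes those weights. The adjusted Rand index and the row normalization are assumptions made for illustration; this is not the paper's actual parameter-free expression for β*_ji in Equation (23).

```python
# Illustrative sketch only: it mirrors the interpretation "the more similar the
# partitions, the stronger the collaboration weights". The similarity measure
# (adjusted Rand index) and the row normalization are assumptions, not the
# cited parameter-free expression for beta*_ji (Equation 23).
import numpy as np
from sklearn.metrics import adjusted_rand_score

def collaboration_weights(partitions):
    """Row-normalized weight matrix from pairwise partition similarity."""
    n = len(partitions)
    beta = np.zeros((n, n))
    for j in range(n):
        for i in range(n):
            if i != j:
                # Similar partitions (low diversity) -> larger weight.
                beta[j, i] = max(adjusted_rand_score(partitions[j], partitions[i]), 0.0)
        row_sum = beta[j].sum()
        if row_sum > 0:
            beta[j] /= row_sum  # each algorithm's incoming weights sum to 1
    return beta

# Example: three clustering results over the same 6 samples.
p1 = [0, 0, 1, 1, 2, 2]
p2 = [0, 0, 1, 1, 2, 2]   # identical to p1 -> strongest link
p3 = [0, 1, 0, 1, 0, 1]   # very different -> weak link
print(collaboration_weights([p1, p2, p3]))
```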
“…Given the use of low-resolution datasets in subspace clustering, employing larger convolutional kernels or deeper networks often results in the loss of fine-grained details, while smaller kernels may hinder the extraction of deep-level information. Therefore, we set three-layer encoders with [10, 20, 30] channels and adopt [4 × 4, 3 × 3, 4 × 4] kernel sizes to extract the representations of each view. We also employ the Mish activation function to ensure smoother gradients during training.…”
Section: Multi-view Convolutional Encoders (mentioning)
confidence: 99%
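A minimal PyTorch sketch of one such per-view encoder, following the cited description (three layers with [10, 20, 30] channels, [4 × 4, 3 × 3, 4 × 4] kernels, Mish activations). The strides, padding, and single-channel input are assumptions not stated in the excerpt.

```python
# Sketch of a per-view convolutional encoder matching the cited description:
# three layers with [10, 20, 30] channels, [4x4, 3x3, 4x4] kernels, Mish activations.
# Strides, padding and the single-channel input are assumptions.
import torch
import torch.nn as nn

class ViewEncoder(nn.Module):
    def __init__(self, in_channels: int = 1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 10, kernel_size=4, stride=2, padding=1),
            nn.Mish(),
            nn.Conv2d(10, 20, kernel_size=3, stride=2, padding=1),
            nn.Mish(),
            nn.Conv2d(20, 30, kernel_size=4, stride=2, padding=1),
            nn.Mish(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

# Example: encode a batch of 28x28 single-channel views.
z = ViewEncoder()(torch.randn(8, 1, 28, 28))
print(z.shape)  # torch.Size([8, 30, 3, 3]) with the assumed strides/padding
```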
“…Recently, many multi-view clustering methods have been developed. Existing works on multi-view subspace clustering can be divided into different categories, i.e., non-negative matrix factorization (NMF) frameworks [6-9], collaborative clustering methods [10-12], co-training methods [13-15], self-expressive-based methods [16,17], and deep-learning-based methods [18,19]. NMF-based methods aim to obtain the partitioning of the data through a low-rank decomposition of the data matrix, and they have proven effective when the subspaces of the data points are independent of each other.…”
Section: Introduction (mentioning)
confidence: 99%
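As a rough illustration of the NMF-based idea mentioned above (partition the data via a low-rank decomposition of the data matrix), the following single-view sketch factors a non-negative matrix and assigns each sample to its dominant factor. It is a generic example, not any of the cited multi-view methods.

```python
# Generic NMF-clustering sketch: factor a non-negative data matrix X into W * H
# and read cluster assignments from the coefficient matrix W. Toy data and the
# argmax assignment rule are assumptions for illustration only.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
X = np.abs(rng.normal(size=(100, 20)))          # toy non-negative data, 100 samples

model = NMF(n_components=3, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(X)                      # (100, 3) sample-by-factor coefficients
H = model.components_                           # (3, 20) factor-by-feature basis

labels = W.argmax(axis=1)                       # assign each sample to its dominant factor
print(labels[:10])
```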
“…The weighting method presented in this paper is called the Masked Weighting Method. The goal of this method is to (1) combine the information from different views, (2) reduce the weight of views whose information could hinder the cooperative reconstruction process [3], [4], and (3) reduce the impact of missing data during the unsupervised learning process [5].…”
Section: Introduction (mentioning)
confidence: 99%
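A hedged sketch of the general idea behind those three goals: fuse per-view representations with per-view weights and binary masks so that missing views contribute nothing. The function name, weighting scheme, and normalization here are hypothetical illustrations, not the cited Masked Weighting Method itself.

```python
# Hypothetical masked, weighted fusion of per-view representations:
# (1) fuse views, (2) down-weight less reliable views, (3) skip missing views via masks.
# All names and the normalization are illustrative assumptions.
import numpy as np

def masked_weighted_fusion(views, view_weights, masks):
    """
    views        : list of (n_samples, d) arrays, one per view
    view_weights : list of non-negative scalars, one per view
    masks        : list of (n_samples,) arrays, 1 if the view is observed for a sample
    Returns a fused (n_samples, d) representation.
    """
    views = [np.asarray(v, dtype=float) for v in views]
    n, d = views[0].shape
    fused = np.zeros((n, d))
    total = np.zeros((n, 1))
    for v, w, m in zip(views, view_weights, masks):
        m = np.asarray(m, dtype=float).reshape(-1, 1)
        fused += w * m * v                      # missing entries contribute nothing
        total += w * m
    return fused / np.maximum(total, 1e-12)     # normalize by the observed weight mass

# Example: two views over 4 samples; the second view is missing for sample 3.
v1 = np.ones((4, 2))
v2 = 2 * np.ones((4, 2))
print(masked_weighted_fusion([v1, v2], [0.7, 0.3], [[1, 1, 1, 1], [1, 1, 0, 1]]))
```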