Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence 2019
DOI: 10.24963/ijcai.2019/572

Multi-View Multiple Clustering

Abstract: Multiple clustering aims at exploring alternative clusterings to organize the data into meaningful groups from different perspectives. Existing multiple clustering algorithms are designed for single-view data. We assume that the individuality and commonality of multi-view data can be leveraged to generate high-quality and diverse clusterings. To this end, we propose a novel multi-view multiple clustering (MVMC) algorithm. MVMC first adapts multi-view self-representation learning to explore the individuality enc…
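
As a rough illustration of the multi-view self-representation idea named in the abstract, the sketch below decomposes each view's self-representation into a shared (commonality) matrix S and a view-specific (individuality) matrix P_v, so that each view is approximately reconstructed as X_v @ (S + P_v). This is not the authors' implementation: the plain least-squares objective, the gradient solver, the regularization, and all names are illustrative assumptions.

```python
import numpy as np

def multiview_self_representation(views, lam=0.1, lr=1e-3, n_iter=500):
    """views: list of d_v x n data matrices over the same n samples (columns).
    Returns a shared matrix S and per-view matrices P_v with X_v ~ X_v @ (S + P_v).
    A minimal sketch; real solvers typically use ADMM or closed-form updates."""
    n = views[0].shape[1]
    S = np.zeros((n, n))                          # commonality representation
    P_views = [np.zeros((n, n)) for _ in views]   # individuality representations
    for _ in range(n_iter):
        grad_S = np.zeros_like(S)
        for X, P in zip(views, P_views):
            R = X @ (S + P) - X                   # reconstruction residual
            G = X.T @ R                           # gradient of 0.5 * ||R||_F^2
            P -= lr * (G + lam * P)               # view-specific update
            grad_S += G
        S -= lr * (grad_S + lam * S)              # shared update
    return S, P_views
```

The step size here is purely illustrative and should be tuned to the data scale.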

Cited by 35 publications (40 citation statements) | References 4 publications
“…Despite this progress, the aforementioned active learning with MVML solutions still cannot account for objects with various instances, which provide additional hints on how to select the bag for query and reduce the query cost on complex bags. Furthermore, these active learning solutions still do not make good use of the shared and individual information of multiple data views, which is essential for effective multi-view data mining [29], [30], [31], [32], [33]. In this paper, we study active learning in a more general setting, where each object is represented with different feature views and includes diverse instances per view.…”
Section: B. Active Learning with MVML (mentioning)
confidence: 99%
“…DB(i) is used to quantify the diversity of the i-th bag by capturing its instance distributions in different views. Previous multi-view self-representation learning based solutions directly use $S$ and $\{P^v\}_{v=1}^{V}$ for follow-up learning tasks [29], [18], [32]. Here, to reduce the data fidelity when selecting bag-label pairs, we take $\bar{S} = \frac{1}{V}\sum_{v=1}^{V} X^v S$ and $\bar{P}^v = X^v P^v$ as the input features to leverage the commonality and individuality of bags across views for selecting bag-label pairs.…”
Section: Selecting Informative Bag-label Pair Across Views (mentioning)
confidence: 99%
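
A minimal sketch of the feature fusion described in this statement, assuming each view $X^v$ is an n x n representation (e.g., a kernel or similarity matrix over the same n bags) so that the average across views is well defined; the function and variable names are illustrative, not from the cited paper.

```python
import numpy as np

def fused_features(views, S, P_views):
    """views: list of n x n view matrices X_v; S and P_views: shared and
    view-specific self-representation matrices (all n x n)."""
    S_bar = sum(X @ S for X in views) / len(views)      # commonality feature
    P_bars = [X @ P for X, P in zip(views, P_views)]    # per-view individuality
    return S_bar, P_bars
```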
“…To find multiple clusterings on multi-view data, (Yao et al. 2019b) recently proposed a solution called multi-view multiple clustering (MVMC). MVMC extracts the individual and shared similarity matrices of multi-view data based on adapted self-representation learning (Luo et al. 2018), and then applies semi-nonnegative matrix factorization (Ding, Li, and Jordan 2010) to each combination of the individual and common similarity matrices to generate alternative clusterings, where quality is pursued through the commonality matrix and diversity through the individuality matrix.
Section: Factorization and Diversity Control (mentioning)
confidence: 99%
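
A hedged sketch of the pipeline described above, not the MVMC reference implementation: each view's individuality matrix is combined with the shared commonality matrix and the combination is factorized with semi-NMF (Ding, Li, and Jordan 2010) to yield one alternative clustering per view. The additive combination rule, the helper names (semi_nmf, alternative_clusterings), and the solver details are assumptions.

```python
import numpy as np

def semi_nmf(X, k, n_iter=200, eps=1e-9):
    """Semi-NMF: X ~ F @ G.T with G >= 0 and F unconstrained."""
    rng = np.random.default_rng(0)
    G = np.abs(rng.standard_normal((X.shape[1], k)))     # nonnegative init
    for _ in range(n_iter):
        F = X @ G @ np.linalg.pinv(G.T @ G)              # closed-form F step
        FtF = F.T @ F
        FtF_pos = (np.abs(FtF) + FtF) / 2
        FtF_neg = (np.abs(FtF) - FtF) / 2
        A = X.T @ F
        A_pos = (np.abs(A) + A) / 2
        A_neg = (np.abs(A) - A) / 2
        # multiplicative update keeps G nonnegative
        G *= np.sqrt((A_pos + G @ FtF_neg) / (A_neg + G @ FtF_pos + eps))
    return F, G

def alternative_clusterings(S_shared, P_views, k):
    """One clustering per view: factorize (commonality + individuality)."""
    labels = []
    for P_v in P_views:
        _, G = semi_nmf(S_shared + P_v, k)
        labels.append(G.argmax(axis=1))                  # hard cluster assignment
    return labels
```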
“…(ii) DMClusts introduces a balanced redundancy quantification term, which jointly considers the case that two samples are often nearby in the representational subspace per layer, and the reverse case that they are often faraway per layer, to comprehensively quantify the redundancy of multiple clusterings, whilst existing similar quantifications overlook the latter case. Extensive experiments on benchmark datasets show that DMClusts significantly outperforms other related competitive multiple clustering solutions (Yao et al. 2019b; Wang et al. 2019; Yang and Zhang 2017; Ye et al. 2016; Jain, Meka, and Dhillon 2008; Cui, Fern, and Dy 2007) and deep matrix factorization (Trigeorgis et al. 2017) in finding multiple clusterings with quality and diversity.…”
Section: Factorization and Diversity Control (mentioning)
confidence: 99%
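
An illustrative take on the balanced redundancy idea described in this statement, not DMClusts' exact term: it scores two clusterings as redundant when they agree both on sample pairs placed together and on pairs kept apart, so the "faraway" case is not overlooked. The function name and the simple normalization are assumptions.

```python
import numpy as np

def balanced_redundancy(labels_a, labels_b):
    """Pairwise redundancy between two clusterings; higher means more redundant."""
    same_a = labels_a[:, None] == labels_a[None, :]      # co-clustered in A
    same_b = labels_b[:, None] == labels_b[None, :]      # co-clustered in B
    together = np.mean(same_a & same_b)                  # nearby in both layers
    apart = np.mean(~same_a & ~same_b)                   # faraway in both layers
    return together + apart

# Identical clusterings score 1.0; e.g. [0,0,1,1] vs. [0,1,0,1] scores 0.5.
```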
“…In the past decade, multi-view clustering has become a hot research topic in data mining and machine learning, due to the rapid emergence of a great deal of multi-view data from different areas (Xu, Tao, and Xu 2013; Xu, Wang, and Lai 2016; Chao, Sun, and Bi 2017; Tao et al. 2018; Li et al. 2018; Huang, Chao, and Wang 2019; Xing et al. 2019; Wang et al. 2019; Yao et al. 2019; Huang, Wang, and Chao 2019b). In multi-view data, the same instance is represented by multiple views obtained from multiple sources or different feature subsets (Ji et al. 2019; Huang, Wang, and Chao 2019c).…”
Section: Introduction (mentioning)
confidence: 99%