2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2022
DOI: 10.1109/cvpr52688.2022.01558
Multi-level Feature Learning for Contrastive Multi-view Clustering

Cited by 95 publications (54 citation statements)
References 33 publications
“…In addition, we collect nine state-of-the-art multi-view clustering methods and compare them with the proposed IMVC. 1) AMGL [38] proposed a parameter-free view-weight learning scheme to distinguish the importance of different views; 2) CSMSC [39] explored the consistent subspace representation together with view-specific representations; 3) GMC [40] learned a global graph matrix with a specified number of connected components; 4) CGL [41] recovered a low-rank tensor space in which the spectral embedding of each view is optimized; 5) EOMSC-CA [42] proposed a unified framework that performs anchor selection and graph construction simultaneously; 6) MvD-SCN [43] utilized uniformity and diversity networks to learn consistent and view-specific representations; 7) DSRL [44] developed a deep sparse regularizer learning method for multi-view clustering; 8) DMSC-UDL [45] mined the consensus subspace representation by combining the diverse information of different views; 9) MFLVC [46] used the data features under different views as contrastive entities and performed contrastive learning to reinforce sample discrimination.…”
Section: B. Baselines and Evaluation Metrics, 1) Baseline Methods (mentioning)
confidence: 99%
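The MFLVC entry in the excerpt above treats the features of the same sample under different views as contrastive entities. Below is a minimal PyTorch sketch of such a cross-view, instance-level contrastive loss, where the same sample across two views forms the positive pair and other samples in the batch act as negatives; the function name, temperature value, and symmetric two-term form are illustrative assumptions, not the authors' exact formulation.

import torch
import torch.nn.functional as F

def cross_view_contrastive_loss(z1, z2, temperature=0.5):
    # z1, z2: (N, d) feature matrices for the same N samples under two views.
    # The same sample across views is the positive pair; all other samples in
    # the batch serve as negatives.
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature                    # (N, N) cosine similarities
    targets = torch.arange(z1.size(0), device=z1.device)  # positives on the diagonal
    # Symmetrize: contrast view 1 against view 2 and vice versa.
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))

With more than two views, a loss of this form can be summed over all view pairs.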
“…Especially in the field of computer vision, contrastive learning methods have produced excellent results [23]. For example, methods such as SimCLR and MoCo [17, 22, 24] minimise the InfoNCE loss function [25], which maximises a lower bound on mutual information. Because dealing with negative samples is inconvenient, later contrastive learning algorithms [26, 27] have successfully replaced the contrastive task with a prediction task that requires no negative samples.…”
Section: Related Work (mentioning)
confidence: 99%
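The last sentence of this excerpt refers to contrastive methods that drop negative samples in favour of a prediction task (e.g. BYOL and SimSiam). A minimal sketch of that kind of objective is given below in PyTorch, assuming p1, p2 are predictor outputs and z1, z2 are target representations; the function name and the negative-cosine, stop-gradient recipe follow the SimSiam style and are illustrative rather than the implementation of any specific cited method.

import torch.nn.functional as F

def negative_free_prediction_loss(p1, p2, z1, z2):
    # p_i: predictor output for view i; z_i: target representation for view i.
    # Each branch predicts the other view's target; targets are detached so
    # gradients flow only through the predictor branch (stop-gradient trick).
    def neg_cosine(p, z):
        p = F.normalize(p, dim=1)
        z = F.normalize(z.detach(), dim=1)
        return -(p * z).sum(dim=1).mean()
    return 0.5 * (neg_cosine(p1, z2) + neg_cosine(p2, z1))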
“…Almost all existing contrastive learning methods [17, 22, 24, 26, 27] are designed to handle single‐view data, exhaustively exploring various data augmentations to build different views/augmentations. To the best of our knowledge, this is the first time double contrastive learning has been applied to the IMC problem, addressing from a different perspective the challenge of making consistency learning and data recovery mutually reinforcing.…”
Section: Related Work (mentioning)
confidence: 99%
“…These would cause suboptimal performance, since they may learn meaningless modality-private noise from each modality, introducing additional noise into domain clustering. Yet this can be avoided by applying cross-modal contrastive learning, which learns the common semantics shared across all modalities while suppressing meaningless modality-private noise [33].…”
Section: Introduction (mentioning)
confidence: 99%