2023
DOI: 10.1016/j.patcog.2023.109764

Auto-attention mechanism for multi-view deep embedding clustering

Cited by 18 publications (1 citation statement)
References 34 publications
“…This oversight often results in the interrelations among different views being neglected. When processing multi-view data, current deep learning methods must run a separate neural network for each view, which is inefficient and consumes considerable computational resources. To address this, Diallo et al. [29] proposed an innovative Multi-view Deep Embedded Clustering (MDEC) model that employs a triple-fusion technique designed to reduce the errors incurred in learning the features of each view and in correlating data across views. None of the existing methods in attributed graph clustering recognize that the nonlinearity between two consecutive GCN layers is unnecessary for improving clustering performance, and may even impair the efficiency of the model.…”
Section: Combine the Attention Mechanism
confidence: 99%
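To make the fused-embedding idea behind MDEC concrete, below is a minimal PyTorch sketch of multi-view deep embedding clustering. It is illustrative only: the two-view setup, the mean fusion of per-view embeddings, the DEC-style Student-t soft assignment, and the names ViewEncoder and MultiViewDEC are assumptions for this sketch, not the paper's actual triple-fusion or auto-attention design.

```python
# Minimal sketch of multi-view deep embedding clustering (illustrative only).
# Assumptions not taken from the paper: two views, mean fusion of the
# per-view embeddings, and a DEC-style Student-t soft assignment.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ViewEncoder(nn.Module):
    """One encoder branch per view (the reconstruction decoder is omitted)."""
    def __init__(self, in_dim, emb_dim=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, emb_dim),
        )

    def forward(self, x):
        return self.net(x)

class MultiViewDEC(nn.Module):
    def __init__(self, view_dims, n_clusters, emb_dim=10):
        super().__init__()
        self.encoders = nn.ModuleList(ViewEncoder(d, emb_dim) for d in view_dims)
        # Learnable cluster centroids in the shared embedding space.
        self.centroids = nn.Parameter(torch.randn(n_clusters, emb_dim))

    def forward(self, views):
        # Encode each view, then fuse by simple averaging (an assumption;
        # the paper instead weights the views with an attention mechanism).
        z = torch.stack([enc(v) for enc, v in zip(self.encoders, views)]).mean(0)
        # DEC-style soft assignment: Student-t kernel between embeddings
        # and centroids, normalized over clusters.
        d2 = torch.cdist(z, self.centroids).pow(2)
        q = (1.0 + d2).reciprocal()
        return z, q / q.sum(dim=1, keepdim=True)

def target_distribution(q):
    """Sharpened target distribution P used in the DEC clustering KL loss."""
    w = q.pow(2) / q.sum(0)
    return w / w.sum(dim=1, keepdim=True)

# Usage: two views of the same 128 samples with different feature dimensions.
views = [torch.randn(128, 50), torch.randn(128, 80)]
model = MultiViewDEC(view_dims=[50, 80], n_clusters=5)
z, q = model(views)
loss = F.kl_div(q.log(), target_distribution(q).detach(), reduction="batchmean")
loss.backward()
```

In a fuller implementation, each branch would also carry a per-view reconstruction loss, and the averaging step would be replaced by a learned fusion of the view embeddings; the KL term shown is the standard DEC clustering objective, which encourages sharper, more confident cluster assignments in the shared space.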