2022
DOI: 10.1007/s00521-021-06723-y
Hierarchical attention network for attributed community detection of joint representation

Cited by 10 publications (3 citation statements)
References 33 publications
“…GAT: It is based on masked self-attentional layers. By assigning different weights to the nodes in a neighborhood, it avoids expensive matrix operations and helps extract features from surrounding nodes (Veličković et al., 2018; Zhao et al., 2022). Equation 6 indicates the significance between two nodes. GUCD: The encoder incorporates the MRFasGCN method into the autoencoder framework, utilizing the network topology and node semantics at the same time.…”
Section: Methods
Mentioning confidence: 99%
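The "significance between two nodes" that the quoted Equation 6 refers to is presumably the standard GAT attention coefficient from Veličković et al. (2018); a sketch of that formulation, with shared weight matrix W and attention vector a:

```latex
% Attention logit between nodes i and j:
e_{ij} = \mathrm{LeakyReLU}\left( \mathbf{a}^{\top} \left[ \mathbf{W}\mathbf{h}_i \,\Vert\, \mathbf{W}\mathbf{h}_j \right] \right)

% Normalized over the neighborhood \mathcal{N}_i via a masked softmax:
\alpha_{ij} = \frac{\exp(e_{ij})}{\sum_{k \in \mathcal{N}_i} \exp(e_{ik})}
```

The mask restricts the softmax to each node's neighborhood, which is why no dense N-by-N matrix inversion or eigendecomposition is needed.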
“…The goal is to learn each node's underlying representation from its neighbors. On top of the GCN layer's basic aggregation function, the attention coefficients in the graph attention network (GAT) layer allow each node to be given a unique weight (Veličković et al., 2018; Zhao et al., 2022). Detecting overlapping communities in complex networks is another significant aspect of community detection.…”
Section: Deep Learning-based Community Detection
Mentioning confidence: 99%
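As an illustration of how these attention coefficients give each neighbor a unique weight inside the aggregation step, here is a minimal single-head GAT layer in NumPy; the function name, toy shapes, and dense-matrix formulation are illustrative assumptions, not code from the cited papers:

```python
import numpy as np

def leaky_relu(x, alpha=0.2):
    return np.where(x > 0, x, alpha * x)

def gat_layer(H, A, W, a):
    """One single-head graph attention layer (after Veličković et al., 2018).

    H: (N, F) node features; A: (N, N) 0/1 adjacency with self-loops;
    W: (F, F') shared linear transform; a: (2*F',) attention vector.
    Returns (N, F') updated node features.
    """
    Wh = H @ W                                   # (N, F') transformed features
    Fp = Wh.shape[1]
    # Attention logits e_ij = LeakyReLU(a^T [Wh_i || Wh_j]), computed by
    # splitting a into source and destination halves.
    src = Wh @ a[:Fp]                            # contribution of node i
    dst = Wh @ a[Fp:]                            # contribution of node j
    e = leaky_relu(src[:, None] + dst[None, :])  # (N, N) pairwise logits
    # Masked softmax: only attend over the neighborhood given by A.
    e = np.where(A > 0, e, -1e9)
    alpha = np.exp(e - e.max(axis=1, keepdims=True))
    alpha = alpha / alpha.sum(axis=1, keepdims=True)
    return alpha @ Wh                            # attention-weighted aggregation

# Toy usage: a 4-node chain graph with self-loops.
rng = np.random.default_rng(0)
A = (np.eye(4) + np.diag(np.ones(3), 1) + np.diag(np.ones(3), -1) > 0).astype(float)
H, W, a = rng.normal(size=(4, 8)), rng.normal(size=(8, 5)), rng.normal(size=(10,))
print(gat_layer(H, A, W, a).shape)  # (4, 5)
```

The masked softmax is where GAT departs from a plain GCN aggregation: each row of `alpha` is a learned, node-specific weighting of the neighborhood rather than a fixed degree-normalized average.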
“…However, self-training may introduce noise while expanding the dataset. Self-supervised learning, on the other hand, may help address the unstable performance of GNNs in settings with too few labeled nodes [38][39][40].…”
Section: GraphSAGE Uses a Multi-layer Aggregation Function
Mentioning confidence: 99%
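To make the noise concern concrete, a minimal self-training loop might look like the sketch below; the confidence threshold is the usual guard against noisy pseudo-labels. The function name, threshold value, and the scikit-learn classifier choice are assumptions for illustration, not taken from the cited works:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def self_train(X_lab, y_lab, X_unlab, threshold=0.95, rounds=3):
    """Expand the labeled set with high-confidence pseudo-labels, then retrain.
    Lower thresholds admit more data but also more label noise."""
    X, y, pool = X_lab.copy(), y_lab.copy(), X_unlab.copy()
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    for _ in range(rounds):
        if len(pool) == 0:
            break
        proba = clf.predict_proba(pool)          # class probabilities per point
        keep = proba.max(axis=1) >= threshold    # only trust confident predictions
        if not keep.any():
            break                                # nothing confident enough; stop
        X = np.vstack([X, pool[keep]])
        y = np.concatenate([y, clf.classes_[proba[keep].argmax(axis=1)]])
        pool = pool[~keep]                       # shrink the unlabeled pool
        clf = LogisticRegression(max_iter=1000).fit(X, y)  # retrain on expanded set
    return clf
```

Any pseudo-label that clears the threshold but is wrong becomes a permanent training error in later rounds, which is exactly the noise-accumulation risk the quoted statement raises.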