2024
DOI: 10.1109/tnnls.2022.3149997

Analyzing Heterogeneous Networks With Missing Attributes by Unsupervised Contrastive Learning

Cited by 15 publications (6 citation statements)
References 37 publications
“…(1) Focusing on a network, the link prediction accuracy rises and then falls as the L value increases. For example, for the DBLP dataset, the ACC value increases rapidly in the range [1,2]. Then, in the range [2,4], the ACC values stabilize.…”
Section: Parameter Sensitivity
confidence: 99%
“…This indicates that an L that is too small cannot fully capture the intra-type features of nodes in the network, while an L that is too large leads to capturing imprecise intra-type features. Specifically, the relative stability range of the parameter L is [1,3] on Amazon and [2,4] on DBLP and Yelp. (2) Comparing different datasets, the window length for the Yelp and DBLP datasets should be larger than that for the Amazon dataset.…”
Section: Parameter Sensitivity
confidence: 99%
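
The two statements above describe sweeping the window-length parameter L and reading off the link-prediction accuracy (ACC) to locate a stable range. Below is a minimal Python sketch of such a sweep; the adjacency-power co-occurrence, the SVD embedding, and the median-threshold evaluation (functions train_embeddings and link_prediction_acc) are illustrative stand-ins, not the cited paper's actual pipeline.

```python
# Minimal sketch of a window-length sweep for link prediction, as discussed above.
# All routines are hypothetical stand-ins for the cited method's real components.
import numpy as np

def train_embeddings(adjacency: np.ndarray, window_length: int) -> np.ndarray:
    """Stand-in embedder: co-occurrence from walks of length up to L
    (approximated by summed adjacency powers), embedded via truncated SVD."""
    cooc = sum(np.linalg.matrix_power(adjacency, k) for k in range(1, window_length + 1))
    u, s, _ = np.linalg.svd(cooc.astype(float))
    return u[:, :16] * s[:16]

def link_prediction_acc(emb: np.ndarray, pos_edges, neg_edges) -> float:
    """Toy evaluation: score node pairs by inner product, threshold at the median."""
    pos = np.array([emb[i] @ emb[j] for i, j in pos_edges])
    neg = np.array([emb[i] @ emb[j] for i, j in neg_edges])
    thr = np.median(np.concatenate([pos, neg]))
    return ((pos > thr).sum() + (neg <= thr).sum()) / (len(pos) + len(neg))

rng = np.random.default_rng(0)
n = 60
adj = np.triu((rng.random((n, n)) < 0.05).astype(int), 1)
adj = adj + adj.T                                   # symmetric toy graph
pos_edges = list(zip(*np.nonzero(np.triu(adj, 1))))
neg_edges = [(i, j) for i in range(n) for j in range(i + 1, n)
             if adj[i, j] == 0][:len(pos_edges)]

for L in range(1, 6):                               # sweep the window length L
    acc = link_prediction_acc(train_embeddings(adj, L), pos_edges, neg_edges)
    print(f"L = {L}: ACC = {acc:.3f}")
```

On a real dataset, the printed curve would be inspected for the plateau reported above, e.g., [2,4] on DBLP and Yelp and [1,3] on Amazon.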
“…Recently, clustering with attribute-missing data has attracted significant attention from researchers (Liu 2021;Jin et al 2021;He et al 2022;Cui et al 2022;Rossi et al 2022;Tu et al 2022;Yoo et al 2022;Xu et al 2022;Jin et al 2023). For example, SAT (Chen et al 2022) and CSAT (Li et al 2022b) perform distribution matching between the attribute embedding and the structure embedding to estimate missing attributes for clustering.…”
Section: Clustering With Attribute-Missing Data
confidence: 99%
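
SAT and CSAT are described above as matching the distribution of attribute embeddings to that of structure embeddings in order to estimate missing attributes. The sketch below illustrates that general idea under simplifying assumptions: linear encoders, a linear-kernel MMD penalty as the distribution-matching term, and random toy data; it is not the cited models' actual architecture.

```python
# Sketch of distribution matching between attribute and structure embeddings,
# followed by decoding attributes for attribute-missing nodes (illustrative only).
import torch
import torch.nn as nn

class AttributeEstimator(nn.Module):
    def __init__(self, attr_dim: int, struct_dim: int, latent_dim: int = 32):
        super().__init__()
        self.attr_enc = nn.Linear(attr_dim, latent_dim)     # encodes observed attributes
        self.struct_enc = nn.Linear(struct_dim, latent_dim) # encodes structural features
        self.attr_dec = nn.Linear(latent_dim, attr_dim)     # reconstructs attributes

    def forward(self, attrs, struct_feats):
        z_attr = self.attr_enc(attrs)
        z_struct = self.struct_enc(struct_feats)
        return z_attr, z_struct, self.attr_dec(z_struct)

def mmd(x, y):
    """Linear-kernel MMD between two embedding batches (distribution matching)."""
    return (x.mean(0) - y.mean(0)).pow(2).sum()

# Toy data: 100 nodes, attributes observed for the first 60 only.
attr_dim, struct_dim = 20, 16
attrs = torch.randn(60, attr_dim)
struct_feats = torch.randn(100, struct_dim)

model = AttributeEstimator(attr_dim, struct_dim)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(200):
    z_attr, z_struct, recon = model(attrs, struct_feats)
    # Reconstruct attributes on observed nodes + align the two embedding distributions.
    loss = nn.functional.mse_loss(recon[:60], attrs) + 0.1 * mmd(z_attr, z_struct)
    opt.zero_grad(); loss.backward(); opt.step()

with torch.no_grad():
    # Estimate missing attributes from structure alone for the attribute-missing nodes.
    estimated = model.attr_dec(model.struct_enc(struct_feats[60:]))
```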
“…Zhu et al. [22] presented a contrastive representation method to enhance a reinforcement learning framework, which considered the correlation among consecutive inputs and jointly trained the CNN encoder and Transformer through a contrastive learning process to reconstruct features from context frames. He et al. [23] proposed a graph contrastive learning model in which a contrastive learning scheme was developed to train attribute completion and representation learning in an unsupervised heterogeneous framework, aiming to handle missing attributes and jointly learn the embeddings of nodes and attributes.…”
Section: B. Contrastive Learning
confidence: 99%
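
The model attributed to He et al. [23] couples attribute completion with unsupervised contrastive representation learning. The following is a minimal InfoNCE-style sketch of that kind of objective: a structure view and a (completed-)attribute view of each node are encoded and pulled together, with the other nodes in the batch serving as negatives. The linear encoders, the way the views are built, and the temperature are assumptions made for illustration, not the authors' exact model.

```python
# InfoNCE-style contrastive sketch for aligning structure and attribute views of nodes.
# Encoders and views are illustrative assumptions, not the cited model.
import torch
import torch.nn as nn
import torch.nn.functional as F

def info_nce(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.5) -> torch.Tensor:
    """Contrastive loss: matching rows of z1 and z2 are positives, the rest negatives."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature        # pairwise cosine similarities
    targets = torch.arange(z1.size(0))        # the i-th row matches the i-th column
    return F.cross_entropy(logits, targets)

# Toy setup: structure view and (completed) attribute view of 128 nodes.
n_nodes, struct_dim, attr_dim, out_dim = 128, 32, 64, 16
struct_view = torch.randn(n_nodes, struct_dim)
attr_view = torch.randn(n_nodes, attr_dim)

struct_encoder = nn.Linear(struct_dim, out_dim)  # stand-in for a heterogeneous GNN encoder
attr_encoder = nn.Linear(attr_dim, out_dim)      # stand-in for attribute completion + encoding
opt = torch.optim.Adam(
    list(struct_encoder.parameters()) + list(attr_encoder.parameters()), lr=1e-3
)

for _ in range(100):
    loss = info_nce(struct_encoder(struct_view), attr_encoder(attr_view))
    opt.zero_grad(); loss.backward(); opt.step()
```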