2022
DOI: 10.1016/j.patcog.2022.108696
Causal GraphSAGE: A robust graph method for classification based on causal sampling

Cited by 27 publications (4 citation statements) · References 12 publications
“…Methods based on graph neural networks: Methods such as GCN [25,26], SGCN [27], GIN [14], and Causal GraphSAGE [28] aggregate information over neighboring nodes through convolution of multiple layers. These methods can capture the local structure of the nodes but have limitations in dealing with the global structure.…”
Section: Related Work
Confidence: 99%
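The statement above describes multi-layer graph convolution, where each layer aggregates information from a node's neighbors so that stacking layers widens the receptive field. A minimal sketch of one such layer follows, using the standard symmetric-normalized propagation rule; `gcn_layer` and its untrained weight matrix `W` are hypothetical illustrations, not code from the cited papers.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution layer: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W).

    A : (n, n) binary adjacency matrix
    H : (n, d) node feature matrix
    W : (d, d_out) layer weight matrix (hypothetical, untrained)
    """
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d = A_hat.sum(axis=1)                     # node degrees incl. self-loop
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))    # D^-1/2 normalization
    return np.maximum(0.0, D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W)  # ReLU

# Two nodes joined by an edge; identity features and weights.
A = np.array([[0.0, 1.0], [1.0, 0.0]])
H = np.eye(2)
out = gcn_layer(A, H, np.eye(2))
```

Applying `gcn_layer` twice mixes in 2-hop neighborhood information, which is how deeper stacks capture the local structure the statement refers to.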
“…This approach captures the local neighborhood structure of the graph and has been shown to perform well on various graph-based machine learning tasks. Other popular graph embedding methods include Node2Vec [22], which is an extension of DeepWalk that balances between the breadth-first and depth-first search strategies during random walk, and GraphSAGE [23], which learns embeddings by aggregating feature information from a node's local neighborhood using a neural network.…”
Section: Graph Embedding
Confidence: 99%
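The GraphSAGE idea mentioned above — learning embeddings by aggregating feature information from a node's local neighborhood — can be sketched as a single mean-aggregation layer. This is a minimal illustration, assuming the common mean aggregator from the GraphSAGE paper; the function name and the untrained weight matrices are hypothetical.

```python
import numpy as np

def sage_mean_layer(features, neighbors, W_self, W_neigh):
    """One GraphSAGE layer with mean aggregation.

    features : (n, d) node feature matrix
    neighbors: dict mapping node id -> list of neighbor ids
    W_self, W_neigh : (d, d_out) weight matrices (hypothetical, untrained)
    """
    out = np.zeros((features.shape[0], W_self.shape[1]))
    for v, nbrs in neighbors.items():
        # Mean of the neighbors' features (zero vector if no neighbors).
        h_neigh = (features[nbrs].mean(axis=0) if nbrs
                   else np.zeros(features.shape[1]))
        # Combine self and neighborhood representations, then ReLU.
        out[v] = np.maximum(0.0, features[v] @ W_self + h_neigh @ W_neigh)
    return out

# Toy triangle graph with identity weights.
feats = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
nbrs = {0: [1, 2], 1: [0], 2: [0, 1]}
emb = sage_mean_layer(feats, nbrs, np.eye(2), np.eye(2))
```

Because the aggregator only touches a node's sampled neighborhood, GraphSAGE generalizes to unseen nodes, which is what distinguishes it from transductive methods such as DeepWalk and Node2Vec.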
“…On the other hand, these models fail to consider inner relationships among features and thus suffer from the performance drop on the out-of-distribution data (Wu et al, 2022). In effect, causal inference endows the prediction with better model generalization and performance robustness (Zhang et al, 2022a).…”
Section: Introduction
Confidence: 99%
“…GGNNs and other types of GNNs are prone to over-smoothing as the number of neural-network layers increases (Zhou et al., 2020). As illustrated by previous research (e.g., Little & Badawy, 2019; Guo et al., 2020; Zhang et al., 2022a), sampling based on causal relationships among features can increase the performance robustness of prediction models. In this research, we sample a node's neighbors in the feature graph based on the causality-based weighted adjacency matrix learnt in Section 3.2.1.…”
Section: The mechanism of GraphFwFM for feature graph
Confidence: 99%
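The causal sampling described in the statement above can be sketched as drawing a node's neighbors with probability proportional to their causal edge weights rather than uniformly. This is a minimal sketch under that assumption; `A_causal` here is a hypothetical stand-in for the causality-based weighted adjacency matrix the citing paper learns, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_neighbors(A_causal, node, k):
    """Sample up to k neighbors of `node`, weighted by causal edge weights.

    A_causal: (n, n) nonnegative causality-based weighted adjacency matrix
              (hypothetical stand-in for the learnt matrix).
    """
    weights = A_causal[node].copy()
    weights[node] = 0.0                      # exclude self-loop
    nbrs = np.nonzero(weights)[0]            # candidate neighbors
    if nbrs.size == 0:
        return np.array([], dtype=int)
    p = weights[nbrs] / weights[nbrs].sum()  # normalize to probabilities
    k = min(k, nbrs.size)
    return rng.choice(nbrs, size=k, replace=False, p=p)

# Node 0 has two causal neighbors with very different weights.
A = np.array([[0.0, 0.9, 0.1, 0.0],
              [0.9, 0.0, 0.0, 0.0],
              [0.1, 0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, 0.0]])
picked = sample_neighbors(A, 0, 2)
```

Weighting the draw this way biases aggregation toward neighbors with genuine causal influence, which is the mechanism the cited work credits for improved robustness.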