Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (KDD 2020)
DOI: 10.1145/3394486.3403101
Residual Correlation in Graph Neural Network Regression

Cited by 45 publications (60 citation statements)
References 13 publications
“…, ∀ε > 0 (11). The conclusion in (11) could be theoretically guaranteed by the research findings in [41]. In…”
Section: Proof of Stochastic Neuromorphic Completeness
confidence: 78%
“…A variety of neural networks (NNs) have been designed, among which deep learning models (DLMs) [1-5] have been successfully applied to engineering practice such as face recognition, voice recognition, and medical data mining. Meanwhile, NNs have innovated research methodology in many fields owing to their unique properties and outstanding capability in big-data environments [6-11]. Although…”
Section: Introduction
confidence: 99%
“…We test four different auxiliary-task weights λ = {0, 0.25, 0.5, 0.75}, where λ = 0 implies no auxiliary task. Spatial graphs are constructed assuming k = 5 nearest neighbors, following results from previous work [4, 16] and our own testing. We include a sensitivity analysis of the k parameter and of different batch sizes in our results section.…”
Section: Methods
confidence: 99%
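The k-nearest-neighbor spatial graph construction mentioned in this excerpt can be sketched in a few lines. This is an illustrative sketch only, not the cited paper's code; the function name `knn_graph` and the plain-NumPy implementation are assumptions:

```python
import numpy as np

def knn_graph(coords, k=5):
    """Directed k-nearest-neighbor adjacency matrix from point coordinates.

    coords: (n, d) array of spatial locations (e.g., county centroids).
    Returns a binary (n, n) matrix with a 1 from each node to each of its
    k nearest neighbors by Euclidean distance (self-loops excluded).
    """
    coords = np.asarray(coords, dtype=float)
    n = coords.shape[0]
    # Pairwise squared Euclidean distances.
    d2 = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(axis=-1)
    np.fill_diagonal(d2, np.inf)  # a node is never its own neighbor
    adj = np.zeros((n, n), dtype=int)
    nbrs = np.argsort(d2, axis=1)[:, :k]  # indices of the k closest nodes
    adj[np.repeat(np.arange(n), k), nbrs.ravel()] = 1
    return adj

rng = np.random.default_rng(0)
A = knn_graph(rng.random((20, 2)), k=5)
```

Each node then receives messages from exactly k = 5 spatial neighbors; the λ sweep in the excerpt changes only the loss weighting, not this graph.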
“…Election²: This dataset contains the election results of over 3,000 counties in the United States and was proposed by [16]. The regression task here is to predict election outcomes y using socio-demographic and economic features x (e.g., median income, education) and county locations c.

Table 1 (flattened in the source; each row lists two error metrics on each of four datasets, lowest errors in each column group being best):

GCN [20]             | 0.0558 0.1874 | 0.0034 0.0249 | 0.0225 0.1175 | 0.0169 0.1029
PE-GCN λ=0           | 0.0161 0.0868 | 0.0032 0.0241 | 0.0040 0.0432 | 0.0031 0.0396
PE-GCN λ=0.25        | 0.0155 0.0882 | 0.0032 0.0236 | 0.0037 0.0417 | 0.0032 0.0416
PE-GCN λ=0.5         | 0.0156 0.0885 | 0.0031 0.0241 | 0.0036 0.0401 | 0.0033 0.0421
PE-GCN λ=0.75        | 0.0160 0.0907 | 0.0031 0.0240 | 0.0040 0.0429 | 0.0033 0.0424
GAT [30]             | 0.0558 0.1877 | 0.0034 0.0249 | 0.0226 0.1165 | 0.0178 0.0998
PE-GAT λ=0           | 0.0159 0.0918 | 0.0032 0.0234 | 0.0039 0.0429 | 0.0060 0.0537
PE-GAT λ=0.25        | 0.0161 0.0867 | 0.0032 0.0235 | 0.0040 0.0417 | 0.0058 0.0530
PE-GAT λ=0.5         | 0.0162 0.0897 | 0.0032 0.0238 | 0.0045 0.0465 | 0.0061 0.0548
PE-GAT λ=0.75        | 0.0162 0.0873 | 0.0032 0.0237 | 0.0041 0.0429 | 0.0062 0.0562
GraphSAGE [12]       | 0.0558 0.1874 | 0.0034 0.0249 | 0.0274 0.1326 | 0.0180 0.0998
PE-GraphSAGE λ=0     | 0.0157 0.0896 | 0.0032 0.0237 | 0.0039 0.0428 | 0.0060 0.0534
PE-GraphSAGE λ=0.25  | 0.0097 0.0664 | 0.0032 0.0242 | 0.0040 0.0418 | 0.0059 0.0534
PE-GraphSAGE λ=0.5   | 0.0100 0.0682 | 0.0033 0.0239 | 0.0043 0.0461 | 0.0060 0.0536
PE-GraphSAGE λ=0.75  | 0.0100 0.0661 | 0.0032 0.0241 | 0.0036 0.0399 | 0.0058 0.0541
KCN [4]              | 0.0292 0.1405 | 0.0367 0.1875 | 0.0143 0.0927 | 0.0081 0.0758
PE-KCN λ=0           | 0.0288 0.1274 | 0.0598 0.2387 | 0.0648 0.2385 | 0.0025 0.0310
PE-KCN λ=0.25        | 0.0324 0.1380 | 0.0172 0.1246 | 0.0059 0.0593 | 0.0037 0.0474
PE-KCN λ=0.5         | 0.0237 0.1117 | 0.0072 0.0714 | 0.0077 0.0664 | 0.0077 0.0642
PE-KCN λ=0.75        | 0.0260 0.1194 | 0.0063 0.0681 | 0.0122 0.0852 | 0.0110 0.0755
Approximate GP       | 0.0353 0.1382 | 0.0031 0.0348 | 0.0481 0.0498 | 0.0080 0.0657
Exact GP             | 0.0132 0.0736 | 0.0022 0.0253 | 0.0084 0.0458 | -      -

(Table 1.)…”
Section: Data
confidence: 99%
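The GP rows in the table and the paper under review both exploit correlation among regression residuals at nearby nodes. A minimal sketch of that idea, assuming residuals follow a zero-mean Gaussian whose precision matrix is parameterized by the normalized adjacency; the parameter α, the function name, and this particular parameterization are illustrative assumptions, not the exact model of any cited paper:

```python
import numpy as np

def residual_correction(A, train_idx, test_idx, resid_train, alpha=0.9):
    """Propagate observed training residuals to test nodes.

    Assumes residuals r ~ N(0, Gamma^{-1}) with graph-structured precision
    Gamma = I - alpha * S, where S is the symmetrically normalized
    adjacency. Gaussian conditioning then gives
        E[r_test | r_train] = -Gamma_TT^{-1} Gamma_TL r_train.
    """
    A = np.asarray(A, dtype=float)
    d = np.maximum(A.sum(axis=1), 1e-12)   # degrees (guard isolated nodes)
    S = A / np.sqrt(np.outer(d, d))        # D^{-1/2} A D^{-1/2}
    Gamma = np.eye(A.shape[0]) - alpha * S
    G_TT = Gamma[np.ix_(test_idx, test_idx)]
    G_TL = Gamma[np.ix_(test_idx, train_idx)]
    return -np.linalg.solve(G_TT, G_TL @ resid_train)

# Two connected nodes: a residual of 1.0 on node 0 propagates as
# alpha * 1.0 = 0.9 to node 1.
A = np.array([[0.0, 1.0], [1.0, 0.0]])
corr = residual_correction(A, [0], [1], np.array([1.0]))
```

The corrected prediction at a test node is the base regressor's output plus this conditional-mean residual, which is where the gap between a plain GNN and the GP baselines can close.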
“…GraphSAGE: A popular GNN model, GraphSAGE (Hamilton et al. 2017), is a general framework that leverages node feature information and learns node embeddings by aggregation over a node's local neighborhood. Unlike many other methods based on matrix factorization and normalization (Jia and Benson 2020), GraphSAGE simply aggregates the features from a local neighborhood, and is thus less computationally expensive. Features can be aggregated over varying numbers of hops (search depths).…”
Section: Incorporating Geographical Knowledge
confidence: 99%
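The mean-aggregation step this excerpt describes can be written compactly. The following is a schematic of the general GraphSAGE-mean update, not the authors' implementation; the weight-matrix names are illustrative:

```python
import numpy as np

def sage_mean_layer(X, adj, W_self, W_neigh):
    """One GraphSAGE-style layer with mean aggregation:

        h_v = ReLU(x_v @ W_self + mean_{u in N(v)} x_u @ W_neigh)

    X: (n, f) node features; adj: (n, n) binary adjacency matrix.
    """
    deg = np.maximum(adj.sum(axis=1, keepdims=True), 1.0)
    neigh_mean = (adj @ X) / deg             # average of neighbor features
    h = X @ W_self + neigh_mean @ W_neigh    # combine self and neighborhood
    return np.maximum(h, 0.0)                # ReLU

# Three nodes on a path 1-0-2, identity weights for readability.
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
adj = np.array([[0.0, 1.0, 1.0], [1.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
H = sage_mean_layer(X, adj, np.eye(2), np.eye(2))
```

Stacking such layers widens the search depth, so features are drawn from neighborhoods more hops away, without ever materializing a factorized or normalized global matrix.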