Proceedings of the 28th International Conference on Advances in Geographic Information Systems 2020
DOI: 10.1145/3397536.3422257

Graph Convolutional Networks with Kalman Filtering for Traffic Prediction

Abstract: Traffic prediction is a challenging task due to the time-varying nature of traffic patterns and the complex spatial dependency of road networks. Adding to the challenge, traffic sensor reporting introduces a number of errors, including bias and noise. However, most previous works treat the sensor observations as exact measurements, ignoring the effect of unknown noise. To model the spatial and temporal dependencies, existing studies combine graph neural networks (GNNs) with other deep learning t…
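The abstract (truncated above) describes denoising noisy sensor observations and modelling spatial dependencies over the road network with graph convolutions. The sketch below is only an illustration of that general idea, not the paper's implementation: a scalar Kalman filter smooths each sensor's readings, and a single symmetric-normalised GCN layer then mixes information across neighbouring sensors. The toy road graph, the filter parameters q and r, and the weight matrix W are arbitrary placeholders.

```python
# Minimal sketch (assumptions, not the paper's code): per-sensor Kalman smoothing
# followed by one graph-convolution step A_norm @ X @ W over the road network.
import numpy as np

def kalman_smooth(z, q=1e-3, r=1e-1):
    """Scalar Kalman filter over a 1-D series of noisy observations z."""
    x, p = z[0], 1.0                 # state estimate and its variance
    out = np.empty_like(z)
    out[0] = x
    for t in range(1, len(z)):
        p = p + q                    # predict step (random-walk transition)
        k = p / (p + r)              # Kalman gain
        x = x + k * (z[t] - x)       # update with observation z[t]
        p = (1.0 - k) * p
        out[t] = x
    return out

rng = np.random.default_rng(0)
Z = 60 + rng.normal(0, 5, size=(4, 12))        # 4 sensors, 12 noisy speed readings
X = np.vstack([kalman_smooth(z) for z in Z])   # filtered signals per sensor

A = np.array([[0, 1, 0, 0],                    # toy road-network adjacency
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
A_hat = A + np.eye(4)                          # add self-loops
d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
A_norm = d_inv_sqrt @ A_hat @ d_inv_sqrt       # symmetric normalisation

W = rng.normal(size=(12, 8))                   # stand-in for learned weights
H = np.maximum(A_norm @ X @ W, 0.0)            # one GCN layer with ReLU
print(H.shape)                                 # (4, 8) node embeddings
```

In the paper's setting the filter parameters and the GCN weights would be learned or tuned against traffic data; here they are random placeholders used only to show the data flow.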

Cited by 28 publications (17 citation statements)
References 6 publications

Citation statements:
“…Most existing GCL methods [17,58] use standard Graph Neural Networks (GNNs) such as GCN [22] and GIN [54] as GNN encoder. However, prior research has indicated that GNNs have limited expressive power and encounter difficulties in capturing subgraph properties [11]. Can we engineer a more expressive graph encoder that can effectively capture subgraph information from the original graph?…”
Section: Introduction (mentioning), confidence: 99%
“…Although augmented graphs maintain a higher degree of cohesive subgraphs, the risk of losing subgraph information during the graph representation learning process remains. Current studies, such as [11], have underscored that plain GNNs struggle to accurately capture subgraph properties. To address this, inspired by [9], we then propose an original-graph-oriented graph substructure network (O-GSN) to enhance GNNs' power to aware graph cohesive substructures efficiently when encoding graphs.…”
Section: Introduction (mentioning), confidence: 99%
“…In recent years, some researchers have modelled the data on the nodes of a graph and forecasted based on the graph, using graph neural networks (GNNs) for forecasting. These models can capture topological information between stations to deal with graph-structured data for traffic forecasting [23,24]. DCRNN [25] regarded the traffic flow as a diffusion process on the graph and captures the spatial dependency with bidirectional random walks.…”
Section: Introduction (mentioning), confidence: 99%
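The DCRNN description quoted above (traffic flow as a diffusion process on the graph, captured by bidirectional random walks) can be made concrete with a small sketch. The function below is a simplified, hypothetical diffusion convolution in NumPy, assuming a weighted directed adjacency matrix A and per-step coefficients theta_fwd and theta_bwd; it is not DCRNN's actual code.

```python
# Rough sketch of diffusion convolution with K steps of forward and backward
# random walks on a directed graph (illustrative names and signatures).
import numpy as np

def diffusion_conv(X, A, theta_fwd, theta_bwd, K=2):
    """X: (N, F) node signals; A: (N, N) weighted directed adjacency."""
    d_out = A.sum(axis=1, keepdims=True)            # out-degrees
    d_in = A.sum(axis=0, keepdims=True)             # in-degrees
    P_fwd = A / np.clip(d_out, 1e-8, None)          # forward transition matrix
    P_bwd = A.T / np.clip(d_in.T, 1e-8, None)       # backward transition matrix
    out = np.zeros_like(X)
    T_f = np.eye(A.shape[0])
    T_b = np.eye(A.shape[0])
    for k in range(K + 1):
        out = out + theta_fwd[k] * (T_f @ X) + theta_bwd[k] * (T_b @ X)
        T_f, T_b = P_fwd @ T_f, P_bwd @ T_b         # next diffusion step
    return out

rng = np.random.default_rng(1)
A = rng.random((5, 5)) * (rng.random((5, 5)) > 0.5)  # random directed toy graph
X = rng.random((5, 3))                                # 3 features per node
H = diffusion_conv(X, A, theta_fwd=rng.random(3), theta_bwd=rng.random(3), K=2)
print(H.shape)                                        # (5, 3)
```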
“…Given a set of nodes of interest, namely a queried node-set, SGRL models such as SEAL [54,57], GraIL [41], and SubGNN [1] first extract a subgraph around the queried node-set (termed query-induced subgraph), and then encode the extracted subgraph for prediction. Extensive works have shown that SGRL models are more robust [52] and more expressive [5,12]; while canonical graph neural networks (GNNs) including GCN [23] and GraphSAGE [14] generally fail to make accurate predictions, due to their limited expressive power [9,13,57], incapability of capturing intra-node distance information [28,39], and improper entanglement between receptive field size and model depth [18,51,52]. An example in Fig.…”
Section: Introduction (mentioning), confidence: 99%
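The query-induced subgraph step quoted above (extracting a subgraph around a queried node-set before encoding it) can be illustrated with a short NetworkX sketch. The helper name query_induced_subgraph and the choice of a k-hop neighbourhood union are assumptions made for illustration, not the actual extraction code of SEAL, GraIL, or SubGNN.

```python
# Illustrative sketch: build the subgraph induced by the union of k-hop
# neighbourhoods around a queried node-set, then encode that subgraph.
import networkx as nx

def query_induced_subgraph(G, query_nodes, k=1):
    nodes = set(query_nodes)
    for v in query_nodes:
        # all nodes within k hops of v
        nodes |= set(nx.single_source_shortest_path_length(G, v, cutoff=k))
    return G.subgraph(nodes).copy()

G = nx.karate_club_graph()
sub = query_induced_subgraph(G, query_nodes=[0, 33], k=1)
print(sub.number_of_nodes(), sub.number_of_edges())
```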