Proceedings of the 2020 5th International Conference on Big Data and Computing 2020
DOI: 10.1145/3404687.3404701
Learning Trajectory Routing with Graph Neural Networks

Cited by 21 publications (42 citation statements); references 5 publications.
“…In a recent work, DAG with NoTears [52], the authors converted the combinatorial optimization problem into a continuous one and provided an associated optimization algorithm to recover sparse DAGs. This eventually spawned research that provided improvements and also captured complex functional dependencies by introducing deep learning variants [51,50]. One such follow-up work is [53], which we found to be the closest to our method in terms of function representation capacity, specifically the MLP version of nonparametric DAG learning.…”
Section: Related Methods
confidence: 73%
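The continuous reformulation these citation statements refer to rests on the NOTEARS acyclicity function h(W) = tr(exp(W ∘ W)) − d, which is zero exactly when the weighted adjacency matrix W encodes a DAG. A minimal sketch of that measure, assuming plain NumPy and a truncated Taylor series for the matrix exponential (the function name is illustrative, not from the cited implementation):

```python
import numpy as np

def notears_acyclicity(W, terms=20):
    """NOTEARS acyclicity measure h(W) = tr(exp(W * W)) - d.

    h(W) == 0 exactly when the weighted adjacency matrix W encodes
    a DAG; any directed cycle makes it strictly positive, so it can
    serve as a smooth penalty inside continuous optimization.
    The matrix exponential is approximated by a truncated Taylor
    series, which is exact for acyclic (nilpotent) inputs once
    `terms` exceeds the matrix size.
    """
    d = W.shape[0]
    A = W * W                     # Hadamard square: non-negative entries
    M = np.eye(d)                 # running term A^k / k!
    trace_exp = 0.0
    for k in range(1, terms + 1):
        trace_exp += np.trace(M)
        M = M @ A / k
    return trace_exp - d

# A 3-node DAG (edges 0 -> 1 and 1 -> 2): h is zero.
dag = np.array([[0.0, 1.5, 0.0],
                [0.0, 0.0, 2.0],
                [0.0, 0.0, 0.0]])

# Closing the cycle with an edge 2 -> 0 makes h strictly positive.
cyclic = dag.copy()
cyclic[2, 0] = 1.0

print(notears_acyclicity(dag), notears_acyclicity(cyclic))
```

Because h is differentiable in W, the combinatorial search over graph structures becomes an equality-constrained continuous program, which is the property the deep-learning follow-ups [51,50,53] build on.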
“…This is not a realistic assumption in observational sciences. Yu et al. (2019) and Xu and Xu (2021) propose methods using neural networks in the same special case where the full DAG is identifiable, but they also consider only a fixed average graph density and do not study the influence of sample size. Ke et al. (2022) also address this special case using neural networks, and they furthermore train their model on a mixture of observational and experimental data.…”
Section: Related Work
confidence: 99%
“…Recently, a new method called 'DAG with NOTEARS' [45] was introduced which converts the combinatorial optimization problem of DAG learning into a continuous one. This has led to the development of many follow-up works, including some deep learning methods [43,44,46,24]. There are also many reviews of causal structure learning methods and structural equation models available, for instance the work in [16].…”
Section: Directed Graphs
confidence: 99%