2020
DOI: 10.1007/978-3-030-61725-7_34

Physics-Driven Machine Learning for Time-Optimal Path Planning in Stochastic Dynamic Flows

Cited by 5 publications (8 citation statements)
References 15 publications
“…Partially Observable Markov Decision Processes (POMDPs) have also been used for a variety of simultaneous localization and planning applications of ground or aerial vehicles where the focus is local closed-loop control [31], [32]. Building on top of the MDP framework, reinforcement learning (RL) algorithms [33], [34], [35], [36], [37], [38], [39] have also been used for path planning. GPU-accelerated RL libraries have also been developed recently [40], [41].…”
Section: A. Previous Progress in Optimal Path Planning
confidence: 99%
“…Similar to the setup in [39] and [30], we discretize the domain into a spatio-temporal grid world that constitutes the state space (Fig. 1(B)). A state s represents a square region in the spatio-temporal grid, which is indexed by a spatio-temporal coordinate [x_s, t_s].…”
Section: A. MDP Formulation of Path Planning
confidence: 99%
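
As a concrete reading of the quoted setup, the sketch below maps a continuous 2-D position and time onto a spatio-temporal grid state indexed by [x_s, t_s]. It is a minimal illustration under assumptions, not the cited paper's code: the class names, the square 2-D domain, and the cell sizes dx and dt are all hypothetical.

from dataclasses import dataclass
from math import ceil


@dataclass(frozen=True)
class State:
    # One cell of the spatio-temporal grid world: two spatial indices
    # plus one temporal index together realize the coordinate [x_s, t_s].
    ix: int
    iy: int
    it: int


class SpatioTemporalGrid:
    # Discretizes a continuous 2-D domain and a finite time horizon
    # into the finite state space of the MDP (hypothetical helper).
    def __init__(self, x_max, y_max, t_max, dx, dt):
        self.dx, self.dt = dx, dt
        self.nx, self.ny = ceil(x_max / dx), ceil(y_max / dx)
        self.nt = ceil(t_max / dt)

    def state_of(self, x, y, t):
        # Map a continuous point (x, y) at time t to the cell containing it.
        return State(int(x // self.dx), int(y // self.dx), int(t // self.dt))


grid = SpatioTemporalGrid(x_max=10.0, y_max=10.0, t_max=5.0, dx=0.5, dt=0.1)
print(grid.state_of(3.2, 7.9, 1.05))  # State(ix=6, iy=15, it=10)

The number of states is nx * ny * nt, so the grid resolution trades planning fidelity against the size of the resulting MDP.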
“…The challenge is primarily the computational cost (e.g., [7], [8], [9]). Existing Markov Decision Process (MDP) [4] or Reinforcement Learning [10,11] path planners are too slow for realistic real-time applications. Path planners based on Dijkstra's algorithm [12], variants of A* [13,14], and Delayed D* [15] work well in deterministic settings, but their Monte Carlo versions are computationally inefficient.…”
Section: Introduction
confidence: 99%
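
To make the cost argument concrete, the sketch below wraps a deterministic time-optimal Dijkstra planner in a Monte Carlo loop: each sampled flow realization requires a full shortest-path solve, so runtime grows linearly with the number of samples. The grid, the stochastic flow model, and every name here are illustrative assumptions, not taken from the cited planners.

import heapq

import numpy as np


def dijkstra_travel_time(speed, start, goal):
    # Deterministic time-optimal Dijkstra on a grid:
    # entering a cell costs 1 / (effective speed in that cell).
    ny, nx = speed.shape
    dist = np.full((ny, nx), np.inf)
    dist[start] = 0.0
    heap = [(0.0, start)]
    while heap:
        d, (i, j) = heapq.heappop(heap)
        if (i, j) == goal:
            return d
        if d > dist[i, j]:
            continue  # stale queue entry
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < ny and 0 <= nj < nx:
                nd = d + 1.0 / speed[ni, nj]
                if nd < dist[ni, nj]:
                    dist[ni, nj] = nd
                    heapq.heappush(heap, (nd, (ni, nj)))
    return float("inf")


# Monte Carlo over flow realizations: one full planner run per sample,
# which is exactly the computational inefficiency noted in the quote.
rng = np.random.default_rng(seed=0)
n_samples = 100
arrival_times = []
for _ in range(n_samples):
    # Hypothetical stochastic flow: nominal speed 1.0 plus Gaussian noise,
    # clipped away from zero so every edge cost stays finite and positive.
    speed = np.clip(1.0 + 0.2 * rng.standard_normal((20, 20)), 0.1, None)
    arrival_times.append(dijkstra_travel_time(speed, (0, 0), (19, 19)))
print(f"mean arrival time over {n_samples} samples: {np.mean(arrival_times):.2f}")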