Neural Path Planning: Fixed Time, Near-Optimal Path Generation via Oracle Imitation
Preprint, 2019
DOI: 10.48550/arxiv.1904.11102

Cited by 5 publications (5 citation statements). References 0 publications.
“…To determine end-to-end collision-free trajectories in an iterative manner, algorithms such as those in [12][13][14][15] have been proposed. Bency et al. [12] introduced a recurrent neural network (RNN)-based motion-planning algorithm called OracleNet that can determine end-to-end collision-free trajectories for static environments and generate near-optimal paths iteratively. In Ref.…”
Section: End-to-end Algorithms (mentioning)
confidence: 99%
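The excerpt above describes OracleNet's core mechanism: a recurrent model repeatedly predicts the next waypoint from the current state and the goal until the path reaches the goal. The sketch below is a minimal, hypothetical illustration of such an iterative roll-out loop, not the authors' code; `predict_next`, the step fraction, and the tolerance are assumptions standing in for the trained RNN and its hyperparameters.

```python
# Hypothetical sketch of an iterative next-waypoint roll-out loop
# (illustration only; the trained recurrent model is replaced by a
# generic callable `predict_next`).
import numpy as np


def rollout_path(predict_next, start, goal, tol=0.05, max_steps=200):
    """Grow a path by repeatedly querying a learned next-waypoint model.

    predict_next: callable (current, goal) -> next waypoint; stands in
                  for the trained recurrent model (an assumption here).
    tol:          distance at which the goal counts as reached.
    max_steps:    safety cap on the number of roll-out iterations.
    """
    goal = np.asarray(goal, dtype=float)
    path = [np.asarray(start, dtype=float)]
    for _ in range(max_steps):
        current = path[-1]
        if np.linalg.norm(current - goal) < tol:  # goal reached
            break
        path.append(np.asarray(predict_next(current, goal), dtype=float))
    return np.stack(path)


if __name__ == "__main__":
    # Toy stand-in "model": step a fixed fraction toward the goal.
    goal = np.array([1.0, 1.0])
    demo_model = lambda cur, g: cur + 0.1 * (g - cur) / np.linalg.norm(g - cur)
    print(rollout_path(demo_model, start=[0.0, 0.0], goal=goal).shape)
```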
“…A similar work is done in [34]. Bency et al. present OracleNet, a recurrent neural network (RNN)-based approach to generate fast, near-optimal paths for robotic arms [35]. OracleNet requires training on each new environment, which makes the algorithm suitable only for static environments.…”
Section: Related Work (mentioning)
confidence: 99%
“…The recent version of MPNet accounts for kinematic constraints as well [17], [4]. Various architectures and machine learning methods are being used in planning, including recurrent neural networks (RNNs) in OracleNet [23], 3D supervised imitation learning in TDPP-Net [24], unsupervised generative adversarial networks (GANs) in [25] and [26], and reinforcement learning (RL) strategies in value iteration networks (VIN) [2], gated path planning networks (GPPN) [3], universal planning networks (UPN) [27], [28], guided policy search (GPS) [28], and learning-from-demonstration (LfD) [29]. With the continual advancement in machine and deep learning techniques and hardware capabilities, increased development of new learning-based path planning algorithms can be foreseen.…”
Section: A Classical and Learned Planning Algorithms (mentioning)
confidence: 99%