2020
DOI: 10.48550/arxiv.2008.11174
Preprint

Learning Obstacle Representations for Neural Motion Planning

Cited by 3 publications (4 citation statements); references 0 publications. Citing publications appeared in 2021 and 2022.
“…Many neural motion-planning methods try to encode the workspace with deep neural networks. In [95], Strudel et al. proposed to learn a function that encodes the obstacles and the goal configuration as a vector. A PointNet-like network is used to encode point clouds in their method.…”
Section: Motion Planning With Policy-Based RL Methods
confidence: 99%
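The statement above summarizes the core architectural idea: a permutation-invariant point-cloud encoder whose pooled feature, concatenated with the goal configuration, forms the planner's observation vector. A minimal PyTorch sketch of such a PointNet-like encoder follows; the layer sizes, class name, and the 7-DoF goal are illustrative assumptions, not the settings used by Strudel et al.

```python
import torch
import torch.nn as nn

class PointNetObstacleEncoder(nn.Module):
    """Encode an obstacle point cloud into a fixed-size vector.

    A per-point MLP followed by symmetric max pooling, in the spirit
    of PointNet: the pooled feature is invariant to the order of the
    input points. Sizes here are illustrative only.
    """

    def __init__(self, point_dim=3, feat_dim=256):
        super().__init__()
        self.point_mlp = nn.Sequential(
            nn.Linear(point_dim, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, feat_dim),
        )

    def forward(self, points):
        # points: (batch, num_points, point_dim)
        per_point = self.point_mlp(points)   # (B, N, feat_dim)
        pooled, _ = per_point.max(dim=1)     # order-invariant pooling
        return pooled                        # (B, feat_dim)

# The pooled obstacle code concatenated with the goal configuration
# can serve as the policy's observation (goal dimension is assumed).
encoder = PointNetObstacleEncoder()
cloud = torch.randn(1, 1024, 3)                  # hypothetical point cloud
goal = torch.randn(1, 7)                         # e.g. a 7-DoF goal configuration
obs = torch.cat([encoder(cloud), goal], dim=-1)  # (1, 256 + 7)
```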
“…By leveraging classical components and scaling up learning, we are able to learn models that generalize to novel objects. Learning for motion planning has been used to reduce the runtime of motion planning algorithms [15], [16], [43]. Strudel et al. [43] learn obstacle representations for motion planning, while Ichter et al. [15], [16] use learning to bias the sampling of states for motion planners. Our use of learning is similarly motivated, but we learn to predict low-dimensional strategies (that can be decoded into full motion plans) for constrained motion planning problems from visual input.…”
Section: Related Work
confidence: 99%
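For context on the contrast drawn in this statement, learned sampling bias typically mixes samples from a model trained on successful plans with uniform samples, so the planner retains its coverage guarantees. The sketch below is a generic illustration under that assumption; `learned_sampler` and the mixing weight `lam` are hypothetical, not the specific construction of Ichter et al. [15], [16].

```python
import numpy as np

def biased_sample(learned_sampler, bounds, lam=0.5, rng=np.random):
    """Draw one candidate state for a sampling-based planner.

    With probability `lam`, sample from a learned distribution (e.g.
    the decoder of a generative model trained on successful plans);
    otherwise fall back to uniform sampling over `bounds`, which
    preserves probabilistic completeness. `learned_sampler` is a
    hypothetical callable returning a single state.
    """
    if rng.random() < lam:
        return learned_sampler()
    low, high = bounds
    return rng.uniform(low, high)

# Usage with a stand-in "learned" sampler (a fixed Gaussian here):
sampler = lambda: np.random.normal(loc=0.0, scale=0.1, size=3)
state = biased_sample(sampler, bounds=(np.full(3, -1.0), np.full(3, 1.0)))
```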
“…These methods first encode the environmental point-cloud data and then use neural networks to fit expert demonstration trajectories, but they depend on the quality of the dataset and accumulate error when the networks are rolled out to generate samples. Reinforcement learning approaches instead treat the motion planning problem as a Markov decision process [25], in which the agent learns a planning policy through trial and error; however, manipulation skills learned in simulation are difficult to transfer to real robots. Planning algorithms based on deep reinforcement learning also have many model parameters, which makes them hard to deploy on robotic arms.…”
Section: Introduction
confidence: 99%
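To make the Markov-process framing concrete, here is a minimal sketch of one transition of motion planning cast as an MDP: the state is a joint configuration, the action is a bounded configuration-space displacement, and the reward is sparse. The `collision_fn` checker, the action clip of 0.1, and the goal tolerance `eps` are illustrative assumptions, not details from reference [25].

```python
import numpy as np

def step(q, action, q_goal, collision_fn, eps=0.05):
    """One MDP transition for configuration-space motion planning.

    Reward is sparse: 1 on reaching the goal, -1 on collision,
    0 otherwise. `collision_fn` is a hypothetical checker supplied
    by the simulator.
    """
    q_next = q + np.clip(action, -0.1, 0.1)
    if collision_fn(q_next):
        return q, -1.0, True            # collision: penalize and terminate
    done = np.linalg.norm(q_next - q_goal) < eps
    return q_next, float(done), done

# Example rollout step with a trivial free-space checker:
q, r, done = step(np.zeros(7), np.full(7, 0.05), np.ones(7) * 0.1,
                  collision_fn=lambda q: False)
```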