2018 IEEE International Conference on Robotics and Automation (ICRA)
DOI: 10.1109/icra.2018.8460730

Learning Sampling Distributions for Robot Motion Planning

Abstract: A defining feature of sampling-based motion planning is the reliance on an implicit representation of the state space, which is enabled by a set of probing samples. Traditionally, these samples are drawn either probabilistically or deterministically to uniformly cover the state space. Yet, the motion of many robotic systems is often restricted to "small" regions of the state space, due to, for example, differential constraints or collision-avoidance constraints. To accelerate the planning process, it is thus d…
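To make the idea of non-uniform sampling concrete, below is a minimal sketch (not the paper's implementation) of how a learned sampling distribution can be mixed with uniform sampling inside a sampling-based planner. The `learned_sampler` callable and the Gaussian "corridor" placeholder are illustrative assumptions; the mixing weight `bias` stands in for whatever schedule a particular planner uses.

```python
import numpy as np

def uniform_sampler(bounds, rng):
    """Draw one state uniformly from an axis-aligned state-space box."""
    lo, hi = bounds
    return rng.uniform(lo, hi)

def mixed_sample(learned_sampler, bounds, rng, bias=0.5):
    """With probability `bias`, draw from the learned distribution;
    otherwise fall back to uniform sampling so the planner retains the
    coverage properties of the underlying sampling-based method."""
    if rng.random() < bias:
        return learned_sampler(rng)
    return uniform_sampler(bounds, rng)

# Example usage with a placeholder "learned" Gaussian concentrated
# around a hypothetical corridor region of a 2-D state space.
rng = np.random.default_rng(0)
bounds = (np.array([0.0, 0.0]), np.array([10.0, 10.0]))
corridor_sampler = lambda rng: rng.normal(loc=[5.0, 2.0], scale=0.5, size=2)
samples = [mixed_sample(corridor_sampler, bounds, rng) for _ in range(100)]
```

Keeping a uniform fallback is the usual design choice here: the learned distribution speeds up exploration of likely solution regions, while the uniform component preserves the ability to sample anywhere in the state space.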

Cited by 269 publications (207 citation statements).
References 39 publications.
“…The algorithm is used to plan for actions that move user-specified objects in the environment to user-defined locations. [10] presents a methodology for non-uniform sampling to accelerate sampling-based motion planning. In [11], a discrete RRT algorithm for path planning is shown.…”
Section: A. Motion Planning (mentioning)
confidence: 99%
“…In previous work, data-driven motion planning has often focused on learning search heuristics or policies for the motion planner rather than learning the underlying structure of the planner itself. Ichter et al. developed a method for learning a sampling distribution for RRT* motion planning [8]. Imitation learning can also be used to learn a search heuristic based on previously planned optimal paths [9], [10].…”
Section: B. Related Work (mentioning)
confidence: 99%
“…Related work by Paxton et al. integrates a learned high-level options policy in combination with a low-level control policy into the MCTS to improve the overall quality for a single-agent planning problem [16]. Other approaches to accelerating sampling-based motion planning have been proposed, using a conditional variational autoencoder to generate subspaces for sampling distributions over desired states [18]. Similarly, Banzhaf et al. learn a sampling distribution for poses of an RRT* path planning algorithm in semi-structured environments.…”
Section: B. Learned Heuristics (mentioning)
confidence: 99%
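The excerpt above refers to conditional variational autoencoders (CVAEs) for generating sampling distributions. The following is a hedged sketch of the decoder-side usage only: latent codes drawn from the prior are decoded into candidate states conditioned on the planning problem. The network sizes, conditioning vector, and helper names are illustrative assumptions, not the exact models from [8] or [18].

```python
import torch
import torch.nn as nn

class SampleDecoder(nn.Module):
    """Toy CVAE decoder: maps (latent code, problem conditioning) to a state."""
    def __init__(self, latent_dim=4, cond_dim=8, state_dim=2, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + cond_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, state_dim),
        )

    def forward(self, z, cond):
        # Decode a latent code plus problem conditioning (e.g. start, goal,
        # occupancy features) into a proposed state-space sample.
        return self.net(torch.cat([z, cond], dim=-1))

def propose_samples(decoder, cond, n_samples=128, latent_dim=4):
    """Draw latent codes from the standard-normal prior and decode them into
    candidate samples for the planner, all conditioned on the same problem."""
    z = torch.randn(n_samples, latent_dim)
    cond_batch = cond.expand(n_samples, -1)
    with torch.no_grad():
        return decoder(z, cond_batch)

# Example usage with an (untrained) decoder and a placeholder conditioning vector.
decoder = SampleDecoder()
cond = torch.zeros(1, 8)
candidate_states = propose_samples(decoder, cond)
```

In practice such a decoder would be trained jointly with an encoder on states from previously solved planning problems; the proposed states would then replace (or be mixed with) uniform samples inside an RRT*-style planner, as described in the citation statements above.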