2022 International Conference on Robotics and Automation (ICRA)
DOI: 10.1109/icra46639.2022.9811656
Stein Variational Probabilistic Roadmaps

Cited by 6 publications (3 citation statements)
References 22 publications
“…We can integrate value functions learned by reinforcement learning (Haarnoja et al., 2018b) or optimal control (Lutter et al., 2021). Alternatively, we could approximate an optimal trajectory distribution with particles, as in Stein variational MPC (Lambert et al., 2021), and exploit this multi-modal distribution as a guiding policy. These learned models can afterwards be combined with additional energy policies to handle parts of the task not covered by the RL or optimal control problem.…”
Section: Q-function In Optimal Control and Reinforcement Learning
confidence: 99%
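For context, the particle-based approximation referenced above (as in Stein variational MPC) rests on the SVGD update rule. The following is a minimal NumPy sketch of one SVGD step, not the cited paper's implementation; the RBF bandwidth `h`, step size, and function names are illustrative assumptions, and `grad_log_p` would in practice come from the gradient of a trajectory cost or value function.

```python
import numpy as np

def rbf_kernel(X, h=1.0):
    """RBF kernel matrix K and the gradient of k(x_i, x_j) w.r.t. x_i.

    X: (n, d) array of particles, e.g. flattened control trajectories.
    """
    diffs = X[:, None, :] - X[None, :, :]          # (n, n, d)
    sq_dists = np.sum(diffs ** 2, axis=-1)         # (n, n)
    K = np.exp(-sq_dists / (2.0 * h ** 2))         # (n, n)
    grad_K = -diffs / (h ** 2) * K[:, :, None]     # grad_K[i, j] = d k(x_i, x_j) / d x_i
    return K, grad_K

def svgd_step(X, grad_log_p, step_size=1e-2, h=1.0):
    """One SVGD update: particles move along the kernelized Stein direction.

    grad_log_p: (n, d) scores grad log p(x_j) at each particle, e.g. the
    negative gradient of a trajectory cost in a Stein-MPC-style setup.
    """
    n = X.shape[0]
    K, grad_K = rbf_kernel(X, h)
    # phi(x_i) = (1/n) * sum_j [ k(x_j, x_i) grad log p(x_j) + grad_{x_j} k(x_j, x_i) ]
    drift = K @ grad_log_p            # attraction toward high-density (low-cost) regions
    repulsion = grad_K.sum(axis=0)    # kernel-gradient term keeps particles diverse
    return X + step_size * (drift + repulsion) / n
```

The repulsion term is what preserves the multi-modality that the excerpt proposes to exploit as a guiding policy: particles spread across distinct low-cost trajectory modes rather than collapsing onto a single optimum.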
“…SVGD has proven useful in a number of robotic applications in recent years, including control, planning, and point cloud matching [31]–[34]. SVGD has been applied to graphical models to approximate joint distributions using kernels over local node neighborhoods [35] and conditional distributions over nodes [36].…”
Section: Stein Variational Inference
confidence: 99%
“…At each timestep, we execute the first action in the trajectory and rerun the optimization, as in model predictive control (MPC). This approach is akin to a multi-robot version of Stein MPC [31].…”
Section: A Problem Formulation
confidence: 99%
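The receding-horizon loop described in that excerpt (optimize a trajectory, execute only its first action, then re-optimize from the new state) can be sketched as below. This is an illustrative assumption of how such a loop is typically structured, not the cited work's code: `env`, `optimize_trajectories`, and `action_dim` are hypothetical placeholders for a simulator interface and a particle-based trajectory optimizer such as an SVGD solver.

```python
import numpy as np

def receding_horizon_control(env, optimize_trajectories, horizon=20, steps=100):
    """Minimal MPC-style loop: plan, execute the first action, replan.

    `env` and `optimize_trajectories` are hypothetical stand-ins:
    `optimize_trajectories(state, init)` is assumed to return a set of
    candidate control sequences (e.g., SVGD particles) and their costs.
    """
    state = env.reset()
    plan = np.zeros((horizon, env.action_dim))   # warm-start buffer for the optimizer
    for _ in range(steps):
        particles, costs = optimize_trajectories(state, init=plan)
        best = particles[np.argmin(costs)]       # pick the lowest-cost particle
        state = env.step(best[0])                # execute only the first action
        plan = np.roll(best, -1, axis=0)         # shift plan to warm-start the next solve
        plan[-1] = plan[-2]                      # repeat the last action as filler
    return state
```

Warm-starting the optimizer with the shifted previous plan is a common design choice in MPC-style loops, since successive problems differ only by one timestep.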