2014
DOI: 10.1177/0278364914528255

Integrated perception and planning in the continuous space: A POMDP approach

Abstract: The partially observable Markov decision process (POMDP) provides a principled mathematical model for integrating perception and planning, a major challenge in robotics. While there are efficient algorithms for moderately large discrete POMDPs, continuous models are often more natural for robotic tasks, and currently there are no practical algorithms that handle continuous POMDPs at an interesting scale. This paper presents an algorithm for continuous-state, continuous-observation POMDPs. We provide experimental…
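Since the paper targets continuous-state, continuous-observation POMDPs, beliefs can no longer be stored as vectors over a finite state set; sampling-based representations are the usual workaround. Below is a minimal, illustrative particle-filter belief update in Python. It is a generic sketch of belief tracking under caller-supplied motion and sensor models, not the paper's algorithm, and all function names are assumptions.

```python
import numpy as np

def particle_belief_update(particles, weights, action, observation,
                           transition_sample, observation_likelihood,
                           rng=None):
    """One Bayes-filter step on a particle-set belief over a continuous state.

    particles: (N, d) array of state samples; weights: (N,) probabilities.
    transition_sample(x, a) and observation_likelihood(z, x) are the
    caller-supplied motion and sensor models (assumptions here).
    """
    rng = rng or np.random.default_rng()
    # Propagate each particle through the stochastic dynamics.
    propagated = np.array([transition_sample(x, action) for x in particles])
    # Reweight by how well each propagated particle explains the observation.
    w = weights * np.array([observation_likelihood(observation, x)
                            for x in propagated])
    total = w.sum()
    w = np.full(len(w), 1.0 / len(w)) if total == 0.0 else w / total
    # Resample to curb weight degeneracy; return a uniform-weight belief.
    idx = rng.choice(len(propagated), size=len(propagated), p=w)
    return propagated[idx], np.full(len(propagated), 1.0 / len(propagated))
```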

Cited by 101 publications (95 citation statements)
References 16 publications
“…Finite-state controllers explicitly represent infinite-horizon policies, but they can also be used (as a possibly more concise representation) for finite-horizon policies. They have been widely used in POMDPs (e.g., see Kaelbling et al. 1998, Hansen 1998, Meuleau et al. 1999b, Poupart and Boutilier 2004, Poupart 2005, Toussaint et al. 2006, 2008, Grześ et al. 2013, Bai et al. 2014) and Dec-POMDPs (e.g., Bernstein et al. 2005, Amato et al. 2007a, Bernstein et al. 2009, Kumar and Zilberstein 2010b, Pajarinen and Peltonen 2011a, Kumar et al. 2011, Pajarinen and Peltonen 2011b, Wu et al. 2013).…”
Section: Policy Representation (mentioning)
confidence: 99%
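To make the representation concrete, here is a minimal deterministic finite-state controller in Python: each node emits a fixed action, and the received observation selects the next node. The structure and names are illustrative, not any specific published formulation.

```python
from dataclasses import dataclass
from typing import Dict, Hashable, Tuple

@dataclass
class FiniteStateController:
    """Minimal deterministic finite-state controller: each node emits a
    fixed action; the observation received selects the next node."""
    action_of: Dict[int, str]                    # psi: node -> action
    next_node: Dict[Tuple[int, Hashable], int]   # eta: (node, observation) -> node
    node: int = 0                                # current controller node

    def act(self) -> str:
        return self.action_of[self.node]

    def observe(self, obs: Hashable) -> None:
        self.node = self.next_node[(self.node, obs)]

# Example: a two-node controller that drives forward until it observes
# "bump", then turns until the way is "clear" again.
fsc = FiniteStateController(
    action_of={0: "forward", 1: "turn"},
    next_node={(0, "bump"): 1, (0, "clear"): 0,
               (1, "clear"): 0, (1, "bump"): 1})
```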
“…For clarity of presentation, however, we will proceed with the entire joint state $X_{k+l}$, which contains $X^c_{k+l}$. We thus define the generalized belief space (GBS) at the $l$th planning step as…”
Section: Approach Overview (mentioning)
confidence: 99%
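The excerpt truncates the actual definition. As a hedged sketch only, one common form of such a generalized belief conditions the joint state on all observations and controls so far; the symbols below follow the excerpt's notation, but the exact conditioning is an assumption:

```latex
% Hedged sketch: a common form of a generalized belief over the joint
% state, conditioned on all observations and controls up to step k+l.
b\left[X_{k+l}\right] \doteq p\left(X_{k+l} \mid \mathcal{Z}_{0:k+l},\; u_{0:k+l-1}\right)
```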
“…The approaches falling in this second category usually assume that the robot moves in a known environment; a remarkable property of these techniques is that they approach optimality as the available runtime increases (runtime that is exponential in the size of the problem). A recent example of infinite-horizon planning is the work [1], in which Bai et al. apply a Monte Carlo sampling technique to update an initial policy, assuming maximum-likelihood observations. Finally, receding-horizon strategies compute a policy over the next L control actions, where L is a given horizon.…”
Section: Introduction (mentioning)
confidence: 99%
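The simplification mentioned here, assuming maximum-likelihood observations, collapses the observation branching during policy evaluation. The Python sketch below illustrates that idea for a rollout-style value estimate; it is a generic illustration under caller-supplied models, not Bai et al.'s actual algorithm, and every callable name is an assumption.

```python
import numpy as np

def ml_observation_rollout(belief_particles, policy, transition_sample,
                           ml_observation, observation_update, reward,
                           horizon, discount=0.95):
    """Rollout value estimate for `policy` from a particle belief, taking
    the single maximum-likelihood observation at each step instead of
    branching over the continuous observation space.

    All model callables (policy, transition_sample, ml_observation,
    observation_update, reward) are caller-supplied assumptions.
    """
    b = np.asarray(belief_particles)
    value, gamma = 0.0, 1.0
    for _ in range(horizon):
        a = policy(b)                                        # act on current belief
        value += gamma * np.mean([reward(x, a) for x in b])  # expected reward
        b = np.array([transition_sample(x, a) for x in b])   # propagate dynamics
        z = ml_observation(b, a)                             # most likely observation
        b = observation_update(b, a, z)                      # Bayes update with z
        gamma *= discount
    return value
```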
“…2) Environment with beacons: We also consider a scenario where the car-like robot estimates its location using measurements from two beacons, $b_1$ and $b_2$. Figure 6 shows the B-SELQR trajectory and the associated beliefs along the trajectory. The computed control policies steer the robot to move in the vicinity of the two beacons in order to better localize itself before proceeding to the goal.…”
Section: Light-Dark Environment (mentioning)
confidence: 99%
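The localization detour arises because measurement noise shrinks near a beacon, so the belief contracts there. Below is a minimal Python sketch of such a distance-dependent sensor model; the beacon positions and noise parameters are hypothetical, not B-SELQR's exact model.

```python
import numpy as np

# Hypothetical beacon positions standing in for b1 and b2 (assumptions).
BEACONS = np.array([[2.0, 1.0], [4.0, 3.0]])

def measure_position(x, rng=None):
    """Noisy position measurement whose standard deviation grows with the
    distance to the nearest beacon, so beliefs contract near a beacon."""
    rng = rng or np.random.default_rng()
    d = np.min(np.linalg.norm(BEACONS - x, axis=1))  # distance to nearest beacon
    sigma = 0.05 + 0.5 * d                           # illustrative noise scaling
    return x + rng.normal(0.0, sigma, size=np.shape(x))
```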