2015
DOI: 10.1109/tro.2015.2441511
Extending the Applicability of POMDP Solutions to Robotic Tasks

Abstract: Partially-Observable Markov Decision Processes (POMDPs) are used in many robotic task classes, from soccer to household chores. Determining an approximately optimal action policy for a POMDP is PSPACE-complete, and the exponential growth of computation time prohibits solving large tasks. This paper describes two techniques to extend the range of robotic tasks that can be solved using a POMDP. Our first technique reduces the motion constraints of a robot, and then uses state-of-the-art robotic motion pla…

Cited by 27 publications (12 citation statements) · References 29 publications
“…The size of the input POMDP is decreased, leading to an improved runtime for the POMDP solver. The output policy of the solver is re-evaluated to make sure it works for every input problem [13].…”
Section: Related Work and Motivation
confidence: 99%
“…All of the aforementioned techniques relate to multi-agent systems, but some do not present a viable method for agents to learn during exploration [1][2][3][4][5][6][7][8][9][10][11][12][13][14][15][16][17]; moreover, agents that do learn cannot detect errors on their own, so those systems are not fault tolerant [18][19][20][21][22][23][24]. For exploration, agents must also be able to make decisions quickly, so computation time for each agent should be limited.…”
confidence: 99%
“…MDPs and partially observable Markov decision processes (POMDPs) have been used extensively in the context of robotics (Grady et al, 2015). Winterer et al (2017) used POMDPs for motion planning and Chatterjee et al (2015) used POMDPs for qualitative analysis; however, to the best of the authors' knowledge, the sensor calibration problem has not been cast as an MDP or POMDP.…”
Section: Related Work
confidence: 99%
“…Through trial and error, the agent builds a predictive model. Traditionally, the POMDP framework is used in robotic tasks [15]. By processing its sensors' signals, the robot builds a representation of the environment from which it can take actions.…”
Section: The POMDP Theory
confidence: 99%
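The "representation of the environment" a POMDP agent maintains is its belief state, updated by Bayes' rule after each action and observation. A minimal sketch of this belief update is below; the two-state door model, transition matrix, and observation matrix are illustrative assumptions, not taken from the paper or the citing works.

```python
import numpy as np

def belief_update(belief, action, observation, T, O):
    """Bayes-filter belief update for a discrete POMDP.

    T[a][s, s'] = P(s' | s, a)  (transition model)
    O[a][s', o] = P(o | s', a)  (observation model)
    """
    predicted = belief @ T[action]                      # prediction: sum_s b(s) P(s'|s,a)
    posterior = predicted * O[action][:, observation]   # weight by observation likelihood
    return posterior / posterior.sum()                  # normalize to a distribution

# Hypothetical two-state example: a door that is open (0) or closed (1).
T = {0: np.array([[0.9, 0.1],
                  [0.1, 0.9]])}   # action 0: the state mostly persists
O = {0: np.array([[0.8, 0.2],
                  [0.3, 0.7]])}   # noisy sensor for "looks open" / "looks closed"

b0 = np.array([0.5, 0.5])                               # uniform prior belief
b1 = belief_update(b0, action=0, observation=0, T=T, O=O)
```

Starting from a uniform prior, observing "looks open" shifts the belief toward the open state (here b1 ≈ [0.727, 0.273]); repeating this update each step is what lets the robot act under partial observability.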