Active Visual Planning for Mobile Robot Teams Using Hierarchical POMDPs
2013
DOI: 10.1109/tro.2013.2252252

Cited by 23 publications (31 citation statements, published 2014–2022); references 15 publications.
“…For each scene, the scene processing (SP)-POMDP plans the processing of regions of images of the scene using available algorithms. This hierarchical decomposition supports automatic belief propagation between the levels of the hierarchy and automatic model creation at each level [28], [29]. Thus, ASP-based inference operates at the (abstract) level of rooms, and POMDPs plan at the higher resolution of cells.…”
Section: B. Planning Under Uncertainty With Partially Observable Markov Decision Processes (mentioning)
confidence: 99%
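The hierarchical belief propagation mentioned in this excerpt can be illustrated with a minimal sketch (a hypothetical two-level cell/room layout with assumed names and values, not the authors' implementation): the cell-level belief is marginalized up to rooms, and a revised room-level belief is redistributed back down over each room's cells.

```python
import numpy as np

# Toy two-level hierarchy: 6 cells partitioned into 2 rooms (hypothetical layout).
room_of_cell = np.array([0, 0, 0, 1, 1, 1])                    # cell index -> room index
cell_belief = np.array([0.05, 0.10, 0.15, 0.30, 0.25, 0.15])   # P(target in cell)

# Bottom-up: the room-level belief is the marginal of the cell-level belief.
num_rooms = room_of_cell.max() + 1
room_belief = np.array([cell_belief[room_of_cell == r].sum() for r in range(num_rooms)])

# Top-down: after high-level (e.g., ASP-based) reasoning revises the room belief,
# redistribute it over each room's cells in proportion to the old cell belief.
revised_room_belief = np.array([0.8, 0.2])    # hypothetical high-level result
new_cell_belief = cell_belief.copy()
for r in range(num_rooms):
    mask = room_of_cell == r
    new_cell_belief[mask] = revised_room_belief[r] * cell_belief[mask] / cell_belief[mask].sum()

print(room_belief)             # [0.3 0.7]
print(new_cell_belief.sum())   # 1.0
```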
“…This formulation can become computationally intractable for real-time operation because the number of grid cells can increase significantly in complex domains. Our previous work [28] addressed this challenge by enabling robots to learn a convolutional policy kernel from the policy for a small region, exploiting the rotation and shift invariance properties of visual search. This kernel is convolved with larger maps to efficiently generate appropriate policies.…”
Section: B. Planning Under Uncertainty With Partially Observable Markov Decision Processes (mentioning)
confidence: 99%
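A rough sketch of the convolutional policy-kernel idea (assumed kernel values, map size, and scipy.signal.convolve2d as the convolution backend; not the implementation in [28]): shift invariance lets the same small kernel, learned on a small region, be applied everywhere in a larger belief map, and the highest-scoring cell is read out greedily.

```python
import numpy as np
from scipy.signal import convolve2d

# Hypothetical 3x3 policy kernel learned on a small region: it scores how
# promising a cell is given the belief in its neighbourhood (center-weighted).
policy_kernel = np.array([[0.05, 0.10, 0.05],
                          [0.10, 0.40, 0.10],
                          [0.05, 0.10, 0.05]])

# Belief map over a larger grid (probability that the target is in each cell).
rng = np.random.default_rng(0)
belief_map = rng.random((20, 30))
belief_map /= belief_map.sum()

# Apply the same kernel at every cell via 2-D convolution.
score_map = convolve2d(belief_map, policy_kernel, mode="same", boundary="fill")

# Greedy read-out: examine the cell with the highest convolved score next.
next_cell = np.unravel_index(np.argmax(score_map), score_map.shape)
print("next cell to examine:", next_cell)
```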
“…POMDPs form a general and powerful mathematical basis for planning under uncertainty, and their use in mobile robotic applications has increased [13,14,55,57,62]. POMDPs provide a comprehensive approach for modeling the interaction of an active sensor with its environment.…”
Section: A. Decision-theoretic Approach (mentioning)
confidence: 99%
“…Work on active sensing using POMDPs includes collaboration between mobile robots to detect static objects [62] and combining a POMDP approach with information-theoretic heuristics to reduce uncertainty on goal position and probability of collision in robot navigation [12]. Finally, Spaan and Lima [52] consider objectives such as maximizing coverage or improving localization uncertainty when dynamically selecting a subset of image streams to be processed simultaneously.…”
Section: Active Sensing (mentioning)
confidence: 99%
“…The optimal action policy consists of the optimal action for any possible sequence of observations, such that the expected total reward is maximized under the POMDP model of sensing and action uncertainty. POMDPs have been established as a tool to solve a variety of tasks in robot soccer [2], household robotics [3], coastal survey [4], and even nursing assistance [5].…”
mentioning
confidence: 99%
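For reference, the quantities this excerpt refers to are the standard POMDP objective and belief update (textbook notation, not taken from the cited paper):

```latex
% Optimal policy: maximize expected discounted total reward from the initial belief b_0
\pi^{*} = \arg\max_{\pi} \; \mathbb{E}\!\left[ \sum_{t=0}^{\infty} \gamma^{t} R(s_t, a_t) \;\middle|\; b_0, \pi \right]

% Belief update after taking action a and receiving observation o
b'(s') = \frac{O(s', a, o) \sum_{s \in S} T(s, a, s')\, b(s)}
              {\sum_{s'' \in S} O(s'', a, o) \sum_{s \in S} T(s, a, s'')\, b(s)}
```

Here $b$ is the current belief over states, $T$ and $O$ are the transition and observation models, $R$ is the reward function, and $\gamma \in [0,1)$ is the discount factor.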