2009
DOI: 10.1007/s10626-009-0071-x
Partially Observable Markov Decision Process Approximations for Adaptive Sensing

Abstract: Adaptive sensing involves actively managing sensor resources to achieve a sensing task, such as object detection, classification, or tracking, and represents a promising direction for new applications of discrete event system methods. We describe an approach to adaptive sensing based on approximately solving a partially observable Markov decision process (POMDP) formulation of the problem. Such approximations are necessary because of the very large state space involved in practical adaptive sensing problems, …

Cited by 80 publications (60 citation statements)
References 40 publications
“…The sensors required to properly assess each event will depend on that event's properties. In particular, an event may require several different sensing modalities, such as electro-optical, infra-red, synthetic aperture radar, foliage-penetrating radar, and moving target indication radar [45]. One solution would be to equip each UAV with all sensing modalities (or services).…”
Section: Routing For Robotic Network: Team Forming For Cooperat…
confidence: 99%
“…One of the main problems in using the POMDP formulation is that finding the optimal policy is, in general, computationally prohibitive. Fortunately, several methods exist for approximating optimal policies in POMDPs (see [8] for a review of such methods). In our numerical examples, we have used one of these approximation methods, known as rollout, and have compared our solution with a class of nonadaptive methods described in Section 4.…”
Section: Introduction
confidence: 99%
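The rollout approximation cited above can be sketched as a one-step lookahead that scores each candidate action by simulating a heuristic base policy out to a fixed horizon, then acting greedily on those scores. This is a minimal sketch, not the paper's implementation: the `step` model, `base_policy`, and the toy one-dimensional problem in the usage example are illustrative assumptions.

```python
def rollout(belief, actions, base_policy, step, horizon):
    """Rollout approximation for a POMDP-style planner: choose the action
    whose simulated cumulative reward, following `base_policy` after the
    first step, is largest. `step(b, a)` returns (next_belief, reward)
    under an assumed (here deterministic) model."""
    def value(b, a):
        total = 0.0
        for _ in range(horizon):
            b, r = step(b, a)      # apply the chosen action, collect reward
            total += r
            a = base_policy(b)     # thereafter follow the base policy
        return total
    return max(actions, key=lambda a: value(belief, a))


# Toy usage: the "belief" is a scalar we want driven to 0, the reward
# penalizes distance from 0, and the base policy steps toward 0.
def step(b, a):
    return b + a, -abs(b + a)

def base_policy(b):
    return 1 if b < 0 else -1

best = rollout(3, [-1, 1], base_policy, step, horizon=3)  # picks -1
```

In practice `step` would sample from the POMDP's transition and observation models (making the value estimate a Monte Carlo average), but the greedy-over-rollout-values structure is the same.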
“…As i < M, the performance associated with the current decision P_i is defined as the averaged “optimal” performance over all possible realizations of the resulting current measurement g_i. Evaluating (9) over the whole parameter space {g^(M−1), P^(M−1)} is often computationally prohibitive due to the large dimensionality.…”
Section: B Non-greedy Adaptive Measurement Design
confidence: 99%
“…is often referred to as the belief state [9], which captures the sufficient statistics of the system and the interactive environment modeled by a Markov decision process. To further reduce the computational complexity, we optimize the measurement vector over a pre-determined candidate set P = {P_1, …, P_K}.…”
Section: B Non-greedy Adaptive Measurement Design
confidence: 99%
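The belief state referred to in this excerpt is the posterior distribution over the underlying Markov state given the measurement history. A minimal Bayes-filter update over a finite state space, sketched below with illustrative (made-up) transition and observation matrices, shows the recursion that makes the belief a sufficient statistic:

```python
def belief_update(belief, trans, obs_lik, z):
    """One Bayes-filter step for a finite-state model: predict the belief
    through the transition matrix `trans[x][x']`, then reweight by the
    likelihood `obs_lik[x'][z]` of the received measurement z and normalize."""
    n = len(belief)
    predicted = [sum(trans[x][xp] * belief[x] for x in range(n))
                 for xp in range(n)]
    posterior = [obs_lik[xp][z] * predicted[xp] for xp in range(n)]
    norm = sum(posterior)
    return [p / norm for p in posterior]


# Illustrative two-state example (static state, noisy sensor).
trans = [[1.0, 0.0],
         [0.0, 1.0]]
obs_lik = [[0.9, 0.1],   # P(z | state 0)
           [0.2, 0.8]]   # P(z | state 1)
b = belief_update([0.5, 0.5], trans, obs_lik, z=0)
```

Restricting the measurement design to a candidate set P = {P_1, …, P_K}, as the excerpt describes, then amounts to scoring each candidate by the expected quality of the beliefs it induces and keeping the best, rather than searching the full continuous design space.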