2018
DOI: 10.1109/tac.2017.2755366
Safe Markov Chains for ON/OFF Density Control With Observed Transitions

Abstract: This paper presents a convex optimization approach to control the density distribution of autonomous mobile agents with two control modes: ON and OFF. The main new characteristic distinguishing this model from standard Markov decision models is the existence of the ON control mode and its observed actions. When an agent is in the ON mode, it can measure the instantaneous outcome of one of the actions corresponding to the ON mode and decide whether to take this action based on this new observation…
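A minimal agent-level sketch of the idea described in the abstract: agents propose transitions from a Markov matrix, and an agent in the ON mode observes the proposed destination before committing and may reject it. The matrix, the desired density vector, and the rejection rule below are illustrative assumptions, not the paper's synthesis procedure (which the abstract states is posed as a convex optimization).

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sketch only (names and rules are assumptions, not the paper's
# formulation): each agent proposes a transition from a Markov matrix M.
# An agent in the ON mode additionally observes the proposed destination
# before committing and may reject it (here: if the destination is already
# over its desired density), staying in place instead.

def step(agents, M, on_mask, desired):
    """Advance every agent one step; ON agents may reject their proposal."""
    n_regions = M.shape[0]
    counts = np.bincount(agents, minlength=n_regions) / len(agents)
    new_agents = agents.copy()
    for i, s in enumerate(agents):
        proposal = rng.choice(n_regions, p=M[s])
        if on_mask[i] and counts[proposal] > desired[proposal]:
            continue          # ON agent observes the outcome and rejects it
        new_agents[i] = proposal
    return new_agents

# Hypothetical 3-region example.
M = np.array([[0.7, 0.3, 0.0],
              [0.2, 0.6, 0.2],
              [0.0, 0.3, 0.7]])
desired = np.array([0.2, 0.5, 0.3])
agents = rng.integers(0, 3, size=500)
on_mask = rng.random(500) < 0.5       # half the agents are in ON mode
for _ in range(50):
    agents = step(agents, M, on_mask, desired)
print(np.bincount(agents, minlength=3) / len(agents))
```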

Cited by 8 publications (4 citation statements)
References 44 publications
“…The edge label y represents the Euclidean distance between the centroids of the subregions. The probabilistic densities of the robots in the subregions are governed by a time-varying Markov chain [21]. Figure 6: The swarm of 72 robots in the 9 sub-regions.…”
Section: Case Study (mentioning)
confidence: 99%
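A minimal sketch of the density recursion referenced in the quoted case study, assuming only that subregion densities evolve as x_{t+1} = M_t^T x_t under a sequence of row-stochastic matrices M_t; the matrices below are hypothetical and not taken from the cited work.

```python
import numpy as np

# Minimal sketch, assuming nothing about the cited construction beyond the
# statement that subregion densities evolve under a time-varying Markov
# chain: x_{t+1} = M_t^T x_t with each M_t row-stochastic.

def propagate(Ms, x0):
    """Propagate a density x0 through a sequence of row-stochastic matrices."""
    x = np.asarray(x0, dtype=float)
    traj = [x]
    for M in Ms:
        assert np.allclose(M.sum(axis=1), 1.0), "each M_t must be row-stochastic"
        x = M.T @ x
        traj.append(x)
    return np.array(traj)

# Hypothetical two-step example with 3 subregions.
M0 = np.array([[0.9, 0.1, 0.0],
               [0.1, 0.8, 0.1],
               [0.0, 0.1, 0.9]])
M1 = np.array([[0.8, 0.2, 0.0],
               [0.2, 0.6, 0.2],
               [0.0, 0.2, 0.8]])
print(propagate([M0, M1], [1/3, 1/3, 1/3]))
```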
“…The edge label y represents the Euclidean distance between the centroids of the subregions. The probabilistic densities of the robots in the subregions are governed by a time-varying Markov chain [21]. We randomly generate graph-temporal trajectories and randomly choose 10 from them that satisfy the following constraint: whenever the probabilistic density of a subregion reaches above 1/8, then for the next 2 time units there always exists at least one neighbor subregion within distance of 1 with probabilistic density below 1/9.…”
Section: Case Study (mentioning)
confidence: 99%
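A sketch of a checker for the constraint quoted above, under assumed data layouts (a T x n density trajectory and a symmetric distance matrix). The thresholds 1/8 and 1/9, the 2-time-unit window, and the distance bound of 1 come from the quoted statement; everything else is an illustrative assumption.

```python
import numpy as np

# Sketch of a checker for the quoted constraint; the data layout
# (density trajectory as a T x n array, symmetric distance matrix) is an
# assumption for illustration, not the cited paper's representation.

def satisfies_constraint(densities, dist, hi=1/8, lo=1/9, horizon=2, radius=1.0):
    """densities: (T, n) array of subregion densities; dist: (n, n) distances."""
    T, n = densities.shape
    for t in range(T):
        for r in range(n):
            if densities[t, r] > hi:
                # For the next `horizon` time units, some neighbor within
                # `radius` must have density below `lo`.
                for k in range(1, horizon + 1):
                    if t + k >= T:
                        break
                    neighbors = [q for q in range(n)
                                 if q != r and dist[r, q] <= radius]
                    if not any(densities[t + k, q] < lo for q in neighbors):
                        return False
    return True
```

Whether the 2-time-unit window includes the violating instant itself is ambiguous in the quoted statement; the sketch checks the two subsequent steps.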
“…MDPs are widely used in applications such as motion planning [18]. In [19], [20], constraints are imposed on the state probability density function of an MDP under control. Probabilistic invariance, as developed in this paper, can be used for such control systems to characterize the invariant region of the state space.…”
Section: A Background and Motivation (mentioning)
confidence: 99%
“…Applications: Both homogeneous [277]-[284] and inhomogeneous [276], [285]-[287] MC algorithms can be used for pattern formation, coverage, area exploration, and goal searching. Additional applications include multi-agent surveillance [288], coverage [289], [290], and task allocation [291], [292].…”
Section: A Markov Chain (MC) Based Algorithm (mentioning)
confidence: 99%