2018
DOI: 10.48550/arxiv.1803.11432
Preprint

A Viscosity Approach to Stochastic Differential Games of Control and Stopping Involving Impulsive Control

David Mguni

Abstract: This paper analyses a stochastic differential game of control and stopping in which one of the players modifies a diffusion process using impulse controls; an adversary then chooses a stopping time to end the game. The paper first establishes the regularity and boundedness of the upper and lower value functions, from which an appropriate variant of the dynamic programming principle (DPP) is derived. It is then proven that the upper and lower value functions coincide, so that the game admits a value, and that th…
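For orientation, here is a minimal sketch of the standard objects in such a control-and-stopping game with impulse control, under an assumed generic payoff (the paper's exact running cost, intervention cost, and sign conventions may differ). The impulse controller chooses intervention times and impulses $u=(\tau_j,\xi_j)_{j\ge 1}$, the adversary chooses a stopping time $\rho$, and the game admits a value precisely when the lower and upper value functions coincide:

```latex
% Hypothetical sketch; notation assumed for illustration, not taken from the paper.
% Payoff optimised over impulse controls u = (\tau_j, \xi_j) and stopping times \rho:
\[
  J(x; u, \rho)
  = \mathbb{E}_x\!\Big[ \int_0^{\rho} f(X_s)\,ds
      \;+\; \sum_{j :\, \tau_j \le \rho} c\big(X_{\tau_j^-}, \xi_j\big)
      \;+\; G(X_\rho) \Big].
\]
% Lower and upper value functions; the game has a value when they coincide,
% which the paper establishes via a DPP and a viscosity-solution characterisation.
\[
  V^-(x) \;=\; \sup_{u} \, \inf_{\rho} \; J(x; u, \rho),
  \qquad
  V^+(x) \;=\; \inf_{\rho} \, \sup_{u} \; J(x; u, \rho).
\]
```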

Cited by 6 publications (7 citation statements)
References 24 publications (43 reference statements)
“…In continuous-time optimal control theory [24], problems in which the agent faces a cost for each action are tackled with a form of policy known as impulse control [22,19,2]. In impulse control frameworks, the dynamics of the system are modified through a sequence of discrete actions or bursts applied at times chosen by the agent.…”
Section: Related Work (mentioning)
confidence: 99%
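As a concrete illustration of the dynamics described in the excerpt above, a generic impulse-controlled diffusion can be written as follows (the drift $\mu$, volatility $\sigma$, and intervention map $\Gamma$ are assumed for illustration and are not taken from the cited works):

```latex
% Between interventions the state follows the uncontrolled SDE; at each
% intervention time \tau_{j+1} the agent applies an impulse \xi_{j+1}
% (typically paying an intervention cost).
\[
  dX_t = \mu(X_t)\,dt + \sigma(X_t)\,dW_t, \qquad t \in [\tau_j, \tau_{j+1}),
\]
\[
  X_{\tau_{j+1}} = \Gamma\big(X_{\tau_{j+1}^-}, \xi_{j+1}\big),
  \qquad \text{e.g. } \Gamma(x,\xi) = x + \xi .
\]
```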
“…In this paper, we tackle this problem by developing an RL framework for learning both an optimal criterion for deciding whether or not to execute actions and the optimal actions themselves. A key component of our framework is a novel combination of RL with a form of policy known as impulse control [22,19]. This enables the agent to determine the appropriate points at which to perform an action as well as the optimal action itself.…”
Section: Introduction (mentioning)
confidence: 99%
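To make the "when to act and what action" split concrete, here is a hypothetical minimal sketch (not the cited framework's actual architecture): a policy with a gating head that decides whether to intervene in the current state and an action head that proposes the impulse to apply.

```python
# Hypothetical sketch, not the cited framework: a two-headed policy in the
# spirit of impulse-control RL, separating "whether to act" from "which impulse".
import torch
import torch.nn as nn

class ImpulsePolicy(nn.Module):
    def __init__(self, state_dim: int, action_dim: int, hidden: int = 64):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.gate_head = nn.Linear(hidden, 1)             # logit for "intervene or not"
        self.action_head = nn.Linear(hidden, action_dim)  # proposed impulse if intervening

    def forward(self, state: torch.Tensor):
        h = self.trunk(state)
        intervene_prob = torch.sigmoid(self.gate_head(h))  # probability of acting now
        impulse = self.action_head(h)                       # impulse magnitude
        return intervene_prob, impulse

# Usage: the agent only applies (and pays the per-action cost for) an impulse
# when the gate fires; otherwise the system evolves under its own dynamics.
policy = ImpulsePolicy(state_dim=4, action_dim=2)
prob, xi = policy(torch.zeros(1, 4))
act = torch.bernoulli(prob)  # 1 -> apply impulse xi, 0 -> do nothing this step
```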
“…Generator adaptively guides the agents' exploration and behaviour towards coordination and maximal joint performance. A pivotal feature of LIGS is the novel combination of RL and switching controls (Bayraktar & Egami, 2010; Mguni, 2018), which enables it to determine the best set of states at which to learn to add intrinsic rewards while disregarding less useful states. This enables Generator to quickly learn how to set intrinsic rewards that guide the agents during their learning process.…”
Section: Introduction (mentioning)
confidence: 99%
“…Our setup is related to stochastic differential games with impulse control [23,9]. However, our Markov Game (MG) differs markedly: it is nonzero-sum, an agent assumes control, and it is formulated in discrete time.…”
mentioning
confidence: 99%