2014
DOI: 10.1016/j.jet.2013.09.005
A constructive study of Markov equilibria in stochastic games with strategic complementarities

Cited by 37 publications (26 citation statements)
References 35 publications
“…All these papers impose special conditions on the payoff functions and state transitions.⁴ Additional ergodic properties were obtained in [10] under stronger conditions.⁵ An error in the relevant example of [27] was pointed out in [28], and a new example was presented therein.…”
mentioning
confidence: 99%
“…These alternatives provide a direct characterization of specific subgame-perfect equilibria in certain settings. Examples include Herings and Peeters [11], who provide a homotopy method for computing stationary equilibria in dynamic games, and Balbus et al. [4], who show how to calculate extremal Markov equilibria in stochastic games with complementarities. We also mention Feng et al. [9], who use an approach similar to ours to characterize equilibria in economies with distortions and taxes.…”
Section: Literature
mentioning
confidence: 99%
“…with an inherited pair of state variables (s_t, k_t) ∈ S × K, where S is finite and K ⊂ R^N is compact.⁴ The former is a shock and evolves according to a Markov chain transition; the latter is an endogenous state whose evolution is defined below. On entering period t, the agents simultaneously select a profile of actions a_t = {a_t^i}_{i=1}^I, with each agent i choosing his or her action a_t^i from a compact set A_i(s_t, k_t) ⊂ R^D.…”
Section: The Game
mentioning
confidence: 99%
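The excerpt above sketches the stage-game primitives. A minimal restatement of that timing in display form, using the excerpt's notation (the transition-kernel symbol Q is an assumption here; the excerpt's own symbol was lost in extraction):

```latex
% Stage-t primitives as described in the excerpt.
% Q is a hypothetical name for the Markov transition of the shock s_t.
\[
(s_t, k_t) \in S \times K, \qquad
s_{t+1} \sim Q(\cdot \mid s_t), \qquad
a_t = \{a_t^i\}_{i=1}^{I}, \quad
a_t^i \in A_i(s_t, k_t) \subset \mathbb{R}^D .
\]
```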
“…Stochastic games were introduced in a seminal paper by Shapley [47], and they can be applied, e.g., in industrial organization [23,43], taxation [44], fish wars, stochastic growth models, communication networks, queues, and hide-and-search problems between army forces; see the references in [5,9,21,25]. The stochastic game model extends both Markov decision processes (MDPs), which have only a single decision maker, and repeated games, where the players encounter the same game over and over again.…”
Section: Introduction
mentioning
confidence: 99%
“…Sleet and Yeltekin [48] and Yeltekin [53] compute correlated subgame-perfect equilibria, which need not be Markovian, using lower and upper bounds. Balbus et al. [9] provide a constructive method for finding stationary Markov strategies in uncountable state spaces where APS-type methods may fail. Feng et al. [24] find Markov equilibria in a model with short-run players.…”
Section: Introduction
mentioning
confidence: 99%
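Several excerpts above describe the cited paper's constructive approach: with strategic complementarities, best responses are monotone, so iterating from the extremal points of the strategy lattice converges monotonically to the least and greatest equilibria. A toy sketch of that monotone-iteration idea, for a static two-player supermodular game with a made-up payoff (this illustrates the lattice-theoretic mechanism only; it is not the paper's algorithm, which operates on dynamic stochastic games):

```python
# Toy supermodular game: two symmetric players, actions on the grid {0,...,4}.
# Payoff u(a_i, a_j) = a_i * a_j - a_i**2 has increasing differences in
# (a_i, a_j), so best responses are monotone and Tarski-style iteration
# from the lattice extremes converges to the extremal equilibria.
ACTIONS = range(5)

def payoff(a_i, a_j):
    return a_i * a_j - a_i ** 2

def best_response(a_j):
    # Tie-break toward the largest maximizer (selects the greatest best response).
    return max(ACTIONS, key=lambda a: (payoff(a, a_j), a))

def iterate(start):
    # Apply the joint best-response map until it reaches a fixed point;
    # starting from the lattice bottom/top gives a monotone sequence.
    profile = (start, start)
    while True:
        nxt = (best_response(profile[1]), best_response(profile[0]))
        if nxt == profile:
            return profile
        profile = nxt

least = iterate(min(ACTIONS))     # upward iteration from the bottom
greatest = iterate(max(ACTIONS))  # downward iteration from the top
print(least, greatest)            # → (0, 0) (1, 1)
```

Starting from the top, the iteration steps down (4,4) → (2,2) → (1,1) and stops; starting from the bottom it is already at the fixed point (0,0). Both fixed points are Nash equilibria, and every equilibrium of the game lies between them in the lattice order.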