1997
DOI: 10.2139/ssrn.41214

Learning in Cournot Oligopoly: An Experiment

Cited by 143 publications (222 citation statements). References 13 publications.
“…The 'best response' process defined in (12) yields a Markov chain over the state space, whose convergence to a stable equilibrium (the NE profile) cannot be assured globally. 8 It is well known that the introduction of inertia 9 stabilizes the best response dynamics, as shown by Huck et al (1999), whose result can be readily applied to our linear CPR setting. Note that in our experimental design we did not introduce any inertia.…”
Section: Best Response Learning
Citation type: mentioning (confidence: 78%)
“…9 Inertia is introduced by assuming that in each round, with some independent probability, a subject will stick to her previous effort instead of following the best response. Huck et al (1999) demonstrated that the resulting Markov process converges globally in finite time to the NE for any positive inertia probability.…”
Section: Imitate the Average
Citation type: mentioning (confidence: 98%)
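The inertia mechanism described in this excerpt can be illustrated with a small simulation. The sketch below is not the construction from Huck et al (1999) or the paper's experimental design: the linear Cournot parameters (demand intercept `a`, slope `b`, marginal cost `c`), the group size, and the inertia probability are all illustrative assumptions. Each round, every firm independently keeps its previous quantity with the inertia probability and otherwise best-responds to its rivals' last-round total.

```python
import random

def cournot_br_with_inertia(n=3, a=100.0, b=1.0, c=10.0,
                            inertia=0.5, rounds=20000, seed=1):
    """Best-response dynamics with inertia in a linear Cournot game.

    Inverse demand P = a - b*Q, constant marginal cost c (illustrative
    values, not the paper's). Each round, with probability `inertia` a
    firm keeps its previous quantity; otherwise it plays the best
    response to its rivals' last-round total.
    """
    rng = random.Random(seed)
    q = [rng.uniform(0.0, (a - c) / b) for _ in range(n)]  # random start
    for _ in range(rounds):
        total = sum(q)
        nxt = []
        for i in range(n):
            if rng.random() < inertia:
                nxt.append(q[i])  # inertia: stick with previous quantity
            else:
                rivals = total - q[i]
                # best response: q_i = (a - c - b * rivals) / (2b), floored at 0
                nxt.append(max(0.0, (a - c - b * rivals) / (2.0 * b)))
        q = nxt
    return q

def cournot_nash(n=3, a=100.0, b=1.0, c=10.0):
    """Symmetric Nash quantity for the linear Cournot game."""
    return (a - c) / (b * (n + 1))
```

With `inertia=0.0` the simultaneous best-response process tends to oscillate around the Nash profile; with any positive inertia probability, play settles at the symmetric Nash quantity (a - c)/(b(n + 1)), which is 22.5 under these illustrative numbers.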
“…Figure 4(a) reveals that the subject simply got lucky. 14 This was a first-time player in the no-history setting, i.e., a player with very little information about the game. The reinforcement algorithm locked in at very low quantities, in the range of 10, and the subject roughly played a best response to that, which resulted in an average profit of 2117.…”
Section: Human Tactics
Citation type: mentioning (confidence: 99%)
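As a rough illustration of the mechanics in this excerpt, here is a minimal Roth–Erev-style reinforcement learner facing an opponent who always plays the myopic best response, in a linear Cournot market. The paper's actual algorithm, payoffs, and parameters are not given here, so every value below (demand intercept, cost, quantity grid, number of rounds) is an assumption; the sketch shows how cumulative propensities can concentrate a reinforcement learner on a narrow quantity range, not the experimental result itself.

```python
import random

def roth_erev_vs_best_response(a=100.0, b=1.0, c=10.0,
                               grid=range(0, 50, 5),
                               rounds=500, seed=0):
    """A Roth-Erev-style reinforcement learner vs. a best responder.

    All parameters are illustrative. The learner draws a quantity with
    probability proportional to its cumulative propensity; the opponent
    plays the myopic best response to that quantity; the chosen action's
    propensity then grows by the non-negative part of the realized profit.
    """
    rng = random.Random(seed)
    actions = list(grid)
    prop = {q: 1.0 for q in actions}  # equal initial propensities
    chosen = []
    for _ in range(rounds):
        # sample an action with probability proportional to its propensity
        r = rng.uniform(0.0, sum(prop.values()))
        acc = 0.0
        for q in actions:
            acc += prop[q]
            if r <= acc:
                break
        # opponent's myopic best response to the learner's quantity
        opp = max(0.0, (a - c - b * q) / (2.0 * b))
        price = a - b * (q + opp)
        profit = q * (price - c)
        prop[q] += max(profit, 0.0)   # reinforce the chosen quantity
        chosen.append(q)
    return prop, chosen
```

Because propensities only accumulate, early lucky draws are self-reinforcing, which is one way such an algorithm can lock in at quantities far from equilibrium, as described above.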