1996
DOI: 10.1006/game.1996.0088

Boundedly Rational Rule Learning in a Guessing Game

Abstract: We combine Nagel's "step-k" model of boundedly rational players with a "law of effect" learning model. Players begin with a disposition to use one of the step-k rules of behavior, and over time the players learn how the available rules perform and switch to better performing rules. We offer an econometric specification of this dynamic process and fit it to Nagel's experimental data. We find that the rule learning model vastly outperforms other nested and non-nested learning models. We find strong evidence fo…
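To make the learning dynamic described in the abstract concrete, the sketch below simulates law-of-effect rule learning over step-k rules in a 2/3-beauty-contest guessing game. It is an illustration under stated assumptions, not the paper's econometric specification: the multiplier p = 2/3, the logit choice over rule propensities, the payoff function, and all numerical constants are assumptions chosen for the example.

```python
# Illustrative sketch (assumed parameters throughout; not the paper's fitted model):
# players hold propensities over step-k rules, choose a rule by a logit response to
# those propensities, and reinforce each rule by how well it would have performed.
import numpy as np

P = 2.0 / 3.0                 # assumed target multiplier of the mean guess
RULES = [0, 1, 2, 3]          # step-k rules; step-0 guesses the midpoint 50

def rule_guess(k):
    """Step-k rule: best reply to the belief that others play step-(k-1)."""
    return 50.0 * P ** k

def payoff(guess, target):
    """Assumed payoff: closer to the target is better."""
    return -abs(guess - target)

rng = np.random.default_rng(0)
n_players, n_rounds, precision = 15, 4, 0.5
propensity = np.ones((n_players, len(RULES)))      # initial rule propensities

for t in range(n_rounds):
    # Logit choice: rules with higher propensity are used more often.
    logits = precision * (propensity - propensity.max(axis=1, keepdims=True))
    probs = np.exp(logits)
    probs /= probs.sum(axis=1, keepdims=True)
    chosen = np.array([rng.choice(RULES, p=probs[i]) for i in range(n_players)])
    guesses = np.array([rule_guess(k) for k in chosen])

    target = P * guesses.mean()

    # Law-of-effect style update (hypothetical-payoff variant): each rule's
    # propensity grows with the payoff it would have earned this round, so
    # better-performing rules are used more often in later rounds.
    for k in RULES:
        propensity[:, k] += payoff(rule_guess(k), target)

    print(f"round {t}: mean guess = {guesses.mean():.2f}, target = {target:.2f}")
```

Run over a few rounds, the mean guess drifts downward as higher-k rules begin to outperform lower-k ones, which is the qualitative pattern the rule-learning story is meant to capture.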

Cited by 175 publications (154 citation statements)
References 31 publications
“…The n>2 game and the last mentioned game are dominance solvable. A so-called level-k model has been applied by the above authors to successfully explain the observed behavior (see also Stahl, 1996, 1998, or a one-parameter hierarchical model by Camerer et al., 2004). Weakly dominant strategies are typically chosen in 10-20%.…”
Section: Introduction (mentioning)
confidence: 99%
“…One could presumably unify the models by mapping increasing steps of strategic thinking into increasing degrees of sophistication, but we have not done so. Or calling the thinking steps 'rules' and allowing players to learn in the domain of rules is a way of unifying the two (for example, Stahl, 1996). Since the models are so parsimonious there is no great saving in degrees of freedom by unifying them, but it would be important, both scientifically and practically, to know if there is a close link.…”
Section: Empirical Discipline (mentioning)
confidence: 99%
“…Self-tuning EWA generalizes some of them (though reinforcement with payoff variability adjustment is different; see Erev et al., 1999). Other approaches include rule learning (Stahl, 1996, 2000), and earlier artificial intelligence (AI) tools such as genetic algorithms or genetic programming to 'breed' rules (see Jehiel, forthcoming). Finally, there are no alternative models of strategic teaching that we know of but this is an important area others should examine.…”
Section: Notes (mentioning)
confidence: 99%
“…This does not involve feedback that passes through the player's environment. Notwithstanding these fundamental differences, there have been some recent attempts (see Stahl, 1996; Camerer and Ho, 1997) to combine the two. Starting with reinforcement learning, one could assume that a player does not only update the probability of choosing a certain action on the basis of the payoff actually realized, but that he also has enough knowledge about the structure of the game (similar to the assumption made with learning direction theory) to reason what the payoffs would have been for actions not actually chosen.…”
Section: Production: Learning Direction Theory (mentioning)
confidence: 99%
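The mechanism sketched in the excerpt above, reinforcing not only the action actually taken but also the actions that were available, given knowledge of the game's payoff structure, can be illustrated with a toy comparison. The payoff function, learning rate, and three-action setting below are assumptions made for illustration and are not taken from any of the cited papers.

```python
# Toy contrast (assumed payoffs and learning rate): classic reinforcement credits
# only the chosen action, while "foregone payoff" reinforcement also credits the
# actions not chosen with what they would have earned against the realized play.
import numpy as np

def reinforce_realized(propensity, chosen, realized_payoff, lr=0.1):
    """Classic reinforcement: only the chosen action's propensity is updated."""
    out = propensity.copy()
    out[chosen] += lr * realized_payoff
    return out

def reinforce_foregone(propensity, payoff_of_action, lr=0.1):
    """Hypothetical reinforcement: every action is credited with the payoff it
    would have earned against the opponents' realized choices."""
    out = propensity.copy()
    for a in range(len(out)):
        out[a] += lr * payoff_of_action(a)
    return out

# Example: 3 actions, the opponent happened to play action 2, and we earn 1 only by matching.
payoff_vs_opponent = lambda a: 1.0 if a == 2 else 0.0
p0 = np.zeros(3)
print(reinforce_realized(p0, chosen=0, realized_payoff=payoff_vs_opponent(0)))  # [0. 0. 0.]
print(reinforce_foregone(p0, payoff_vs_opponent))                               # [0.  0.  0.1]
```

The second update already "knows" that action 2 would have paid off, which is the extra informational assumption the excerpt attributes to combining reinforcement learning with knowledge of the game's structure.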