1998
DOI: 10.1006/jmps.1998.1214

On Learning To Become a Successful Loser: A Comparison of Alternative Abstractions of Learning Processes in the Loss Domain

Cited by 116 publications (58 citation statements: 7 supporting, 49 mentioning, 2 contrasting)
References 23 publications

“…For instance, Bereby-Meyer and Erev (1998) found quicker learning in a probability learning experiment when payoffs were +2/−2 rather than +4/0 or 0/−4. The natural reference point of zero EV should make it easier to spot that a payoff has moved, particularly if, as was the case in our experiments, participants see a running points total that makes it easy to see whether points are decreasing or increasing.…”
Section: Direction-of-change Effects (mentioning)
confidence: 99%
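The payoff conditions in that statement map onto a simple simulation. Below is a minimal sketch, assuming a binary probability-learning task with a 70/30 outcome and a basic reinforcement learner; the task parameters, the learner, and the positive propensity floor are illustrative assumptions, not the published design.

```python
import random

# The three payoff conditions named in the statement above: payoff for a
# correct prediction vs. an incorrect one.
CONDITIONS = {
    "gain":  (4, 0),     # +4 / 0
    "mixed": (2, -2),    # +2 / -2, straddling the zero reference point
    "loss":  (0, -4),    # 0 / -4
}

def run(condition, p_high=0.7, trials=400, seed=0):
    """Simple reinforcement learner predicting a binary event."""
    rng = random.Random(seed)
    win, lose = CONDITIONS[condition]
    prop = [10.0, 10.0]          # initial propensities, kept positive throughout
    correct = 0
    for _ in range(trials):
        p0 = prop[0] / (prop[0] + prop[1])
        choice = 0 if rng.random() < p0 else 1
        outcome = 0 if rng.random() < p_high else 1   # event 0 occurs 70% of the time
        payoff = win if choice == outcome else lose
        prop[choice] = max(prop[choice] + payoff, 0.1)  # floor keeps probabilities defined
        correct += choice == outcome
    return correct / trials

for cond in CONDITIONS:
    print(cond, round(run(cond), 3))
```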
“…In such environments, the argument goes, non-learning strategies get poor payoffs because the actors cannot respond to changing payoff structures by changing action. Actors that play an ESS in one situation, but cannot deal with changes to the environment, now do poorly against learners that reach this same ESS in the original situation and can re-adapt when necessary.…”
Section: The Evolution Of Learning (mentioning)
confidence: 99%
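As a toy illustration of this argument, the sketch below pits a fixed-strategy actor against a recency-weighted reinforcement learner in a two-action environment whose payoffs flip halfway through; the payoff values, forgetting rate, and propensity floor are all illustrative assumptions.

```python
import random

def simulate(trials=1000, switch_at=500, phi=0.05, seed=1):
    rng = random.Random(seed)
    fixed_total = learner_total = 0.0
    prop = [1.0, 1.0]                      # learner's action propensities
    for t in range(trials):
        payoffs = (1.0, 0.0) if t < switch_at else (0.0, 1.0)  # environment flips
        fixed_total += payoffs[0]          # non-learner always plays action 0
        p0 = prop[0] / (prop[0] + prop[1])
        a = 0 if rng.random() < p0 else 1
        learner_total += payoffs[a]
        # Recency-weighted reinforcement: old propensities decay, the chosen
        # action is credited with its payoff; a small floor keeps exploration alive.
        prop = [max(x * (1 - phi), 0.1) for x in prop]
        prop[a] += payoffs[a]
    return fixed_total, learner_total

print(simulate())   # the learner recovers after the flip; the fixed actor cannot
```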
“…I also explored a learning rule outlined by Barrett [3], which I call Barrett Learning. This rule is in some ways similar to the Adjustable Reference Point with Truncation learning introduced by Bereby-Meyer and Erev [5]. Actors using this rule discount past experience compared to more recent experience.…”
Section: Short Term Success and Simulation (mentioning)
confidence: 99%
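For readers unfamiliar with the rule, here is a minimal sketch of the general idea behind adjustable-reference-point learning with truncation: payoffs are evaluated against a reference point that drifts toward recent experience, and propensities are truncated from below so they stay positive even when every payoff is a loss. Parameter values and update details are illustrative assumptions, not the published ARP specification.

```python
import random

def run_arp(trials=300, w=0.1, floor=0.01, seed=2):
    rng = random.Random(seed)
    prop = [1.0, 1.0]    # propensities for two actions
    ref = 0.0            # adjustable reference point
    for _ in range(trials):
        p0 = prop[0] / (prop[0] + prop[1])
        a = 0 if rng.random() < p0 else 1
        # Loss-domain payoffs: action 0 loses less on average (illustrative values).
        payoff = rng.choice([-1, -3]) if a == 0 else rng.choice([-2, -4])
        # Reinforce relative to the reference point; truncate from below so
        # propensities stay positive even though every payoff is a loss.
        prop[a] = max(prop[a] + (payoff - ref), floor)
        ref = (1 - w) * ref + w * payoff   # reference point drifts toward recent payoffs
    return prop

print(run_arp())   # the less costly action should end up with the larger propensity
```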
“…More sophisticated learning dynamics also allow for punishment and forgetting. They typically do much better than simple Herrnstein reinforcement in games like those discussed in this paper [Barrett and Zollman (2009)], and, saliently, they often model the actual behavior of learners much better [Roth and Erev (1995)] [Bereby-Meyer and Erev (1998)]. The methodological thought is that Herrnstein reinforcement learning requires only relatively weak dispositional resources, and if it allows for successful coordinated action in a particular context, then one can expect a broad class of more sophisticated reinforcement dynamics to allow for similar success.…”
(mentioning)
confidence: 99%
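As a concrete contrast, the sketch below places a simple cumulative (Herrnstein-style) update next to a Roth-Erev style update with forgetting (phi) and local experimentation (eps); the parameter values and the propensity floor are illustrative assumptions, not the published specification.

```python
def herrnstein_update(prop, action, payoff):
    """Cumulative reinforcement: credit the chosen action, never forget."""
    prop[action] += payoff
    return prop

def roth_erev_update(prop, action, payoff, phi=0.1, eps=0.2):
    """Reinforcement with forgetting (phi) and experimentation (eps):
    old propensities decay, and a share of the reinforcement spills over
    to unchosen actions. Negative payoffs act as punishment."""
    n = len(prop)
    for j in range(n):
        share = payoff * (1 - eps) if j == action else payoff * eps / (n - 1)
        prop[j] = max((1 - phi) * prop[j] + share, 0.01)  # floor is an assumption
    return prop

prop = [1.0, 1.0, 1.0]
print(roth_erev_update(prop, action=0, payoff=2.0))  # -> [2.5, 1.1, 1.1] up to rounding
```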