2020
DOI: 10.1101/2020.09.12.294512
Preprint

A new model of decision processing in instrumental learning tasks

Abstract: Learning and decision making are interactive processes, yet cognitive modelling of error-driven learning and decision making has largely evolved separately. Recently, evidence accumulation models (EAMs) of decision making and reinforcement learning (RL) models of error-driven learning have been combined into joint RL-EAMs that can in principle address these interactions. However, we show that the most commonly used combination, based on the diffusion decision model (DDM) for binary choice, consistently fails …

Cited by 8 publications (16 citation statements); references 101 publications.
“…DDMs fall under the umbrella of evidence accumulation models, which describe the process of action selection and resulting reaction times as a biased random walk with a drift and white noise. This approach has been extended to multi-choice tasks in so-called race diffusion models, where instead of having one accumulator as in the DDM, each available choice option is associated with a different accumulator (Fontanesi et al., 2019; Miletić et al., 2021).…”
Section: Discussion
confidence: 99%
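To make the mechanism in this quote concrete, here is a minimal simulation sketch of a race diffusion model: one accumulator per choice option, each evolving as a biased random walk (drift plus white noise), with the first accumulator to reach a common threshold determining the choice and response time. All parameter names and values below are illustrative assumptions, not estimates from any of the cited papers.

import numpy as np

def simulate_race(drifts, threshold=1.0, noise_sd=1.0, dt=0.001,
                  t0=0.2, max_t=5.0, rng=None):
    """Simulate one trial of a race between independent diffusion accumulators.

    drifts    : drift rate for each choice option (one accumulator per option)
    threshold : evidence level that triggers a response
    noise_sd  : standard deviation of the within-trial white noise
    dt        : Euler-Maruyama integration step
    t0        : non-decision time added to the first-passage time
    """
    rng = np.random.default_rng() if rng is None else rng
    drifts = np.asarray(drifts, dtype=float)
    x = np.zeros_like(drifts)                  # evidence starts at zero
    for step in range(1, int(max_t / dt) + 1):
        # biased random walk: deterministic drift plus Gaussian (white) noise
        x += drifts * dt + noise_sd * np.sqrt(dt) * rng.standard_normal(x.shape)
        winners = np.flatnonzero(x >= threshold)
        if winners.size:                       # first accumulator at the bound wins
            return int(winners[np.argmax(x[winners])]), t0 + step * dt
    return None, None                          # no bound reached within max_t

choice, rt = simulate_race(drifts=[1.5, 0.8, 0.8])   # three-alternative example

With a single accumulator and two absorbing bounds instead of a race between several, the same integration scheme reduces to the standard DDM for binary choice.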
“…The evidence accumulation reaction time modeling approach has recently been combined with reinforcement learning models to provide joint instrumental learning and reaction time models (Milosavljevic et al., 2010; Pedersen et al., 2017; Fontanesi et al., 2019; Miletić et al., 2021). Here, internal variables from the reinforcement learning agent, particularly expected rewards (Q-values), are usually mapped to variables in the evidence accumulation model, such as the drift rate, e.g.…”
Section: Discussion
confidence: 99%
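A minimal sketch of that mapping, assuming the common choice of setting each trial's drift rate proportional to the difference in Q-values and updating the chosen option with a delta rule. The scaling parameter m, the learning rate alpha, and the simulator itself are illustrative assumptions rather than the exact specification of any of the cited models.

import numpy as np

def ddm_trial(v, a=1.0, z=0.5, noise_sd=1.0, dt=0.001, t0=0.2, rng=None):
    """One DDM trial: returns (choice, rt), with choice 0 = upper bound reached."""
    rng = np.random.default_rng() if rng is None else rng
    x = z * a                                  # relative starting point in [0, a]
    t = 0.0
    while 0.0 < x < a:
        x += v * dt + noise_sd * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return (0 if x >= a else 1), t0 + t

def rl_ddm_block(rewards, alpha=0.1, m=2.0, rng=None):
    """Two-option instrumental learning block; rewards has shape (n_trials, 2)."""
    rng = np.random.default_rng() if rng is None else rng
    q = np.zeros(2)                            # expected rewards (Q-values)
    history = []
    for r in rewards:
        v = m * (q[0] - q[1])                  # Q-value difference sets the drift rate
        choice, rt = ddm_trial(v, rng=rng)
        q[choice] += alpha * (r[choice] - q[choice])   # delta-rule update of chosen option
        history.append((choice, rt, q.copy()))
    return history

For example, rl_ddm_block(np.random.default_rng(1).binomial(1, [0.8, 0.2], size=(100, 2))) simulates a block in which option 0 pays off more often, so its Q-value, and with it the drift toward the corresponding bound, grows over trials.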
“…The evidence from successive samples is summed, or accumulated, through time until a criterion amount of evidence for one response alternative is accrued, initiating a behavioral response for that alternative. The success of this theoretical framework is reflected in the breadth of domains the models have been applied to, such as evaluating the optimality of decision policies (Bogacz et al., 2006; Drugowitsch et al., 2012; Evans and Brown, 2017; Evans et al., 2018; Starns and Ratcliff, 2012), stop signal paradigms (Matzke et al., 2013, 2017a), Go/No-Go paradigms (Gomez et al., 2007; Ratcliff et al., 2018), multi-attribute and many-alternatives choice (Diederich, 2019, 2021; Kvam, 2019; Roe et al., 2001; Trueblood et al., 2014; Usher and McClelland, 2004), learning strategies (Fontanesi et al., 2019; Miletić et al., 2021; Pedersen et al., 2017; Sewell et al., 2019; Sewell and Stallman, 2020), attentional choice (Krajbich et al., 2010, 2012; Gluth et al., 2020), continuous responses (Ratcliff, 2018; Smith, 2016), neural processes (Gold and Shadlen, 2007), and so on.…”
Section: Introduction
confidence: 99%
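The accumulation rule in the first sentence of this quote can be written, in a standard discrete-time form (conventional symbols, not this paper's notation), as

x_{t+\Delta t} = x_t + v\,\Delta t + s\sqrt{\Delta t}\,\varepsilon_t, \qquad \varepsilon_t \sim \mathcal{N}(0, 1),

where v is the drift rate, s the noise scale, and the evidence x is summed until it first reaches the criterion a for one alternative (or 0 for the other), at which point the corresponding response is initiated; the observed response time additionally includes a non-decision component t_0.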