2023 | DOI: 10.1073/pnas.2221415120

Reward expectations direct learning and drive operant matching in Drosophila

Adithya E. Rajagopalan,
Ran Darshan,
Karen L. Hibbard
et al.

Abstract: Foraging animals must use decision-making strategies that dynamically adapt to the changing availability of rewards in the environment. A wide diversity of animals do this by distributing their choices in proportion to the rewards received from each option, Herrnstein’s operant matching law. Theoretical work suggests an elegant mechanistic explanation for this ubiquitous behavior, as operant matching follows automatically from simple synaptic plasticity rules acting within behaviorally relevant neural circuits…
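For a two-alternative task, Herrnstein’s matching law can be written compactly; the notation below is illustrative rather than taken from the paper, with C_i the number of choices of option i and R_i the number of rewards collected from it:

\frac{C_1}{C_1 + C_2} = \frac{R_1}{R_1 + R_2}

In words, the fraction of choices allocated to an option matches the fraction of rewards it has delivered.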

Cited by 4 publications (10 citation statements). References: 83 publications.

The citation statements below are from: Model-based inference of synaptic plasticity rules. Mehta, Tyulmankov, Rajagopalan et al., 2023 (preprint).
“…For this proof-of-principle, our ground-truth network architecture (Figure 3A, top) is inspired by recent studies that have successfully mapped observed behaviors to plasticity rules in the mushroom body (MB) of the fruit fly Drosophila melanogaster (Aso & Rubin, 2016; Rajagopalan et al, 2023; Li et al, 2020; Modi et al, 2020; Davis, 2023). In particular, this work indicates that the difference between received and expected reward information is instrumental in mediating synaptic plasticity (Rajagopalan et al, 2023), and that learning and forgetting happen on comparable timescales (Aso & Rubin, 2016).…”
Section: Inferring Plasticity Rules From Behavior (confidence: 99%)
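One illustrative way to write a rule with both of these ingredients, a reward-prediction-error term plus forgetting on a comparable timescale, is the following; the symbols \eta (learning rate), \lambda (forgetting rate), and \bar{r} (expected reward) are notation introduced here, not taken from either paper:

\Delta w_{ij} = \eta \, (r - \bar{r}) \, x_j - \lambda \, w_{ij}

Choosing \lambda on the same order as the effective learning rate captures the idea that associations are formed and erased on similar timescales.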
“…Plasticity occurs exclusively between the input and output layers. We simulate a covariance-based learning rule (Loewenstein & Seung, 2006) known from previous experiments (Rajagopalan et al, 2023). The change in synaptic weight Δw_ij is determined by the presynaptic input x_j, and a global reward signal r.…”
Section: Inferring Plasticity Rules From Behavior (confidence: 99%)
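A minimal sketch of one common form of such a covariance rule, Δw_ij = η · x_j · (r − r̄), where r̄ is a running estimate of the expected reward; the learning rate eta, the moving-average constant alpha, and the toy inputs below are illustrative choices, not parameters taken from either paper:

import numpy as np

def covariance_update(w, x, r, r_bar, eta=0.05):
    # Covariance-style update: the weight change is the presynaptic input
    # times the deviation of the global reward from its current expectation.
    return w + eta * (r - r_bar) * x

# Toy usage with random presynaptic activity and a stochastic binary reward.
rng = np.random.default_rng(0)
n_inputs = 5
w = np.zeros(n_inputs)             # weights onto a single output neuron
r_bar, alpha = 0.0, 0.1            # expected reward and its smoothing constant
for _ in range(200):
    x = rng.random(n_inputs)       # presynaptic input x_j
    r = float(rng.random() < 0.3)  # global reward signal r
    w = covariance_update(w, x, r, r_bar)
    r_bar = (1 - alpha) * r_bar + alpha * r  # update the reward expectation

Because the update is proportional to r − r̄, the weights change only to the extent that the reward deviates from its expectation.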