2020
DOI: 10.1101/2020.08.06.239533
Preprint

Confidence-controlled Hebbian learning efficiently extracts category membership from stimuli encoded in view of a categorization task

Abstract: In experiments on perceptual decision-making, individuals learn a categorization task through trial-and-error protocols. We explore the capacity of a decision-making attractor network to learn a categorization task through reward-based, Hebbian-type modifications of the weights incoming from the stimulus-encoding layer. For the latter, we assume a standard layer of a large number of stimulus-specific neurons. Within the general framework of Hebbian learning, authors have hypothesized that the learning rate is…
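The rule described in the abstract can be made concrete with a minimal sketch. Everything below is an illustrative assumption rather than the authors' exact model: the network size, the winner-take-all stand-in for the attractor dynamics, and the specific form of the confidence gate are all placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

n_enc = 100   # number of stimulus-encoding neurons (assumed)
W = rng.normal(0.0, 0.1, size=(2, n_enc))  # weights onto two decision populations

def trial(r_enc, category, eta0=0.05):
    """One reward-based, confidence-controlled Hebbian update (sketch).

    r_enc    : encoding-layer firing rates for this stimulus
    category : correct label (0 or 1), revealed as reward feedback
    eta0     : base learning rate
    """
    # Stand-in for the attractor dynamics: the population with the
    # larger summed input wins the competition.
    h = W @ r_enc
    choice = int(np.argmax(h))

    # Confidence proxy: normalized gap between the two inputs
    # (assumed form; the paper derives confidence from the network itself).
    confidence = abs(h[0] - h[1]) / (abs(h[0]) + abs(h[1]) + 1e-9)

    # Binary reward from trial-and-error feedback.
    reward = 1.0 if choice == category else -1.0

    # Confidence-controlled learning rate: high-confidence errors
    # drive the largest updates, high-confidence successes the smallest.
    eta = eta0 * (1.0 - reward * confidence) / 2.0

    # Reward-modulated Hebbian update of the weights onto the chosen unit.
    W[choice] += eta * reward * r_enc
    return choice, reward

# Example trial: a stimulus driving mostly the first half of the layer.
r_enc = np.clip(rng.normal(1.0, 0.2, n_enc) * (np.arange(n_enc) < 50), 0, None)
print(trial(r_enc, category=0))
```

In this sketch, high-confidence errors produce the largest weight changes and high-confidence correct trials the smallest, which is one simple way a confidence signal can control a reward-modulated Hebbian learning rate.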


Cited by 2 publications (3 citation statements)
References 74 publications (105 reference statements)
“…Importantly, our theoretical analysis provides foundations for biological mechanisms of surprise computation and surprise-modulated learning (Berlemont & Nadal, 2021; Frémaux & Gerstner, 2016; Iigaya, 2016; Soltani & Izquierdo, 2019). For example, it has been argued that the computation of observation-mismatch surprise measures is biologically more plausible than more abstract measures such as Shannon surprise (Iigaya, 2016).…”
Section: Summary and Intermediate Discussion (I)
Mentioning (confidence: 96%)
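The contrast drawn in this statement between observation-mismatch and Shannon surprise can be illustrated numerically. A minimal sketch assuming a scalar Gaussian predictive model; the absolute-prediction-error form of the mismatch measure is one common instantiation, not necessarily the one Iigaya (2016) uses.

```python
import numpy as np

def shannon_surprise(x, mu, sigma):
    """Shannon surprise: -log p(x) under the current predictive model
    (here a Gaussian prediction with mean mu and width sigma)."""
    log_p = -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))
    return -log_p

def mismatch_surprise(x, mu, sigma):
    """Observation-mismatch surprise: scaled distance between observation
    and prediction; no probability needs to be represented or evaluated."""
    return abs(x - mu) / sigma

for x in (0.0, 1.0, 3.0):
    print(f"x={x}: Shannon={shannon_surprise(x, 0.0, 1.0):.3f}, "
          f"mismatch={mismatch_surprise(x, 0.0, 1.0):.3f}")
```

The mismatch measure needs only the prediction and the observation, whereas Shannon surprise requires evaluating the log of a full probability density, which is the sense in which the former is argued to be more biologically plausible.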
“…(see Fiser et al., 2010; Knill and Pouget, 2004; Soltani and Wang, 2010 for examples of neural models of probabilistic inference), 'how can surprise-modulated adaptive learning be implemented at the level of synaptic plasticity?' (Berlemont & Nadal, 2021; Gerstner et al., 2018; Iigaya, 2016; Illing et al., 2021), and 'how can surprise-seeking exploration strategies be implemented in the brain?' (see Basanisi et al., 2020 for an example).…”
Section: Discussion
Mentioning (confidence: 99%)
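The synaptic-plasticity question quoted here is often posed as a three-factor rule, in which a global neuromodulatory surprise signal gates a local Hebbian term (the framing of Frémaux & Gerstner, 2016, and Gerstner et al., 2018). A minimal sketch follows; the saturating gate is an assumed form, not a rule from the cited works.

```python
import numpy as np

def surprise_gated_update(w, pre, post, surprise, eta0=0.01, s0=1.0):
    """Three-factor Hebbian update: a global surprise signal scales the
    local pre*post correlation term.

    w        : vector of synaptic weights (one per presynaptic unit)
    pre      : presynaptic activity vector
    post     : postsynaptic activity (scalar)
    surprise : global neuromodulatory surprise signal (>= 0)
    """
    # Saturating gate (assumed form): the effective learning rate grows
    # with surprise, approaching eta0 for very surprising events.
    gate = surprise / (surprise + s0)
    return w + eta0 * gate * post * pre

w = np.zeros(5)
pre = np.array([1.0, 0.5, 0.0, 0.2, 0.8])
print(surprise_gated_update(w, pre, post=1.0, surprise=2.0))   # large update
print(surprise_gated_update(w, pre, post=1.0, surprise=0.05))  # small update
```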
“…Depending on the constraints specific to the system under consideration, minimization of the cost leads to a neural code such that F_code(x) is some increasing function of F_cat(x). For some constraints one gets F_code(x) ∝ F_cat(x) as the optimum, but other constraints may lead to other relationships; see Bonnasse-Gahot and Nadal (2008) and Berlemont and Nadal (2020). Efficient coding with respect to optimal classification is thus obtained by essentially matching the two metrics.…”
Section: Categorical Perception: Empirical and Theoretical Studies
Mentioning (confidence: 99%)
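For reference, the two quantities in this passage are Fisher informations. The definitions below are a restatement of the framework of Bonnasse-Gahot and Nadal (2008) as best it can be reconstructed here, and should be checked against the cited works.

```latex
\documentclass{article}
\usepackage{amsmath, amssymb}
\begin{document}
Categorical Fisher information of stimulus $x$, with $P(\mu \mid x)$ the
probability of category $\mu$ given $x$:
\[
  F_{\mathrm{cat}}(x) = \sum_{\mu}
    \frac{\bigl(\partial_x P(\mu \mid x)\bigr)^2}{P(\mu \mid x)} .
\]
Fisher information of the neural code $r$ about $x$:
\[
  F_{\mathrm{code}}(x) =
    \mathbb{E}_{r \mid x}\!\left[ \bigl(\partial_x \ln p(r \mid x)\bigr)^2 \right].
\]
The matching condition quoted above: under some constraints the optimum is
\[
  F_{\mathrm{code}}(x) \propto F_{\mathrm{cat}}(x),
\]
so coding resources concentrate where category membership changes fastest,
i.e., near the class boundary.
\end{document}
```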