2021
DOI: 10.1016/j.neuron.2021.02.004

Neural state space alignment for magnitude generalization in humans and recurrent networks

Abstract: A prerequisite for intelligent behaviour is to understand how stimuli are related and to generalise this knowledge across contexts. Generalisation can be challenging when relational patterns are shared across contexts but exist on different physical scales. Here, we studied neural representations in humans and recurrent neural networks performing a magnitude comparison task, for which it was advantageous to generalise concepts of "more" or "less" between contexts. Using multivariate analysis of human brain sig…
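The task structure described in the abstract can be made concrete with a small sketch. The snippet below is a hypothetical illustration, not the authors' actual stimuli or parameters: it sets up a two-context magnitude comparison task in which the same relational rule ("is the first item more or less than the second?") applies across contexts that differ in physical scale, so a decision rule based on relative magnitude transfers between contexts.

```python
# Hypothetical sketch of a two-context magnitude comparison task.
# Ranges, trial counts, and context names are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

def make_trials(low, high, n=100):
    """Sample pairs of magnitudes from one context and label which is larger."""
    a = rng.uniform(low, high, n)
    b = rng.uniform(low, high, n)
    labels = (a > b).astype(int)      # 1 = "first is more", 0 = "first is less"
    return np.stack([a, b], axis=1), labels

# Two contexts sharing the same relational structure on different physical scales.
small_x, small_y = make_trials(1.0, 10.0)        # "small" context
large_x, large_y = make_trials(100.0, 1000.0)    # "large" context

# A rule that reads out only the sign of the difference generalises across scales.
relative_rule = lambda x: (x[:, 0] - x[:, 1] > 0).astype(int)
print("small-context accuracy:", (relative_rule(small_x) == small_y).mean())
print("large-context accuracy:", (relative_rule(large_x) == large_y).mean())
```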

Cited by 56 publications (57 citation statements). References 44 publications (51 reference statements).
“…Bayesian inference models in which latent causes are inferred and used to group together experiences (Gershman and Niv, 2010; Gershman et al., 2015; Sanders et al., 2020; Niv, 2019) could also be a means by which participants learn to group different contexts together in practice. Another possibility is that neural geometry actively represents the relational organisation of task elements (Bernardi et al., 2020; Luyckx et al., 2019; Sheahan et al., 2021). However, our encoding model did not find a smoothly-varying relationship between neural coding and probability, which would seem to follow naturally from such a representation.…”
Section: Discussion
confidence: 76%
“…Under suitable assumptions about noise, this scaling effect may provide a natural explanation for the scalar variability in time interval judgments (95). Similarly, this invariance may allow a criterion for categorical judgments to readily generalize to new stimuli (96). In our own work, the rapid adjustments of the neural speed in the RSGadapt learning experiment may have benefited from this representational invariance to perform rapid directed explorations in the neural space (e.g.…
Section: Discussion
confidence: 99%
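The scalar variability mentioned in this excerpt refers to the observation that the spread of timing judgments grows in proportion to the interval being judged (a constant coefficient of variation). The snippet below is a hypothetical numerical illustration of that property, not the cited model: it simply shows that multiplicative noise on a speed or gain parameter produces standard deviations proportional to the mean interval.

```python
# Illustration of scalar variability from multiplicative gain noise (assumed model).
import numpy as np

rng = np.random.default_rng(1)
gain_noise = rng.normal(1.0, 0.1, size=10_000)   # noisy multiplicative "neural speed"

for target in (0.5, 1.0, 2.0):                   # target intervals in seconds
    produced = target * gain_noise
    print(f"target {target:.1f}s: mean {produced.mean():.3f}s, "
          f"sd {produced.std():.3f}s, CV {produced.std() / produced.mean():.3f}")
# The coefficient of variation (CV) stays roughly constant across targets.
```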
“…The comparison of two numbers is a task often considered in cognitive science (see, e.g., Sheahan et al., 2021, for a recent example). We wondered if an SNN (spiking neural network) could learn to perform the comparison task on expressions with numbers encoded as bit strings.…”
Section: Spike-based Symbolic Computation On Bit Strings
confidence: 99%
“…We performed two experiments; both show that rather small, generic SNNs can be trained to perform demanding sequence-processing tasks. In the first experiment, we considered the number comparison task described above, a task that has often been used to study number processing in cognitive science (Sheahan et al., 2021). For simplicity, we used binary numbers instead of the previously described Arabic numerals, since the same sequence-processing rules for comparison apply when comparing numbers in any base.…”
Section: Symbolic Computation
confidence: 99%
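The claim that the same comparison rule works in any base can be illustrated with a short sketch. The function below is not the cited SNN implementation; it is a plain-Python rendering of the digit-by-digit, most-significant-digit-first rule such a network would need to learn.

```python
# Plain-Python illustration of the sequential comparison rule referenced above;
# this is an assumed sketch of the logic, not the cited spiking-network model.
def compare_msb_first(a: str, b: str) -> str:
    """Compare two equal-length digit strings, most significant digit first.

    The rule is base-agnostic: scan left to right and decide at the first
    position where the digits differ.
    """
    assert len(a) == len(b), "pad the shorter string with leading zeros first"
    for da, db in zip(a, b):
        if da != db:
            return "a > b" if da > db else "a < b"
    return "a == b"

# The same rule handles binary and decimal strings alike.
print(compare_msb_first("1011", "1001"))  # a > b
print(compare_msb_first("0473", "0519"))  # a < b
```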