2022
DOI: 10.48550/arxiv.2209.07431
Preprint
Compositional generalization through abstract representations in human and artificial neural networks

Abstract: Humans have a remarkable ability to rapidly generalize to new tasks that is difficult to reproduce in artificial learning systems. Compositionality has been proposed as a key mechanism supporting generalization in humans, but evidence of its neural implementation and impact on behavior is still scarce. Here we study the computational properties associated with compositional generalization in both humans and artificial neural networks (ANNs) on a highly compositional task. First, we identified behavioral signat…

Cited by 7 publications (10 citation statements)
References 30 publications
“…By contrast, we observed that neural manifolds representing space were highly aligned across contexts in most brain regions. This resembles the “neural structure alignment” that has recently been reported to accompany decision tasks in both humans and monkeys, whereby contexts sharing common structure are represented with parallel neural geometries, potentially because this allows a decoder trained in one context to be generalised to the other [46, 51].…”
Section: Discussion
confidence: 99%
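The "parallel neural geometries" idea quoted above — a decoder trained in one context transfers to another because both contexts encode the variable along a shared axis, differing only by an offset — can be illustrated with a toy simulation. This is a minimal sketch under assumed synthetic data, not the paper's analysis; all names and dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 20

# Shared coding direction: the stimulus variable is encoded along the same
# axis in both contexts ("parallel geometry"); each context adds its own
# offset vector.
coding_axis = rng.normal(size=d)
offset_a = rng.normal(size=d) * 2.0
offset_b = rng.normal(size=d) * 2.0

def make_context(offset):
    labels = rng.integers(0, 2, size=n)               # binary stimulus variable
    signal = np.outer(2 * labels - 1, coding_axis)    # +/- along the shared axis
    noise = rng.normal(scale=0.5, size=(n, d))
    return signal + offset + noise, labels

Xa, ya = make_context(offset_a)
Xb, yb = make_context(offset_b)

# Train a least-squares linear decoder in context A only
# (mean-centring each context removes its offset).
Xa_c = Xa - Xa.mean(axis=0)
w = np.linalg.lstsq(Xa_c, 2 * ya - 1, rcond=None)[0]

# Test in context B: accuracy well above chance indicates the decoder
# transfers across contexts because the geometries are parallel.
Xb_c = Xb - Xb.mean(axis=0)
acc = np.mean((Xb_c @ w > 0) == (yb == 1))
print(f"cross-context accuracy: {acc:.2f}")
```

If the two contexts instead used unrelated coding axes (rotated geometries), the same decoder would drop to chance in context B, which is the contrast the quoted passage draws.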
“…How can a neural or biological network efficiently encode multiple variables simultaneously [14, 28]? One solution is to encode variables in an abstract format so they can be reused in novel situations to facilitate generalization and compositionality [21, 29–33]. Here, we show that in the human brain, such a disentangled representation emerged as a function of learning to perform inference in our task.…”
Section: Discussion
confidence: 99%
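The "abstract format" reuse described above can also be sketched: if two variables are encoded along orthogonal (disentangled) axes, a readout for one variable trained on a subset of variable combinations generalizes to a combination never seen in training. Again a hypothetical numpy sketch with illustrative names, not the study's actual method.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 16

# Disentangled code: binary variables A and B are encoded along orthonormal
# axes, so each can be read out independently and reused compositionally.
axes = np.linalg.qr(rng.normal(size=(d, 2)))[0].T
ax_a, ax_b = axes

def encode(a, b, n=100):
    pts = np.outer(np.full(n, 2 * a - 1), ax_a) + np.outer(np.full(n, 2 * b - 1), ax_b)
    return pts + rng.normal(scale=0.3, size=(n, d))

# Train a decoder for variable A on only three of the four (a, b)
# combinations; (a=1, b=1) is held out entirely.
train = [(0, 0), (0, 1), (1, 0)]
X = np.vstack([encode(a, b) for a, b in train])
y = np.concatenate([np.full(100, a) for a, b in train])
w = np.linalg.lstsq(X - X.mean(axis=0), 2 * y - 1, rcond=None)[0]

# The readout generalizes to the never-seen combination because the code
# for A does not depend on the value of B.
X_new = encode(1, 1)
acc = np.mean((X_new - X.mean(axis=0)) @ w > 0)
print(f"accuracy on held-out combination: {acc:.2f}")
```

With an entangled code (one idiosyncratic pattern per combination) the held-out combination would be unreadable, which is why disentanglement is linked to compositional generalization in the quoted passage.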
“…the ability to conceptualize prior experience in terms of components that can be re-configured in a novel situation [167, 168]. More broadly, compositional generalization has long been understood to be a crucial component of human-like learning and generalization [163], in large part due to the diversity and breadth of its applications: compositional generalization often involves the composition of many rules, relations, or attributes [163, 169–171]. The comparative simplicity of TI enabled us to identify how minimally structured learning systems can implement the inductive biases needed for this task.…”
Section: Discussion
confidence: 99%