Systematic Generalization: What Is Required and Can It Be Learned?
2018 · Preprint
DOI: 10.48550/arxiv.1811.12889

Abstract: Numerous models for grounded language understanding have recently been proposed, including (i) generic models that can be easily adapted to any given task and (ii) intuitively appealing modular models that require background knowledge to be instantiated. We compare both types of models in how much they lend themselves to a particular form of systematic generalization. Using a synthetic VQA test, we evaluate which models are capable of reasoning about all possible object pairs after training on only a small sub…

Cited by 19 publications (34 citation statements) · References 22 publications
“…A number of recent approaches involve generating synthetic datasets to evaluate the compositional generalization of neural models [10,11,13,28,29,30,31,32]. For instance, [31] proposed CLOSURE, a set of unseen testing splits for the CLEVR dataset [10] that contain synthetically generated, natural-looking questions about 3D geometric objects.…”
Section: Related Work
confidence: 99%
“…Lake and Baroni (2017) or Loula, Baroni, and Lake (2018)). This happens because neural networks often latch on to dataset-specific regularities instead of distilling syntactic rules in the form of logical formulas (Bahdanau et al., 2018).…”
Section: Related Work
confidence: 99%
“…To test the generalization properties of different models, Bahdanau et al. (2018) created the Spatial Queries on Object Pairs (SQOOP) dataset, a visual question answering task whose main challenge is the need to generalize to previously unseen combinations of known objects and relations. This task is easy for a human, since we can readily decide whether the statement "there is a golf ball under the hat" is true even if we have never seen the combination of golf ball and hat in one sentence before.…”
Section: Visual Question Answering
confidence: 99%
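The SQOOP construction described above — training on only a few right-hand objects per left-hand object and testing on all remaining pairs — can be sketched as a simple split generator. This is an illustrative reconstruction, not the dataset's actual code: the object and relation vocabularies and the `rhs_per_lhs` parameter are hypothetical placeholders.

```python
import random

# Illustrative vocabularies (not the real SQOOP object/relation sets).
OBJECTS = ["golf ball", "hat", "book", "cup", "shoe"]
RELATIONS = ["left_of", "right_of", "above", "below"]

def sqoop_style_split(objects, relations, rhs_per_lhs=1, seed=0):
    """Hold out most object pairs: each left-hand object is trained with
    only `rhs_per_lhs` right-hand objects; every remaining (lhs, rhs)
    combination appears exclusively in the test set."""
    rng = random.Random(seed)
    train, test = [], []
    for lhs in objects:
        others = [o for o in objects if o != lhs]
        seen = set(rng.sample(others, rhs_per_lhs))  # pairs allowed in training
        for rhs in others:
            for rel in relations:
                triple = (lhs, rel, rhs)
                (train if rhs in seen else test).append(triple)
    return train, test

train, test = sqoop_style_split(OBJECTS, RELATIONS)
# Every test triple pairs objects never combined during training, so a
# model can only succeed by recombining known parts systematically.
```

With 5 objects, 4 relations, and one training right-hand object per left-hand object, this yields 20 training triples and 60 test triples, and the train/test object-pair sets are disjoint by construction.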
“…Later, Socher et al. (2013) showed their effectiveness on sentiment analysis tasks. Recent work has demonstrated that recursive composition of sentences is crucial to systematic generalisation (Bowman et al., 2015; Bahdanau et al., 2018). Such architectures have also been shown to handle syntax-sensitive dependencies better for language-related tasks.…”
Section: Related Work
confidence: 99%
“…Despite being successful in language generation tasks, recurrent neural networks (RNNs; Elman, 1990) fail at tasks that explicitly require and test compositional behavior (Lake and Baroni, 2017; Loula et al., 2018). In particular, Bowman et al. (2015), and later Bahdanau et al. (2018), give evidence that, by exploiting the appropriate compositional structure of the task, models can generalize better to out-of-distribution test examples. Results from Andreas et al. (2016) also indicate that recursively composing smaller modules yields better representations.…”
Section: Introduction
confidence: 99%

Ordered Memory
Shen, Tan, Hosseini et al., 2019 · Preprint