2020 · Preprint
DOI: 10.48550/arxiv.2006.08698

Extrapolatable Relational Reasoning With Comparators in Low-Dimensional Manifolds

Abstract: While modern deep neural architectures generalise well when test data is sampled from the same distribution as training data, they fail badly when the test data distribution differs from the training distribution, even along a few dimensions. This lack of out-of-distribution generalisation is increasingly manifested when the tasks become more abstract and complex, such as in relational reasoning. In this paper we propose a neuroscience-inspired inductive-biased module that can be readily amalgamated w…

Cited by 1 publication (1 citation statement)
References 33 publications
“…However, these models typically require very large training sets (on the order of 10⁶ training examples), and generally fail to generalize outside of the very specific conditions under which they are trained. For example, state-of-the-art performance on the extrapolation regime of the PGM dataset, in which test problems contain feature values outside the range of those observed in the training set, is currently 25.9% [38], and state-of-the-art performance on other out-of-distribution generalization regimes (held-out shape-color, held-out line-type, etc.) is comparably poor [2,37].…”
Section: Raven's Progressive Matrices and Deep Learning
Mentioning (confidence: 99%)