2020
DOI: 10.48550/arxiv.2007.03629
Preprint

Strong Generalization and Efficiency in Neural Programs

Abstract: We study the problem of learning efficient algorithms that strongly generalize in the framework of neural program induction. By carefully designing the input/output interfaces of the neural model and through imitation, we are able to learn models that produce correct results for arbitrary input sizes, achieving strong generalization. Moreover, by using reinforcement learning, we optimize for program efficiency metrics, and discover new algorithms that surpass the teacher used in imitation. With this, our approach…
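As a rough illustration of the two-phase recipe the abstract describes (imitation of a teacher algorithm, then reinforcement learning against a program-efficiency metric), here is a minimal, hypothetical sketch on a toy sorting task. The environment, features, teacher, and hyperparameters below are illustrative assumptions, not the paper's actual model or interface design.

```python
# Hypothetical sketch: behavior cloning of a teacher algorithm, then REINFORCE
# fine-tuning with a step-count penalty as the "efficiency" reward.
import numpy as np

rng = np.random.default_rng(0)
N = 3                      # length of the toy list to sort (assumption)
ACTIONS = (N - 1) + 1      # swap at position 0..N-2, or STOP (last index)

def features(state):
    # Hypothetical interface: pairwise "out of order" indicators plus a bias term.
    f = [1.0 if state[i] > state[i + 1] else 0.0 for i in range(N - 1)]
    return np.array(f + [1.0])

def teacher_action(state):
    # Teacher = bubble-sort style: swap the leftmost out-of-order pair, else STOP.
    for i in range(N - 1):
        if state[i] > state[i + 1]:
            return i
    return ACTIONS - 1

def step(state, action):
    state = list(state)
    if action < N - 1:
        state[action], state[action + 1] = state[action + 1], state[action]
    return state

def policy_probs(W, state):
    logits = W @ features(state)
    z = np.exp(logits - logits.max())
    return z / z.sum()

W = np.zeros((ACTIONS, N))          # (N-1) indicators + bias = N features

# Phase 1: imitation (behavior cloning on teacher trajectories).
for _ in range(2000):
    state = list(rng.permutation(N))
    for _ in range(2 * N):
        a = teacher_action(state)
        grad = -policy_probs(W, state)
        grad[a] += 1.0              # d log p(a|s) / d logits
        W += 0.1 * np.outer(grad, features(state))
        if a == ACTIONS - 1:
            break
        state = step(state, a)

# Phase 2: REINFORCE with an efficiency-shaped reward (fewer steps is better).
for _ in range(2000):
    state = list(rng.permutation(N))
    traj, steps = [], 0
    for _ in range(3 * N):
        a = rng.choice(ACTIONS, p=policy_probs(W, state))
        traj.append((list(state), a))
        if a == ACTIONS - 1:
            break
        state = step(state, a)
        steps += 1
    reward = (2.0 if state == sorted(state) else 0.0) - 0.1 * steps
    for s, a in traj:
        grad = -policy_probs(W, s)
        grad[a] += 1.0
        W += 0.01 * reward * np.outer(grad, features(s))

# Greedy rollout on an unseen input (expected to come out sorted).
state = [2, 0, 1]
for _ in range(3 * N):
    a = int(policy_probs(W, state).argmax())
    if a == ACTIONS - 1:
        break
    state = step(state, a)
print("greedy rollout result:", state)
```

In this toy setup the interface (out-of-order indicators, a fixed action set) is what lets a single small policy apply to any permutation of the list; the step penalty in phase 2 is a stand-in for the efficiency metrics the abstract mentions.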

Cited by 4 publications (4 citation statements)
References 22 publications
“…Neural SFEs are heavily inspired by the Goemans-Williamson (Goemans & Williamson, 1995) algorithm and other SDP techniques (Iguchi et al., 2015), which lift problems onto higher dimensional spaces, solve them, and then project back down. Our approach to lifting set functions to high dimensions is motivated by the algorithmic alignment principle (Xu et al., 2019): neural networks whose computations emulate classical algorithms often generalize better with improved sample complexity (Li et al., 2020; Xu et al., 2019). Emulating algorithmic and logical operations is the focus of Neural Algorithmic Reasoning (Veličković et al., 2019; Dudzik & Veličković, 2022; Deac et al., 2021) and work on knowledge graphs (Hamilton et al., 2018; Ren et al., 2019; Arakelyan et al., 2020), which also emphasize operating in higher dimensions.…”
Section: Related Work
confidence: 99%
“…General reasoning systems, however, need to be able to expand beyond this type of generalization. OOD generalization (Li et al., 2020) is paramount, as one generally cannot control the distribution a model will face over time when deployed.…”
Section: Motivation
confidence: 99%
“…Neural program induction and synthesis. Program induction methods [20, 24-36] aim to implicitly induce the underlying programs so as to mimic the behaviors demonstrated in given task specifications such as input/output pairs or expert demonstrations. On the other hand, program synthesis methods [16-19, 21, 37-58] explicitly synthesize the underlying programs and execute them to perform tasks from specifications such as input/output pairs, demonstrations, or language instructions.…”
Section: Related Work
confidence: 99%