2021
DOI: 10.1016/j.patter.2021.100273

Neural algorithmic reasoning

Abstract: We present neural algorithmic reasoning-the art of building neural networks that are able to execute algorithmic computation-and provide our opinion on its transformative potential for running classical algorithms on inputs previously considered inaccessible to them.
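The blueprint the paper proposes is commonly summarized as encode-process-decode: an encoder lifts the raw inputs into a high-dimensional latent space, a processor network (typically a graph neural network) imitates the algorithm's iterative steps in that space, and a decoder reads the answer back out. Below is a minimal sketch of that shape; the message-passing processor, dimensions, and names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (hypothetical, not from the paper).
IN_DIM, HID_DIM, OUT_DIM, STEPS = 4, 16, 1, 8

# Encoder: lift raw node features into a high-dimensional latent space.
W_enc = rng.normal(0, 0.1, (IN_DIM, HID_DIM))
# Processor: one shared message-passing step, applied repeatedly the
# way a classical algorithm applies its main loop.
W_msg = rng.normal(0, 0.1, (HID_DIM, HID_DIM))
# Decoder: read the prediction back out of the latent space.
W_dec = rng.normal(0, 0.1, (HID_DIM, OUT_DIM))

def nar_forward(x, adj):
    """x: (n, IN_DIM) node features; adj: (n, n) adjacency matrix."""
    h = np.maximum(x @ W_enc, 0.0)        # encode
    for _ in range(STEPS):                # process: algorithm-like loop
        msgs = adj @ (h @ W_msg)          # aggregate neighbour messages
        h = np.maximum(h + msgs, 0.0)     # residual update with ReLU
    return h @ W_dec                      # decode

n = 5
x = rng.normal(size=(n, IN_DIM))
adj = (rng.random((n, n)) < 0.4).astype(float)
print(nar_forward(x, adj).shape)  # (5, 1): one prediction per node
```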

Cited by 22 publications (17 citation statements). References 58 publications (117 reference statements).
“…Specifically, our key contribution is to develop an SDP analog of our original LP formulation, and show how to lift LP-based extensions into corresponding high-dimensional SDP-based extensions. Our general procedure for lifting low-dimensional representations into higher dimensions aligns with the neural algorithmic reasoning blueprint (Veličković & Blundell, 2021), and suggests that classical techniques such as SDPs may be effective tools for combining deep learning with algorithmic processes more generally.…”
Section: Introduction
confidence: 80%
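A classic instance of the SDP lifting the excerpt above refers to is the Goemans-Williamson Max-Cut relaxation: the low-dimensional variables x in {-1, +1}^n are replaced by a positive semidefinite matrix X approximating x xᵀ. A minimal sketch using cvxpy; the toy graph, solver choice, and rounding seed are illustrative, and this is a generic example of SDP lifting rather than the citing paper's specific formulation.

```python
import numpy as np
import cvxpy as cp

# Toy weighted graph (illustrative): edges as (i, j, weight).
n = 4
edges = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (3, 0, 1.0), (0, 2, 1.0)]

# Lifting: replace x in {-1, +1}^n by X ~ x x^T, relaxed to any
# positive semidefinite matrix with unit diagonal.
X = cp.Variable((n, n), symmetric=True)
constraints = [X >> 0, cp.diag(X) == 1]
cut_value = sum(w * (1 - X[i, j]) / 4 for i, j, w in edges)
cp.Problem(cp.Maximize(cut_value), constraints).solve()

# Rounding: factor X into one lifted vector per node, then split the
# vectors with a random hyperplane (Goemans & Williamson, 1995).
vals, vecs = np.linalg.eigh(X.value)
V = vecs * np.sqrt(np.clip(vals, 0.0, None))  # rows: lifted node vectors
sides = np.sign(V @ np.random.default_rng(0).normal(size=n))
print("cut assignment:", sides)
```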
“…In both machine learning and optimization, it has been observed that high-dimensional representations can make problems "easier". For instance, neural networks rely on high-dimensional internal representations for representational power and to allow information to flow through gradients, and performance suffers considerably when undesirable low-dimensional bottlenecks are introduced into network architectures (Belkin et al, 2019;Veličković & Blundell, 2021). In optimization, lifting to higher-dimensional spaces can make the problem more well-behaved (Goemans & Williamson, 1995;Shawe-Taylor et al, 2004;Du et al, 2018).…”
Section: Introduction
confidence: 99%
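The bottleneck effect described in the excerpt above can be seen even in the linear case: information squeezed through a low-rank map cannot be recovered. A toy, entirely illustrative demonstration, using the fact that the optimal linear autoencoder is a top-k PCA projection:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))  # 200 samples of 8-d data (illustrative)

def bottleneck_mse(X, k):
    """Best least-squares reconstruction of X through a rank-k linear
    bottleneck (equivalent to projecting onto the top-k PCA subspace)."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    P = Vt[:k].T @ Vt[:k]          # project to top-k subspace and back
    return np.mean((Xc - Xc @ P) ** 2)

for k in (1, 2, 4, 8):
    print(f"bottleneck width {k}: mse {bottleneck_mse(X, k):.4f}")
# Reconstruction error shrinks as the bottleneck widens; at width 8
# (no bottleneck) it is essentially zero.
```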
“…To thoroughly assess the generalisation capability with respect to graph size, we generated 9 different test sets (128 graphs each) with an increasing number of nodes, namely 16, 32, 64, 96, 128, 160, 192, 224, and 256 nodes. In order to train the network on intermediate steps, we utilise the CLRS benchmark (Veličković et al, 2021) to generate training data.…”
Section: Experimental Evaluation
confidence: 99%
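For reference, loading training trajectories (including the intermediate "hint" steps mentioned above) from the CLRS benchmark typically looks like the sketch below. The folder, algorithm, and batch size are illustrative; the create_dataset call follows the clrs package's documented usage, which may differ across versions.

```python
import clrs

# Illustrative setup: cache the CLRS-30 data under /tmp/CLRS30 and
# stream batched BFS trajectories with per-step algorithm hints.
train_ds, num_samples, spec = clrs.create_dataset(
    folder='/tmp/CLRS30',
    algorithm='bfs',
    split='train',
    batch_size=32,
)

for feedback in train_ds.as_numpy_iterator():
    # feedback.features holds inputs and per-step hints;
    # feedback.outputs holds the final result used for supervision.
    break
```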
“…This has motivated research on networks that can perform algorithmic reasoning (Graves et al, 2014;Zaremba & Sutskever, 2014;Bieber et al, 2020). Neural networks that accurately represent the semantics of programs could enable a variety of downstream tasks, including program synthesis (Devlin et al, 2017), program analysis (Allamanis et al, 2018), and other algorithmic reasoning tasks (Velickovic & Blundell, 2021).…”
Section: Introduction
confidence: 99%