2017
DOI: 10.48550/arxiv.1705.09231
Preprint

Neural Attribute Machines for Program Generation

Abstract: Recurrent neural networks have achieved remarkable success at generating sequences with complex structures, thanks to advances that include richer embeddings of input and cures for vanishing gradients. Trained only on sequences from a known grammar, though, they can still struggle to learn rules and constraints of the grammar. Neural Attribute Machines (NAMs) are equipped with a logical machine that represents the underlying grammar, which is used to teach the constraints to the neural machine by (i) augmentin…
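The abstract is cut off above, but the core idea it describes, a logical machine that keeps the neural machine inside the grammar, can be sketched. The following is a minimal illustration, not the authors' NAM architecture: the toy grammar, the stand-in score function, and the depth cap are all invented here. A pushdown machine determines which productions are legal at each step, and the (fake) network scores are only ever applied to those choices, so output is grammatical by construction.

import random

# Toy grammar: keys are nonterminals, values are alternative productions.
GRAMMAR = {
    "E": [["T", "+", "E"], ["T"]],
    "T": [["(", "E", ")"], ["x"], ["y"]],
}

def fake_network_scores(n):
    # Stand-in for a trained model's unnormalized scores.
    return [random.random() for _ in range(n)]

def generate(start="E", max_stack=20):
    stack, out = [start], []
    while stack:
        sym = stack.pop()
        if sym not in GRAMMAR:          # terminal: emit it
            out.append(sym)
            continue
        options = GRAMMAR[sym]
        # The "logical machine": once the stack grows too deep, keep only
        # the production with the fewest nonterminals, so generation halts.
        if len(stack) > max_stack:
            options = [min(options, key=lambda p: sum(s in GRAMMAR for s in p))]
        scores = fake_network_scores(len(options))
        # Grammar-constrained choice: only productions of `sym` are ever
        # scored, so the emitted sequence is grammatical by construction.
        best = max(range(len(options)), key=scores.__getitem__)
        stack.extend(reversed(options[best]))
    return " ".join(out)

print(generate())   # always grammatical, e.g. "x + ( y )"

NAMs go further than this context-free sketch: they are built on attribute grammars, which can also encode context-sensitive rules such as using only variables that have been declared.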

Cited by 7 publications (9 citation statements). References 19 publications.

“…This simple process always produces correct code, even when the network does not learn to produce it. In contrast, Amodio et al [16] create a significantly more complex model that aims to learn to enforce deterministic constraints of code generation, rather than enforcing them directly on the output. We further discuss the issue of embedding constraints and problem structure in models vs. learning the constraints in Section 6.…”
Section: Code-generating Probabilistic Models Of Source Code
confidence: 99%
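The "simple process" contrasted with Amodio et al.'s approach, enforcing deterministic constraints directly on the output, amounts to masking the model's next-token distribution and renormalizing at each step. A minimal sketch, assuming an invented balanced-parentheses checker in place of a real validity oracle:

def valid_next(prefix, vocab):
    # Toy checker: a ")" is only legal while some "(" is still open.
    depth = prefix.count("(") - prefix.count(")")
    return {t for t in vocab if not (t == ")" and depth == 0)}

def constrained_step(prefix, probs):
    # Zero out tokens the checker rejects, then renormalize.
    allowed = valid_next(prefix, probs.keys())
    masked = {t: (p if t in allowed else 0.0) for t, p in probs.items()}
    total = sum(masked.values())
    return {t: p / total for t, p in masked.items()}

# A fake model distribution that puts mass on an illegal ")".
model_probs = {"(": 0.2, ")": 0.5, "x": 0.3}
print(constrained_step([], model_probs))
# {'(': 0.4, ')': 0.0, 'x': 0.6} -- the invalid ")" is filtered out.

The output is then well-formed whether or not the network ever learns the constraint, which is exactly the trade-off against learned enforcement that the snippet points to.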
“…Of course, there has been a lot of other research in compiler fuzzing. Several works [28], [29] apply machine learning to improve the fuzzing performance. CodeAlchemist [30] is based on the idea of semantic-aware assembly of JavaScript programs.…”
Section: Related Work
confidence: 99%
“…In an AST, a program is parsed into a hierarchy of non-terminal and terminal (leaf) nodes based on the syntax of a programming language. To utilize AST for code representation, the simplest way would be to use depth-first search to convert an AST into a sequence [12,54,149]. Other studies proposed DL models (Recursive Neural Networks [267], Tree-LSTM [263] or CNN [185]) to work directly on the hierarchical structure of a parse tree.…”
Section: Deep Encoder Models
confidence: 99%
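The depth-first serialization mentioned above is easy to sketch. The Node representation and labels below are assumptions for illustration, not taken from the cited papers:

from dataclasses import dataclass, field

@dataclass
class Node:
    label: str                      # e.g. a nonterminal or a token
    children: list = field(default_factory=list)

def dfs_linearize(node):
    # Pre-order DFS: emit each node's label before its children.
    tokens = [node.label]
    for child in node.children:
        tokens.extend(dfs_linearize(child))
    return tokens

# AST for the expression `x + 1`:
ast = Node("BinOp", [Node("Name", [Node("x")]),
                     Node("+"),
                     Node("Num", [Node("1")])])
print(dfs_linearize(ast))   # ['BinOp', 'Name', 'x', '+', 'Num', '1']

Serializations used in practice typically also emit closing markers so that the original tree can be reconstructed from the token sequence.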
“…[Table: Seq2seq for Bash shell (Lin et al. 2017 [154]) and for CSS and HTML (Beltramelli 2017 [21]); Seq2seq with grammar constraint for simple C code (Amodio et al. 2017 [12]); Seq2AST for domain-specific language (Dong and Lapata 2016 [61]) and for card-game code; reinforcement learning for SQL queries (Zhong et al. 2017 [286]).] …language to use input-output examples to create a complete program. Differentiable approaches often perform worse than the search-based methods for low-level programming languages (e.g., Assembly) [72].…”
Section: Seq2seq
confidence: 99%