Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2017
DOI: 10.18653/v1/p17-1041
A Syntactic Neural Model for General-Purpose Code Generation

Abstract: We consider the problem of parsing natural language descriptions into source code written in a general-purpose programming language like Python. Existing data-driven methods treat this problem as a language generation task without considering the underlying syntax of the target programming language. Informed by previous work in semantic parsing, in this paper we propose a novel neural architecture powered by a grammar model to explicitly capture the target syntax as prior knowledge. Experiments find this an effective way to scale up to generation of complex programs from natural language descriptions, achieving state-of-the-art results that well outperform previous code generation and semantic parsing approaches.
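The abstract's key idea is generating code as a derivation of the target language's grammar rather than as a flat token stream, so every output is syntactically well-formed by construction. As a minimal illustration only (not the paper's model, which predicts AST-construction actions with a neural network), the sketch below builds a Python AST node by node and serializes it; the build_sorted_call helper and its example are our own:

```python
import ast

# Toy illustration of syntax-aware generation: instead of emitting source
# tokens directly, construct an abstract syntax tree node by node, then
# serialize it. Syntactically invalid programs are unrepresentable.
def build_sorted_call(collection: str, key_attr: str) -> ast.Module:
    """AST for: sorted(<collection>, key=lambda x: x.<key_attr>)"""
    call = ast.Call(
        func=ast.Name(id="sorted", ctx=ast.Load()),
        args=[ast.Name(id=collection, ctx=ast.Load())],
        keywords=[ast.keyword(
            arg="key",
            value=ast.Lambda(
                args=ast.arguments(
                    posonlyargs=[], args=[ast.arg(arg="x", annotation=None)],
                    vararg=None, kwonlyargs=[], kw_defaults=[], kwarg=None,
                    defaults=[]),
                body=ast.Attribute(value=ast.Name(id="x", ctx=ast.Load()),
                                   attr=key_attr, ctx=ast.Load())))],
    )
    module = ast.Module(body=[ast.Expr(value=call)], type_ignores=[])
    return ast.fix_missing_locations(module)

tree = build_sorted_call("items", "price")
print(ast.unparse(tree))  # Python 3.9+: sorted(items, key=lambda x: x.price)
```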


Cited by 467 publications (511 citation statements) · References 35 publications
“…This is opposite to the conclusion achieved by Liu et al. [23]. One possible reason is that the tasks between ours and Liu et al.'s [23] are different, i.e., Liu et al. aim at producing texts based on code, while we focus on generating texts for dialogues, and modeling code is different from modeling dialogue texts [53], [54]. The higher BLEU-4 score of the proposed RRGen model than that of the NMT model indicates that the responses generated by the RRGen model are more similar to developers' responses than those generated by the NMT model.…”
Section: Evaluation Using An Automatic Metric (contrasting)
confidence: 75%
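The statement above ranks systems by BLEU-4, i.e., BLEU computed with uniform weights over 1- to 4-gram precisions. A minimal sketch of a sentence-level BLEU-4 computation with NLTK (the reference and candidate strings are hypothetical):

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

# Hypothetical example: score a generated developer response against a
# reference with BLEU-4 (uniform weights over 1- to 4-gram precisions).
reference = "thanks for the report , we will fix the crash in the next release".split()
candidate = "thanks for reporting , we will fix this crash in the next release".split()

smooth = SmoothingFunction().method1  # avoids zero scores on short sentences
bleu4 = sentence_bleu([reference], candidate,
                      weights=(0.25, 0.25, 0.25, 0.25),
                      smoothing_function=smooth)
print(f"BLEU-4: {bleu4:.3f}")
```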
“…Neural semantic parsing approaches have been gaining increasing attention in recent years, eschewing the need for extensive feature engineering (Jia and Liang, 2016; Ling et al., 2016; Xiao et al., 2016). Some efforts have been made to utilize the syntax of logical forms (Rabinovich et al., 2017; Krishnamurthy et al., 2017; Cheng et al., 2017; Yin and Neubig, 2017). For example, Dong and Lapata (2016) and Alvarez-Melis and Jaakkola (2017) leverage an attention-based encoder-decoder framework to translate a natural language question to a tree-structured logical form.…”
Section: Related Work (mentioning)
confidence: 99%
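Several of the models cited above are attention-based encoder-decoders. As a minimal sketch of the attention step alone (the shapes and the attend helper are our assumptions; the cited models learn these representations end to end):

```python
import numpy as np

# Scaled dot-product attention over encoder states, the core step in
# attention-based encoder-decoder parsers. Assumed shapes:
# enc_states: (T, d) source-side hidden states; dec_state: (d,).
def attend(dec_state: np.ndarray, enc_states: np.ndarray) -> np.ndarray:
    scores = enc_states @ dec_state / np.sqrt(dec_state.shape[0])  # (T,)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()          # softmax over source positions
    return weights @ enc_states       # (d,) context vector for decoding

rng = np.random.default_rng(0)
enc = rng.normal(size=(6, 8))  # 6 source tokens, hidden size 8
context = attend(rng.normal(size=8), enc)
print(context.shape)  # (8,)
```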
“…Particularly for IFTTT rules, Quirk et al. collected 114,408 IF-THEN rules and their natural-language descriptions from the IFTTT website, and demonstrated the possibility of producing IF-THEN rules from the corresponding descriptive text (Quirk et al., 2015). Several follow-up works that proposed different approaches, such as an attention-enhanced encoder-decoder model (Dong and Lapata, 2016), latent attention (Liu et al., 2016), or a syntactic neural model (Yin and Neubig, 2017), further improved the accuracy of IFTTT rule generation. In the context of conversational assistance, Chaurasia et al. created an automated dialog system that generates IFTTT rules by having a conversation with users (Chaurasia and Mooney, 2017).…”
Section: Automatic If-Then Rules Generation (mentioning)
confidence: 99%
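The IFTTT line of work treats an IF-THEN rule as a small structured object rather than free text; Quirk et al. (2015) decompose each rule into a trigger channel, trigger function, action channel, and action function. A minimal sketch of that target representation (field values are hypothetical):

```python
from dataclasses import dataclass

# An IF-THEN rule decomposed into the four fields commonly used as
# prediction targets in the IFTTT literature. Example values hypothetical.
@dataclass(frozen=True)
class IftttRule:
    trigger_channel: str
    trigger_function: str
    action_channel: str
    action_function: str

# "If it starts raining, send me an SMS."
rule = IftttRule(trigger_channel="Weather",
                 trigger_function="current_condition_changes_to_rain",
                 action_channel="SMS",
                 action_function="send_me_an_sms")
print(rule)
```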