Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval 2020
DOI: 10.1145/3397271.3401215

Relevance Transformer: Generating Concise Code Snippets with Relevance Feedback

Abstract: Tools capable of automatic code generation have the potential to augment programmers' capabilities. While straightforward code retrieval is incorporated into many IDEs, an emerging area is explicit code generation. Code generation is currently approached as a Machine Translation task, with Recurrent Neural Network (RNN) based encoder-decoder architectures trained on code-description pairs. In this work we introduce and study modern Transformer architectures for this task. We further propose a new model called …
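
A point of reference for the framing above: code generation as sequence-to-sequence translation from a natural-language description to code tokens. The sketch below uses PyTorch's built-in nn.Transformer; the vocabulary sizes, learned positional embeddings, and teacher-forcing setup are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch (not the paper's model): description-to-code translation
# with a standard Transformer encoder-decoder. All sizes are assumptions.
import torch
import torch.nn as nn

class Desc2CodeTransformer(nn.Module):
    def __init__(self, src_vocab, tgt_vocab, d_model=256, nhead=4,
                 num_layers=3, dim_ff=512, max_len=512):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, d_model)
        self.tgt_emb = nn.Embedding(tgt_vocab, d_model)
        self.pos_emb = nn.Embedding(max_len, d_model)  # learned positions
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=nhead,
            num_encoder_layers=num_layers, num_decoder_layers=num_layers,
            dim_feedforward=dim_ff, batch_first=True)
        self.out = nn.Linear(d_model, tgt_vocab)

    def _embed(self, token_emb, ids):
        pos = torch.arange(ids.size(1), device=ids.device)
        return token_emb(ids) + self.pos_emb(pos)

    def forward(self, src_ids, tgt_ids):
        # Causal mask: each code token attends only to earlier code tokens.
        tgt_mask = nn.Transformer.generate_square_subsequent_mask(tgt_ids.size(1))
        h = self.transformer(self._embed(self.src_emb, src_ids),
                             self._embed(self.tgt_emb, tgt_ids),
                             tgt_mask=tgt_mask)
        return self.out(h)  # (batch, tgt_len, tgt_vocab) logits

# Toy usage with teacher forcing: train against targets shifted by one.
model = Desc2CodeTransformer(src_vocab=1000, tgt_vocab=1200)
src = torch.randint(0, 1000, (2, 8))   # tokenized descriptions
tgt = torch.randint(0, 1200, (2, 5))   # code token prefixes
logits = model(src, tgt)               # shape (2, 5, 1200)
```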


Cited by 16 publications (28 citation statements). References 17 publications.

Citation statements, ordered by relevance:
“…(Liguori et al, 2021a)). Both exact match and averaged token level BLEU scores have been extensively used in evaluating code generation models (Liguori et al, 2021a,b; Oda et al, 2015b; Ling et al, 2016; Gemmell et al, 2020). It is becoming increasingly important to note the drawback of using BLEU to evaluate code generation systems.…”
Section: Discussion (citation type: mentioning; confidence: 99%)
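
The excerpt above names the two metrics most often reported for code generation, exact match and token-level BLEU, and flags BLEU's drawback. The hedged sketch below computes both with NLTK on whitespace-split tokens (an assumption; the cited papers may tokenize code differently) and illustrates the drawback: a snippet with the wrong operator still earns a high BLEU score.

```python
# Hedged sketch of exact match and token-level BLEU for code generation.
# Whitespace tokenization is an assumption; the cited papers may differ.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference  = "def add(a, b):\n    return a + b"
hypothesis = "def add(a, b):\n    return a - b"  # wrong operator

exact_match = reference == hypothesis  # strict string equality -> False

ref_tokens = reference.split()
hyp_tokens = hypothesis.split()
# Smoothing avoids zero scores when some n-gram order has no matches.
bleu = sentence_bleu([ref_tokens], hyp_tokens,
                     smoothing_function=SmoothingFunction().method1)
print(exact_match, round(bleu, 2))  # False 0.64: high BLEU, wrong code
```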
“…(Chen Jr and Bunescu, 2019; Jia and Liang, 2016; Yin and Neubig, 2017a; Rabinovich et al, 2017; Ling et al, 2016; Sun et al, 2019a)), and transformer-based architectures (e.g. (Kacupaj et al, 2021; Ferraro and Suominen, 2020; Shen et al, 2019; Sun et al, 2019b; Gemmell et al, 2020; Kusupati and Ailavarapu)). In the late 1970s, Hendrix et al (1978) pioneered the task of interfacing with databases using natural language.…”
Section: Semantic Parsing (citation type: mentioning; confidence: 99%)
“…Ling et al [56] and Yin and Neubig [88] proposed a novel neural architecture for code generation, while Xu et al [84] incorporated pre-training and fine-tuning of a model to generate Python snippets from natural language using the CoNaLa dataset [87]. Furthermore, Gemmell et al [30] used a transformer architecture with relevance feedback for code generation, and reported improvements over state-of-the-art on several datasets. There also exist approaches that perform the reverse task, i.e., generating natural language from code.…”
Section: Neural Machine Translation for Code Generation (citation type: mentioning; confidence: 99%)
“…Most Transformer-based rerankers are applied to individual query-document pairs. Some research explores jointly modeling multiple top retrieved documents in a Transformer architecture for question clarification [11], question answering [10,14] or code generation [8]. The effectiveness of using top retrieved documents in Transformer rerankers remains to be studied.…”
Section: Related Work (citation type: mentioning; confidence: 99%)
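
The excerpt above contrasts per-pair rerankers with models that read several top retrieved documents jointly. For concreteness, the sketch below shows the standard per-pair baseline: a cross-encoder scores each (query, document) pair independently and candidates are re-sorted by score. The checkpoint name is a widely used public one and is an assumption here, not taken from the cited work.

```python
# Hedged sketch of per-pair Transformer reranking with a cross-encoder.
# Model choice is an assumption; any pairwise relevance model would do.
from sentence_transformers import CrossEncoder

model = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")

query = "how to sort a list in python"
docs = [
    "sorted() returns a new sorted list from an iterable.",
    "The GIL serializes Python bytecode execution.",
    "list.sort() sorts a list in place.",
]
# Each pair is scored independently; no document sees the others.
scores = model.predict([(query, d) for d in docs])
for doc, score in sorted(zip(docs, scores), key=lambda p: p[1], reverse=True):
    print(f"{score:.2f}  {doc}")
```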