2021
DOI: 10.48550/arxiv.2108.01585
Preprint

An Empirical Study on the Usage of Transformer Models for Code Completion

Matteo Ciniselli,
Nathan Cooper,
Luca Pascarella
et al.

Abstract: Code completion aims at speeding up code writing by predicting the next code token(s) the developer is likely to write. Works in this field focused on improving the accuracy of the generated predictions, with substantial leaps forward made possible by deep learning (DL) models. However, code completion techniques are mostly evaluated in the scenario of predicting the next token to type, with few exceptions pushing the boundaries to the prediction of an entire code statement. Thus, little is known about the per…
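For readers unfamiliar with the task the abstract describes, the sketch below shows what token-level code completion with a pretrained Transformer looks like in practice. It is only an illustration: the checkpoint name ("gpt2"), the prompt, and the decoding settings are placeholders, not the models or experimental setup evaluated in the paper.

```python
# Minimal sketch of next-token code completion with a generic
# pretrained Transformer language model (Hugging Face transformers).
# The checkpoint "gpt2" and the prompt are illustrative assumptions,
# not the models studied in the paper.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Partially written code: the model predicts the most likely continuation.
prompt = "def read_file(path):\n    with open(path) as f:\n        return f."
inputs = tokenizer(prompt, return_tensors="pt")

# Generating a few tokens approximates "next token(s)" completion;
# a longer horizon approximates statement-level completion.
outputs = model.generate(
    **inputs,
    max_new_tokens=8,
    do_sample=False,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```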


Cited by 1 publication (1 citation statement)
References: 54 publications
“…Dibia et al [34] and Vale et al [35] developed a usable library for question answering with contextual query expansion and a question-answering assistant for software development using a transformer-based language model, respectively. Ciniselli et al [36] performed an empirical study on the usage of Transformer Models for code search and completion.…”
Section: B. API Resource Retrieval
confidence: 99%