2021
DOI: 10.1109/tse.2021.3128234

An Empirical Study on the Usage of Transformer Models for Code Completion

Cited by 44 publications (39 citation statements)
References 60 publications

“…The goal of this study is to understand the extent to which DL-based code recommenders are prone to suggest code snippets being clones of instances present in the training set. The context is represented by the four training datasets described in Section 2.2 and by the snippets of code generated by a state-of-the-art code recommender built on top of the Text-To-Text-Transfer-Transformer (T5) [12].…”
Section: Methods (citation type: mentioning; confidence: 99%)
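This statement concerns measuring how often a neural code recommender outputs clones of its training data. As a rough, illustrative sketch only (not the cited study's actual methodology, which relies on dedicated clone detectors and distinguishes clone types), the snippet below flags generated snippets that are Type-1 clones of training instances, i.e., identical after whitespace normalization; the function names are hypothetical.

```python
# Minimal sketch of a Type-1 clone check between model-generated snippets
# and a training corpus. Here a "clone" means the two snippets are
# identical once whitespace differences are collapsed. Illustrative only;
# not the pipeline used in the cited study.

def normalize(snippet: str) -> str:
    """Collapse all whitespace runs so formatting differences don't mask clones."""
    return " ".join(snippet.split())

def find_training_clones(generated: list[str], training: list[str]) -> list[str]:
    """Return the generated snippets that exactly match some training instance."""
    training_set = {normalize(t) for t in training}
    return [g for g in generated if normalize(g) in training_set]

# Example: the first generated snippet is a (reformatted) training clone.
train = ["return a + b;", "if (x == null) { throw new IllegalArgumentException(); }"]
gen = ["return  a +\n b;", "return a - b;"]
print(find_training_clones(gen, train))  # ['return  a +\n b;']
```

A fuller analysis would also cover near-miss (Type-2/Type-3) clones, e.g., matches up to renamed identifiers, which a simple exact-match set lookup cannot capture.
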
“…Indeed, while we have access to Copilot, the dataset used to train it is not publicly available, making it impossible to check whether the code it recommends is a clone of the instances in its training set. For this reason, we focused on another DL-based code recommender recently proposed by Ciniselli et al. [12].…”
Section: Study Context: DL-based Code Recommender (citation type: mentioning; confidence: 99%)