2021 IEEE International Conference on Software Maintenance and Evolution (ICSME)
DOI: 10.1109/icsme52107.2021.00021
An Empirical Study on Code Comment Completion

Cited by 15 publications (8 citation statements) · References 66 publications
“…Currently, the efforts focus on studying Copilot's capabilities for generating (and reproducing) correct and efficient solutions to fundamental algorithmic problems, and on comparing Copilot's proposed solutions with those of human programmers on a set of programming tasks [19]. With the same goal of evaluating GitHub Copilot, an empirical study aims to understand whether different but semantically equivalent natural language descriptions result in the same recommended function [20].…”
Section: Related Work (mentioning)
confidence: 99%
“…A T5 model is trained in two phases: (i) pre-training, in which the model is trained with a self-supervised objective that builds a shared knowledge base useful for a large class of tasks, and (ii) fine-tuning, which specializes the model on a downstream task (e.g., language translation). T5 has already shown its effectiveness in code-related tasks [13], [18], [28]-[32]. However, its application to the generation of Dockerfiles is novel and still unexplored.…”
Section: Training T5 For Generating Dockerfiles (mentioning)
confidence: 99%
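
The two-phase recipe described in this excerpt can be illustrated with a minimal sketch built on the Hugging Face transformers API. The checkpoint name, the toy inputs, and the "generate dockerfile" task framing below are illustrative assumptions, not the setup used in the cited work.

# Minimal sketch of T5's two training phases (assumed public t5-small checkpoint).
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Phase 1 -- self-supervised pre-training (span corruption): corrupted spans in
# the input are replaced by sentinel tokens and the model reconstructs them.
inputs = tokenizer("The <extra_id_0> walks in <extra_id_1> park",
                   return_tensors="pt")
labels = tokenizer("<extra_id_0> cute dog <extra_id_1> the <extra_id_2>",
                   return_tensors="pt").input_ids
pretrain_loss = model(input_ids=inputs.input_ids, labels=labels).loss

# Phase 2 -- supervised fine-tuning on a downstream text-to-text task
# (here a hypothetical "generate dockerfile" task, purely for illustration).
task_input = tokenizer("generate dockerfile: python web app with flask",
                       return_tensors="pt")
task_labels = tokenizer("FROM python:3.10\nRUN pip install flask",
                        return_tensors="pt").input_ids
finetune_loss = model(input_ids=task_input.input_ids, labels=task_labels).loss
finetune_loss.backward()  # an optimizer step would follow in a real training loop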
“…However, its application to the generation of Dockerfiles is novel and still unexplored. As done in previous work [13], [18], we use the smallest T5 version available (T5 small), which is composed of 60M parameters.…”
Section: Training T5 For Generating Dockerfiles (mentioning)
confidence: 99%
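
The "60M parameters" figure for T5 small can be sanity-checked by counting the parameters of the publicly available checkpoint; this assumes the public Hugging Face weights, which may differ slightly from the copy used by the cited authors.

# Quick parameter count for the assumed public t5-small checkpoint.
from transformers import T5ForConditionalGeneration

model = T5ForConditionalGeneration.from_pretrained("t5-small")
n_params = sum(p.numel() for p in model.parameters())
print(f"t5-small parameters: {n_params / 1e6:.0f}M")  # roughly 60M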
“…In our recent work [44] we empirically investigated the potential of a T5 model when pre-trained and fine-tuned to support four code-related tasks also characterized by text-to-text transformations. In particular, we started by pre-training a T5 model using a large dataset consisting of 499,618 English sentences and 1,569,889 source code components (i.e., Java methods).…”
Section: Introduction (mentioning)
confidence: 99%
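
As a rough illustration of the text-to-text framing mentioned in this excerpt, several code-related tasks can be expressed as (source, target) string pairs consumed by one seq2seq model. The task prefixes and examples below are hypothetical and do not reproduce the dataset or format of [44].

# Sketch: casting code-related tasks as text-to-text pairs for a single T5 model.
from transformers import T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")

# Each task becomes a plain (source text, target text) pair.
pairs = [
    ("summarize java: public int add(int a, int b) { return a + b; }",
     "Returns the sum of the two arguments."),
    ("complete comment: /* Sorts the list in",
     "ascending order. */"),
]
sources = tokenizer([s for s, _ in pairs], padding=True, return_tensors="pt")
targets = tokenizer([t for _, t in pairs], padding=True, return_tensors="pt").input_ids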