2021 IEEE/ACM 29th International Conference on Program Comprehension (ICPC)
DOI: 10.1109/icpc52881.2021.00022

Exploiting Method Names to Improve Code Summarization: A Deliberation Multi-Task Learning Approach

Cited by 18 publications (7 citation statements)
References 21 publications

“…The method name, which is present in every method, can be seen as an extremely concise summary of the method. Xie et al. [57] analyze the method names and code summaries in a Java dataset built by LeClair and McMillan [31], and find that on average 50.6% of the words in method names appear in the corresponding summaries, and 21.3% of the words in the summaries appear in the corresponding method names. For about 20% of the methods, all the words in the method names appear in the corresponding summaries.…”
Section: Method Name Generation
Confidence: 99%
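The overlap statistics quoted above are straightforward to reproduce. Below is a minimal Python sketch of the word-overlap computation on a hypothetical name/summary pair; it illustrates the metric described, not Xie et al.'s actual analysis script.

```python
import re

def split_identifier(name):
    """Split a camelCase / snake_case identifier into lowercase words."""
    spaced = re.sub(r"([a-z0-9])([A-Z])", r"\1 \2", name).replace("_", " ")
    return [w.lower() for w in spaced.split() if w]

def overlap_ratios(method_name, summary):
    """Return (share of name words found in the summary,
               share of summary words found in the name)."""
    name_words = set(split_identifier(method_name))
    summary_words = set(summary.lower().split())
    return (len(name_words & summary_words) / len(name_words),
            len(summary_words & name_words) / len(summary_words))

# Hypothetical example pair:
print(overlap_ratios("readFileToString",
                     "read the contents of a file into a string"))
# -> (0.75, 0.375): 3 of the 4 name words occur in the summary
```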
“…Recently, Neural Machine Translation (NMT) based models have been exploited to generate summaries for code snippets. CodeNN [19] is an early attempt that uses only code token sequences, followed by various approaches that utilize ASTs [2,15,16,21,22,25], API knowledge [17], type information [4], global context [13,25], reinforcement learning [42,43], multi-task and dual learning [44,48,50], and pretrained language models [8].…”
Section: Comment Generation
Confidence: 99%
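To make the NMT framing concrete, here is a bare-bones token-sequence encoder-decoder of the kind these approaches build on. PyTorch is an assumption, and the sketch deliberately omits the attention, AST, and other signals the cited systems add; it is a minimal illustration, not any one paper's model.

```python
import torch.nn as nn

class Seq2SeqSummarizer(nn.Module):
    """Minimal encoder-decoder over code token sequences (sketch only)."""
    def __init__(self, code_vocab, text_vocab, dim=256):
        super().__init__()
        self.code_emb = nn.Embedding(code_vocab, dim)
        self.text_emb = nn.Embedding(text_vocab, dim)
        self.encoder = nn.GRU(dim, dim, batch_first=True)
        self.decoder = nn.GRU(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, text_vocab)

    def forward(self, code_ids, summary_ids):
        _, h = self.encoder(self.code_emb(code_ids))          # encode code tokens
        dec, _ = self.decoder(self.text_emb(summary_ids), h)  # teacher forcing
        return self.out(dec)                                  # per-step vocab logits
```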
“…Yao et al. [353] and Ye et al. [355] used the StaQC dataset [354]; it contains more than 119 thousand pairs of question titles and SQL code snippets mined from Stack Overflow. Xie et al. [345] utilized two existing datasets, one each for Java [176] and Python [42]. Bansal et al. [40] evaluated their code summarization technique using a dataset of 2.1M Java methods from 28K projects created by LeClair and McMillan [176].…”
Section: Data Collection and Processing
Confidence: 99%
“…Yang et al. [352] developed a multi-modal Transformer-based code summarization approach for smart contracts. Xie et al. [345] designed a novel multi-task learning (MTL) approach for code summarization that mines the relationship between code summaries and method names. Bansal et al. [40] introduced a project-level encoder DL model for code summarization.…”
Section: Model Training
Confidence: 99%
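As a rough illustration of the multi-task idea in the quoted statement, the sketch below combines the two generation objectives, summary generation and method name generation, in one weighted loss. This is a generic MTL formulation under assumed tensor shapes, not the deliberation architecture of Xie et al.; `alpha` and the PAD id of 0 are hypothetical choices.

```python
import torch.nn as nn

def multitask_loss(summary_logits, summary_gold, name_logits, name_gold, alpha=0.5):
    """Weighted sum of summary-generation and name-generation losses.

    Logits: (batch, steps, vocab); gold: (batch, steps) token ids.
    alpha is a hypothetical trade-off weight, not a value from the paper.
    """
    ce = nn.CrossEntropyLoss(ignore_index=0)  # 0 assumed to be the PAD id
    loss_summary = ce(summary_logits.transpose(1, 2), summary_gold)
    loss_name = ce(name_logits.transpose(1, 2), name_gold)
    return alpha * loss_summary + (1 - alpha) * loss_name
```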