2020
DOI: 10.1109/access.2020.3011744

Fret: Functional Reinforced Transformer With BERT for Code Summarization

Abstract: Code summarization has long been viewed as a challenge in software engineering because of the difficulties of understanding source code and generating natural language. Some mainstream methods combine abstract syntax trees with language models to capture the structural information of the source code and generate relatively satisfactory comments. However, these methods are still deficient in code understanding and limited by the long dependency problem. In this paper, we propose a novel model called Fret, which…
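The abstract describes, at a high level, pairing a BERT-based encoder with a transformer architecture to generate summaries of source code. Below is a minimal, hypothetical sketch of that general pattern (a pre-trained BERT encoder over code tokens feeding a small Transformer decoder), written with PyTorch and Hugging Face Transformers. It is not the authors' Fret implementation; the class name, layer sizes, and the `bert-base-uncased` checkpoint are illustrative assumptions.

```python
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizerFast

class BertCodeSummarizer(nn.Module):
    """Illustrative BERT-encoder / Transformer-decoder sketch for code summarization."""
    def __init__(self, vocab_size, d_model=768, n_layers=2, n_heads=8):
        super().__init__()
        # Pre-trained BERT produces contextual embeddings of the code tokens.
        self.encoder = BertModel.from_pretrained("bert-base-uncased")
        self.tgt_embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerDecoderLayer(d_model=d_model, nhead=n_heads, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=n_layers)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, code_ids, code_mask, summary_ids):
        # Encode the code sequence once; the decoder attends to it as "memory".
        memory = self.encoder(input_ids=code_ids, attention_mask=code_mask).last_hidden_state
        tgt = self.tgt_embed(summary_ids)
        t = summary_ids.size(1)
        # Causal mask so each summary position only sees earlier positions.
        causal = torch.triu(torch.full((t, t), float("-inf")), diagonal=1)
        hidden = self.decoder(tgt, memory, tgt_mask=causal)
        return self.out(hidden)  # (batch, summary_len, vocab_size) token logits

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
enc = tokenizer("def add(a, b): return a + b", return_tensors="pt")
model = BertCodeSummarizer(vocab_size=tokenizer.vocab_size)
logits = model(enc["input_ids"], enc["attention_mask"], summary_ids=enc["input_ids"][:, :8])
print(logits.shape)  # (batch, summary_len, vocab_size)
```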

Cited by 40 publications (34 citation statements)
References 38 publications
“…For each API review aspect, we evaluate the performance in terms of five evaluation metrics (i.e., P, R, F1, MCC, and AUC) as introduced in Section III-E. RQ1 Can pre-trained transformer-based models achieve better performance than the state-of-the-art approach, which is based on traditional machine learning models? Motivation Previous studies have shown the great potential of pre-trained transformer-based models on many software engineering tasks, e.g., sentiment analysis for software data [9] and code summarization [12]. However, the efficacy of the pre-trained transformer-based models for various types of … For the summative result in Table IV, we calculate the arithmetic average of the used evaluation metrics of each approach across all the aspects as avg.…”
Section: Discussion
confidence: 99%
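The excerpt names five standard classification metrics (P, R, F1, MCC, and AUC). The snippet below is a minimal sketch of how they can be computed with scikit-learn; the labels and scores are made-up toy data, not results from the citing paper.

```python
from sklearn.metrics import (precision_score, recall_score, f1_score,
                             matthews_corrcoef, roc_auc_score)

# Toy binary ground truth and predictions for one hypothetical API-review aspect.
y_true  = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred  = [1, 0, 1, 0, 0, 1, 1, 0]                      # hard predictions
y_score = [0.9, 0.2, 0.8, 0.4, 0.1, 0.6, 0.7, 0.3]       # probabilities, needed for AUC

print("P  :", precision_score(y_true, y_pred))
print("R  :", recall_score(y_true, y_pred))
print("F1 :", f1_score(y_true, y_pred))
print("MCC:", matthews_corrcoef(y_true, y_pred))
print("AUC:", roc_auc_score(y_true, y_score))
```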
“…In recent years, pre-trained transformer-based models have achieved exceptional performance in many tasks and areas, including the software engineering domain [9]–[12]. For example, Zhang et al. conduct an empirical study on benchmarking four pre-trained transformer-based models (BERT [13], RoBERTa [14], ALBERT [15], and XLNet [16]) for sentiment analysis on six software repositories (e.g., code reviews) [9].…”
Section: Introduction
confidence: 99%
“…The machine translation was significantly enhanced because they used the transformer architecture. Ahmad et al. [32] and Wang et al. [33] also used transformer-based deep learning models to complete translation tasks, and relative to the findings of previous studies, they improved the accuracy of the generated results. By contrast, Dai et al. [34] discovered that when a transformer-based deep learning model processes long text, the text is truncated into multiple fixed-length fields.…”
Section: Quantum Machine Learning
confidence: 94%
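The point attributed to Dai et al. [34] is that a vanilla transformer splits long input into fixed-length segments, losing context at each boundary. The sketch below only illustrates that fixed-length segmentation; the function name and segment length are assumptions for illustration, not code from any of the cited papers.

```python
def chunk_fixed_length(token_ids, segment_len=512):
    """Split a long token sequence into fixed-length segments, as vanilla
    transformer pipelines do when the input exceeds the context window."""
    return [token_ids[i:i + segment_len] for i in range(0, len(token_ids), segment_len)]

tokens = list(range(1300))                     # stand-in for a long tokenized document
segments = chunk_fixed_length(tokens, 512)
print([len(s) for s in segments])              # [512, 512, 276] -- context breaks at every boundary
```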
“…Feng et al. [31] proposed CodeBERT, a bi-modal pre-trained model based on the Transformer neural architecture for programming language and natural language. Wang et al. [32] proposed a BERT-based functional enhanced transformer model; they introduced a new enhancer to generate higher-quality code summaries.…”
Section: B. Deep Learning-Based Text Summarization and Source Code Sum…
confidence: 99%
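CodeBERT, mentioned in the excerpt above, is pre-trained on paired natural-language and programming-language input. The following is a minimal usage sketch of encoding such a bi-modal pair with the publicly available `microsoft/codebert-base` checkpoint; it is only an illustration of the bi-modal input format, not the setup used in the citing paper or in Fret.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")
model = AutoModel.from_pretrained("microsoft/codebert-base")

nl = "return the maximum of two numbers"
code = "def max2(a, b): return a if a > b else b"

# Tokenize the NL description and the code snippet as one bi-modal pair.
inputs = tokenizer(nl, code, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

cls_vec = outputs.last_hidden_state[:, 0]   # first-token representation of the NL-code pair
print(cls_vec.shape)                        # torch.Size([1, 768])
```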