Code summarization has long been regarded as a challenging task in software engineering because it requires both understanding source code and generating fluent natural language. Mainstream methods combine abstract syntax trees with language models to capture the structural information of source code and generate reasonably satisfactory comments. However, these methods remain deficient in code understanding and are limited by the long-range dependency problem. In this paper, we propose a novel model called Fret, which stands for Functional REinforced Transformer with BERT. The model offers a new way to generate code comments by learning code functionalities and deepening code understanding while alleviating the long-range dependency problem. To this end, a novel reinforcer is proposed to learn the functional content of code, so that more accurate summaries describing code functionalities can be generated. In addition, a more efficient algorithm is designed to capture the source code structure. Experimental results demonstrate the effectiveness of our model: Fret significantly outperforms all the state-of-the-art methods we examine, pushing the BLEU-4 score to 24.32 for Java code summarization (a 14.23% absolute improvement) and the ROUGE-L score to 40.12 for Python. An ablation study is also conducted to further explore the impact of each component of our method.
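To give a concrete sense of the AST-based structural capture that such methods rely on, the sketch below linearizes a code snippet's abstract syntax tree into a sequence of node-type tokens using Python's standard `ast` module. This is a minimal, generic illustration of feeding code structure to a sequence model, not Fret's actual structure-capturing algorithm; the function name `ast_node_sequence` is our own.

```python
import ast

def ast_node_sequence(source: str) -> list[str]:
    """Linearize a Python snippet's AST into node-type tokens.

    ast.walk yields nodes in breadth-first order; emitting each
    node's type name gives a flat structural sequence that a
    language model could consume alongside the raw tokens.
    (Illustrative only -- not the algorithm proposed in the paper.)
    """
    tree = ast.parse(source)
    return [type(node).__name__ for node in ast.walk(tree)]

# Example: a two-argument function with a single return statement.
print(ast_node_sequence("def add(a, b):\n    return a + b"))
```

Running this on the snippet yields tokens such as `Module`, `FunctionDef`, and `BinOp`, which expose the code's syntactic skeleton independently of identifier names.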