2022
DOI: 10.48550/arxiv.2204.12916
Preprint

GypSum: Learning Hybrid Representations for Code Summarization

Yu Wang, Yu Dong, Xuesong Lu, et al.

Abstract: Code summarization with deep learning has been widely studied in recent years. Current deep learning models for code summarization generally follow the principle of neural machine translation and adopt the encoder-decoder framework, where the encoder learns semantic representations from source code and the decoder transforms the learnt representations into human-readable text that describes the functionality of code snippets. Although they achieve new state-of-the-art performance, we notice that current…
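The framework the abstract describes can be made concrete with a short sketch. The model below is a generic PyTorch Transformer encoder-decoder, not GypSum's actual hybrid architecture; the vocabulary sizes, dimensions, and dummy batch are placeholders, and positional encodings are omitted for brevity.

```python
import torch
import torch.nn as nn

class CodeSummarizer(nn.Module):
    """Generic encoder-decoder code summarizer (illustrative, not GypSum itself)."""
    def __init__(self, code_vocab=50000, text_vocab=30000, d_model=512):
        super().__init__()
        self.code_embed = nn.Embedding(code_vocab, d_model)
        self.text_embed = nn.Embedding(text_vocab, d_model)
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=8,
            num_encoder_layers=6, num_decoder_layers=6,
            batch_first=True,
        )
        self.out = nn.Linear(d_model, text_vocab)

    def forward(self, code_ids, summary_ids):
        # Encoder input: embeddings of the source-code tokens
        # (positional encodings omitted here for brevity).
        src = self.code_embed(code_ids)
        # Decoder input: the summary so far, with a causal mask so each
        # position only attends to earlier tokens.
        tgt = self.text_embed(summary_ids)
        causal = self.transformer.generate_square_subsequent_mask(summary_ids.size(1))
        dec = self.transformer(src, tgt, tgt_mask=causal)
        return self.out(dec)  # per-token logits over the summary vocabulary

# Teacher-forced training step on dummy data:
model = CodeSummarizer()
code = torch.randint(0, 50000, (2, 128))  # batch of tokenized code snippets
summ = torch.randint(0, 30000, (2, 20))   # batch of tokenized summaries
logits = model(code, summ[:, :-1])        # predict each next summary token
loss = nn.functional.cross_entropy(
    logits.reshape(-1, logits.size(-1)), summ[:, 1:].reshape(-1))
```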


Cited by 3 publications (2 citation statements; citing years 2022–2023) · References 28 publications

Citation statements:
“…62 Although the objective of CodeBERT does not include generation tasks, it can be modified by introducing a Transformer decoder for code comment generation. 64 Table 9 presents the results of PyScribe and the baselines. We refer to the baseline performances reported by Choi et al. 34 The overall results show that the recent Transformer-based approaches NeuralCodeSum 15 and mAST+GCN 34 outperform the previous RNN-based works.…”
Section: Performance on Other Datasets
Confidence: 99%
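As a rough sketch of the modification this statement mentions (the encoder-only CodeBERT paired with a Transformer decoder for comment generation), one common recipe uses Hugging Face's EncoderDecoderModel. The checkpoint name is real, but the wiring below is illustrative rather than the cited papers' exact setup, and the decoder's cross-attention weights start untrained.

```python
from transformers import EncoderDecoderModel, RobertaTokenizerFast

# CodeBERT is encoder-only; wrap it in an encoder-decoder so it can generate text.
tok = RobertaTokenizerFast.from_pretrained("microsoft/codebert-base")
model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "microsoft/codebert-base",  # pretrained encoder
    "microsoft/codebert-base",  # reused as decoder; cross-attention is newly initialized
)
model.config.decoder_start_token_id = tok.cls_token_id
model.config.pad_token_id = tok.pad_token_id

code = "def add(a, b):\n    return a + b"
inputs = tok(code, return_tensors="pt")
ids = model.generate(inputs.input_ids, max_length=20)
# Untrained decoder: output is gibberish until fine-tuned on code-comment pairs.
print(tok.decode(ids[0], skip_special_tokens=True))
```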
“…Alon et al. [15] extracted multiple root-to-terminal tree paths from ASTs as code representations, supplying structural information for summary generation. Because Graph Neural Networks (GNNs) achieve strong performance on graph data (e.g., protein prediction, knowledge graphs), researchers [17, 18] have also converted ASTs into graphs by adding extra edge relations and applied GNNs to capture the structural information, with good results.…”
Section: Introduction
Confidence: 99%
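The first idea in this statement, root-to-terminal AST paths as code representations, can be sketched with Python's built-in ast module. This is a simplified stand-in for the path extraction of Alon et al., not the cited implementation; treating any childless node as a terminal is an assumption made here for brevity.

```python
import ast

def ast_paths(source: str):
    """Collect root-to-terminal node-type paths from a Python AST.
    A terminal is any node with no AST children (simplification)."""
    tree = ast.parse(source)
    paths = []

    def walk(node, prefix):
        prefix = prefix + [type(node).__name__]
        children = list(ast.iter_child_nodes(node))
        if not children:            # terminal node: emit the full path
            paths.append(prefix)
        for child in children:
            walk(child, prefix)

    walk(tree, [])
    return paths

for p in ast_paths("def add(a, b):\n    return a + b"):
    print(" -> ".join(p))
# e.g. Module -> FunctionDef -> arguments -> arg
#      Module -> FunctionDef -> Return -> BinOp -> Name -> Load
```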