Proceedings of the 28th International Conference on Program Comprehension 2020
DOI: 10.1145/3387904.3389268
Improved Code Summarization via a Graph Neural Network

Cited by 242 publications (163 citation statements)
References 31 publications
“…Alon et al. (2019) represented source code based on AST paths between pairs of tokens. LeClair et al. (2020) proposed a model that encoded the AST of source code using graph neural networks. These approaches used ASTs to capture structural features, but gave less consideration to the sequence characteristics of code in a programming language.…”
Section: Related Work (mentioning)
confidence: 99%
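
The GNN-over-AST encoding this citation describes can be illustrated with a minimal sketch. The layer below performs one round of mean-aggregation message passing over AST edges; the PyTorch formulation, dimensions, and toy tree are illustrative assumptions, not LeClair et al.'s actual model.

```python
# A minimal sketch of encoding an AST with a GNN, assuming PyTorch.
# NOT LeClair et al.'s exact architecture; the mean-aggregation layer,
# dimensions, and toy tree are illustrative only.
import torch
import torch.nn as nn

class ASTGNNLayer(nn.Module):
    """One round of message passing over AST edges."""
    def __init__(self, dim):
        super().__init__()
        self.w_self = nn.Linear(dim, dim)  # transform a node's own state
        self.w_nbr = nn.Linear(dim, dim)   # transform aggregated neighbors

    def forward(self, h, edges):
        # h: (num_nodes, dim) node embeddings
        # edges: (2, num_edges) source -> target index pairs
        src, dst = edges
        agg = torch.zeros_like(h)
        agg.index_add_(0, dst, h[src])             # sum incoming messages
        deg = torch.zeros(h.size(0), 1)
        deg.index_add_(0, dst, torch.ones(src.size(0), 1))
        agg = agg / deg.clamp(min=1)               # mean over neighbors
        return torch.relu(self.w_self(h) + self.w_nbr(agg))

# Toy AST: node 0 -> children 1, 2; node 1 -> child 3.
# Edges are listed in both directions so information flows up and down.
edges = torch.tensor([[0, 0, 1, 1, 2, 3], [1, 2, 3, 0, 0, 1]])
h = torch.randn(4, 16)
layer = ASTGNNLayer(16)
h = layer(h, edges)  # stack layers to widen each node's receptive field
print(h.shape)       # torch.Size([4, 16])
```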
“…On the other hand, Shido et al. (2019), Harer et al. (2019), LeClair et al. (2020), and Scarselli et al. (2008) proposed tree-based models to capture the features of the source code. They used the structural information from parse trees but hardly considered the sequence information of code tokens.…”
Section: Introduction (mentioning)
confidence: 99%
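
The tree-based encoders cited here replace sequential recurrence with recurrence over the parse tree. Below is a hedged sketch of a child-sum Tree-LSTM cell in the style of Tai et al. (2015), the family Shido et al. (2019) extend for ASTs; the shapes and this PyTorch formulation are assumptions for illustration.

```python
# Hedged sketch of a child-sum Tree-LSTM cell (Tai et al., 2015).
# Shapes and this PyTorch formulation are illustrative assumptions.
import torch
import torch.nn as nn

class ChildSumTreeLSTMCell(nn.Module):
    def __init__(self, in_dim, mem_dim):
        super().__init__()
        self.iou_x = nn.Linear(in_dim, 3 * mem_dim)   # input/output/update gates from x
        self.iou_h = nn.Linear(mem_dim, 3 * mem_dim, bias=False)
        self.f_x = nn.Linear(in_dim, mem_dim)         # one forget gate per child
        self.f_h = nn.Linear(mem_dim, mem_dim, bias=False)

    def forward(self, x, child_h, child_c):
        # x: (in_dim,) node embedding; child_h, child_c: (num_children, mem_dim)
        h_sum = child_h.sum(dim=0)                    # child-sum aggregation
        i, o, u = torch.chunk(self.iou_x(x) + self.iou_h(h_sum), 3)
        i, o, u = torch.sigmoid(i), torch.sigmoid(o), torch.tanh(u)
        f = torch.sigmoid(self.f_x(x).unsqueeze(0) + self.f_h(child_h))
        c = i * u + (f * child_c).sum(dim=0)          # gated child memories
        h = o * torch.tanh(c)
        return h, c

# Leaves pass empty child tensors; a post-order traversal composes the tree.
cell = ChildSumTreeLSTMCell(16, 32)
leaf_h, leaf_c = cell(torch.randn(16), torch.zeros(0, 32), torch.zeros(0, 32))
root_h, _ = cell(torch.randn(16), leaf_h.unsqueeze(0), leaf_c.unsqueeze(0))
```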
“…LeClair et al. (2019) propose using both AST sequences and the source code sequence as multiple inputs to the model. LeClair et al. (2020) propose employing a GNN over AST structures to better extract structural information. Our work, in contrast, explores a broader context, class-level neighboring functions, to introduce richer information for comment generation.…”
Section: Related Work (mentioning)
confidence: 99%
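
The multi-input design this excerpt mentions pairs one encoder per code representation. A hedged sketch follows; the GRU encoders, sizes, and concatenation-based fusion are assumptions for illustration rather than LeClair et al.'s exact architecture, which attends over both encoders' outputs.

```python
# Hedged sketch of a two-encoder model: one GRU over code tokens, one
# over a flattened AST sequence, fused by concatenation. All sizes and
# the fusion choice are illustrative assumptions.
import torch
import torch.nn as nn

class DualEncoder(nn.Module):
    def __init__(self, code_vocab, ast_vocab, dim=128):
        super().__init__()
        self.code_emb = nn.Embedding(code_vocab, dim)
        self.ast_emb = nn.Embedding(ast_vocab, dim)
        self.code_enc = nn.GRU(dim, dim, batch_first=True)
        self.ast_enc = nn.GRU(dim, dim, batch_first=True)
        self.fuse = nn.Linear(2 * dim, dim)

    def forward(self, code_ids, ast_ids):
        # code_ids: (batch, code_len); ast_ids: (batch, ast_len)
        _, h_code = self.code_enc(self.code_emb(code_ids))
        _, h_ast = self.ast_enc(self.ast_emb(ast_ids))
        # A summary decoder would attend over both encoders' outputs and
        # start generation from this fused state.
        return torch.tanh(self.fuse(torch.cat([h_code[-1], h_ast[-1]], dim=-1)))

enc = DualEncoder(code_vocab=1000, ast_vocab=100)
state = enc(torch.randint(0, 1000, (2, 7)), torch.randint(0, 100, (2, 11)))
print(state.shape)  # torch.Size([2, 128])
```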
“…Previous approaches can be divided into two categories. The first is to employ non-sequential encoders (e.g., TBCNN (Mou et al., 2016), Tree-LSTM (Shido et al., 2019), Tree-Transformer (Harer et al., 2019), graph neural networks (Allamanis et al., 2018; Alex et al., 2020; Wang et al., 2021)) to directly model structural inputs. The other is to pre-process structural inputs so that sequential models can be applied to them.…”
Section: Introduction (mentioning)
confidence: 99%
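
The second category, pre-processing structural inputs for sequential models, typically means flattening the AST into a token sequence. Below is a minimal sketch of a bracketed pre-order traversal in the spirit of structure-based traversal; the (label, children) tuple format is a toy assumption, not any particular parser's output.

```python
# Hedged sketch of the "pre-process structural inputs" route: flatten
# an AST into a bracketed pre-order token sequence that any sequential
# model can consume. The tuple format is a toy assumption.
def linearize(node):
    label, children = node
    tokens = ["(", label]
    for child in children:
        tokens.extend(linearize(child))
    tokens.extend([")", label])  # closing bracket tagged with the label
    return tokens

ast = ("MethodDecl", [("Name", []), ("Body", [("Return", [("Literal", [])])])])
print(" ".join(linearize(ast)))
# ( MethodDecl ( Name ) Name ( Body ( Return ( Literal ) Literal ) Return ) Body ) MethodDecl
```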