2021
DOI: 10.48550/arxiv.2104.09340
Preprint
Code Structure Guided Transformer for Source Code Summarization

Cited by 7 publications (9 citation statements) · References 0 publications
“…The last is embedding ASTs using GNNs [33,37], and here we take GCN as an example. As the number of GCN layers increases, each node can aggregate a more extensive range of information from its neighbors, thus focusing on a broader range of local features.…”
Section: Motivating Example (citation type: mentioning; confidence: 99%)
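
The quoted point, that stacking GCN layers widens each node's receptive field over the AST, can be made concrete with a short sketch. The code below is illustrative only, written in plain PyTorch rather than taken from the cited papers [33,37]; the toy path-graph adjacency and the feature sizes are invented for the example. One layer mixes 1-hop neighbors, so k stacked layers let a node aggregate information from up to k hops away.

```python
# Minimal GCN sketch (illustrative; not the cited papers' code).
import torch
import torch.nn as nn


class GCNLayer(nn.Module):
    """One graph-convolution step: H' = ReLU(A_norm @ H @ W)."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, h: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        a_hat = adj + torch.eye(adj.size(0))        # add self-loops
        d_inv_sqrt = torch.diag(a_hat.sum(dim=1).pow(-0.5))
        a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt    # symmetric normalization
        return torch.relu(self.linear(a_norm @ h))  # aggregate 1-hop neighbors


# Toy AST as a path graph 0-1-2-3: after k layers, node 0's representation
# depends on nodes up to k hops away, i.e. a progressively wider local context.
adj = torch.tensor([[0., 1., 0., 0.],
                    [1., 0., 1., 0.],
                    [0., 1., 0., 1.],
                    [0., 0., 1., 0.]])
h = torch.randn(4, 8)                               # 4 nodes, 8-dim features
for layer in [GCNLayer(8, 8) for _ in range(3)]:    # 3 layers -> 3-hop context
    h = layer(h, adj)
print(h.shape)  # torch.Size([4, 8])
```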
“…They used the AST sequences obtained via the SBT method and the embedding representation obtained using GCN to extract the structure of ASTs. • SG-Trans: Gao et al [37] injected local semantic information and global syntactic structure into the self-attentive module in the Transformer as an inductive bias to better capture the hierarchical features of source code.…”
Section: Baselines (citation type: mentioning; confidence: 99%)
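
The SG-Trans idea quoted above, injecting structural information into self-attention as an inductive bias, can be sketched as a pairwise bias added to the attention logits before the softmax. This is a hypothetical illustration, not the actual SG-Trans code from [37]: `struct_bias` stands in for whatever local-semantic or global-syntactic relation matrix the model derives from the source code, and the band mask below is an invented placeholder for such a relation.

```python
import math
import torch
import torch.nn.functional as F


def structure_biased_attention(q, k, v, struct_bias):
    """Scaled dot-product attention with an additive structural bias.

    q, k, v: (seq_len, d); struct_bias: (seq_len, seq_len), e.g. 0 for
    structurally related token pairs and -inf for unrelated ones.
    """
    logits = (q @ k.transpose(-1, -2)) / math.sqrt(q.size(-1))
    logits = logits + struct_bias             # inject the structural prior
    return F.softmax(logits, dim=-1) @ v


seq_len, d = 5, 16
q, k, v = (torch.randn(seq_len, d) for _ in range(3))
# Hypothetical structure: tokens may only attend within distance 1,
# standing in for an AST-derived relation matrix.
pos = torch.arange(seq_len)
band = (pos[:, None] - pos[None, :]).abs() <= 1
struct_bias = torch.where(band, torch.tensor(0.0), torch.tensor(float("-inf")))
out = structure_biased_attention(q, k, v, struct_bias)
print(out.shape)  # torch.Size([5, 16])
```

Because the bias enters the logits additively, a hard -inf acts as a mask while finite values act as a soft preference, which is how a structural prior can steer attention heads without changing the rest of the Transformer.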
“…They replace the original positional encoding in the Transformer with the relative positional encoding, which encodes the pairwise relationship between the tokens in the token sequence. As an improvement to Transformer, Gao et al [18] propose to exploit code structural property and introduce constraints to the multi-head self-attention module. Zhang et al [47] and Wei et al [43] both propose a retrieval-based method, which improves summary generation by leveraging the comments of similar code snippets.…”
Section: Related Work (citation type: mentioning; confidence: 99%)
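
Relative positional encoding, as described in the quote above, replaces the absolute position embedding with a learned embedding of the clipped pairwise distance between tokens, folded into the attention logits (in the style of Shaw et al.). The sketch below is a simplified illustration under that assumption, not the cited implementation; the single-head, unbatched shapes and the clipping distance are chosen for brevity.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F


class RelativeSelfAttention(nn.Module):
    def __init__(self, d: int, max_dist: int = 4):
        super().__init__()
        self.d, self.max_dist = d, max_dist
        self.qkv = nn.Linear(d, 3 * d)
        # one learned embedding per clipped relative distance in [-max_dist, max_dist]
        self.rel_emb = nn.Embedding(2 * max_dist + 1, d)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (n, d)
        n = x.size(0)
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        pos = torch.arange(n)
        rel = (pos[None, :] - pos[:, None]).clamp(-self.max_dist, self.max_dist)
        r = self.rel_emb(rel + self.max_dist)             # (n, n, d)
        # content-content term plus content-position term
        logits = (q @ k.T + torch.einsum("id,ijd->ij", q, r)) / math.sqrt(self.d)
        return F.softmax(logits, dim=-1) @ v


attn = RelativeSelfAttention(d=16)
print(attn(torch.randn(10, 16)).shape)  # torch.Size([10, 16])
```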
“…Comment synthesis is now an active research area, including many projects such as CodeNN [30], DeepCom [26], Astattgru [40], C BERT [18], Rencos [74], SecNN [42], PLBART [1], CoTexT [54], ProphetNet-X [55], NCS [2], Code2seq [7], Re²Com [71], and many more [19,24,25,27,28,38,39,41,49,50,66,67,69,70,72,73]. All these approaches rely on datasets of aligned code-comment pairs.…”
Section: Introduction (citation type: mentioning; confidence: 99%)