2021
DOI: 10.1016/j.jss.2021.111036

SeCNN: A semantic CNN parser for code comment generation

Cited by 34 publications (8 citation statements)
References 19 publications
“…Haiduc et al. [18] used only lexical information, in the form of code tokens, for comment generation. Hybrid-DeepCom [11], ComFormer [7], SeCNN [24], and SeTransformer [4] use source code structure information as well as the source code itself in comment generation; these methods combine lexical and syntactic information.…”
Section: A. Source Code Information, 1) Code Sequence
confidence: 99%
“…We use three metrics, including BLEU and METEOR, which are popular performance metrics in NMT. These measures have also been widely used in previous code comment generation studies [4], [6], [7], [11], [24], [34].…”
Section: Construct Threats
confidence: 99%
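BLEU, mentioned in the excerpt above, scores the n-gram overlap between a generated comment and a reference comment. The following is a minimal sketch of sentence-level BLEU, not the exact variant used in the cited studies; the add-one smoothing and the maximum n-gram order of 4 are assumptions for illustration:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def sentence_bleu(reference, hypothesis, max_n=4):
    """Sentence-level BLEU with add-one smoothed n-gram precisions."""
    if not hypothesis:
        return 0.0
    log_prec_sum = 0.0
    for n in range(1, max_n + 1):
        hyp_counts = Counter(ngrams(hypothesis, n))
        ref_counts = Counter(ngrams(reference, n))
        # clip each n-gram's count by its count in the reference
        clipped = sum(min(c, ref_counts[g]) for g, c in hyp_counts.items())
        total = max(sum(hyp_counts.values()), 1)
        # add-one smoothing so one missing higher-order match does not zero the score
        log_prec_sum += math.log((clipped + 1) / (total + 1))
    # brevity penalty discourages overly short candidates
    bp = min(1.0, math.exp(1 - len(reference) / len(hypothesis)))
    return bp * math.exp(log_prec_sum / max_n)
```

For example, an exact match scores 1.0, while a shortened but otherwise correct candidate is discounted by the brevity penalty. METEOR additionally rewards stem and synonym matches, which is why these studies typically report both.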
“…Stapleton et al. [71] showed that the performance measures commonly used for the code summarization task (such as BLEU, ROUGE, and METEOR) cannot necessarily capture how machine-generated code summaries actually affect developers' program comprehension. Therefore, we conduct the human study by following the methodology used in previous studies [72], [73]. Since the Dual Model achieves the best performance among all the baselines, we focus on analyzing the differences between comments generated by DualSC and the baseline Dual Model.…”
Section: B. Human Study on the Shellcode Summarization Task
confidence: 99%
“…SeCNN, proposed by Li et al. [26], uses two CNNs to encode semantic information of the source code. One CNN extracts lexical information from the code tokens, and the other extracts syntactic information from the AST.…”
Section: Baselines
confidence: 99%
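The two-encoder design described in the excerpt above can be sketched as follows. This is a minimal NumPy illustration of the general idea, not SeCNN's actual implementation: the embedding sizes, filter counts, random weights, and concatenation-based fusion are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d_encode(ids, vocab_size, emb_dim=16, kernel=3, filters=8):
    """Embed an id sequence, apply a 1-D convolution with ReLU, max-pool over time."""
    emb_table = rng.normal(size=(vocab_size, emb_dim))
    weights = rng.normal(size=(kernel, emb_dim, filters))
    x = emb_table[ids]                            # (seq_len, emb_dim)
    n_windows = len(ids) - kernel + 1
    feats = np.stack([
        np.maximum(0.0, np.einsum("ke,kef->f", x[i:i + kernel], weights))
        for i in range(n_windows)
    ])                                            # (n_windows, filters)
    return feats.max(axis=0)                      # max-pool -> (filters,)

# hypothetical inputs: code-token ids and AST node-type ids
token_ids = np.array([4, 9, 2, 7, 1, 3])
ast_ids = np.array([1, 5, 5, 2, 8, 3, 3])

lexical_vec = conv1d_encode(token_ids, vocab_size=50)   # lexical encoder
syntactic_vec = conv1d_encode(ast_ids, vocab_size=20)   # syntactic encoder
code_vec = np.concatenate([lexical_vec, syntactic_vec])  # fused code representation
```

The key design point is that the two encoders operate on different views of the same method (its token stream and its AST), and their pooled features are fused into one vector that a decoder can attend over when generating the comment.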