Self-Attention Networks for Code Search
2021 · DOI: 10.1016/j.infsof.2021.106542



Cited by 39 publications (37 citation statements). References 32 publications.
“…Even though the latest DEEPCS variants, such as CARLCS-CNN [39] and SAN-CS [6], claim better performance, they bring additional restrictions on compatibility with our experiments. Both CARLCS-CNN [39] and SAN-CS [6] introduce the co-attention mechanism to refine the code and query representations. Rather than generating only independent vector representations for the code snippet and the query, they compute a joint attention representation, aiming to capture not only the semantic information but also the semantic relation between the two parts.…”
Section: Related Work
confidence: 92%
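The joint attention described in this statement can be made concrete with a short sketch. Below is a minimal co-attention sketch in PyTorch, assuming the code snippet and query have already been embedded as token matrices; the function name, the max-pooled affinity scheme, and the dimensions are illustrative assumptions, not the exact CARLCS-CNN or SAN-CS design.

```python
import torch
import torch.nn.functional as F

def co_attention(code_emb, query_emb):
    """Jointly attend over code and query token embeddings.

    code_emb:  (n_code, d)  token embeddings of the code snippet
    query_emb: (n_query, d) token embeddings of the query
    Returns one pooled vector per side, each informed by the other.
    """
    # Affinity matrix: similarity of every code token to every query token.
    affinity = code_emb @ query_emb.T                          # (n_code, n_query)

    # Max-pool the affinities, then softmax into attention weights,
    # so each side is weighted by its best match on the other side.
    code_attn = F.softmax(affinity.max(dim=1).values, dim=0)   # (n_code,)
    query_attn = F.softmax(affinity.max(dim=0).values, dim=0)  # (n_query,)

    # Attention-weighted pooling yields a single vector per side.
    return code_attn @ code_emb, query_attn @ query_emb        # (d,), (d,)

# Usage: rank candidate snippets by cosine similarity of the attended vectors.
code_vec, query_vec = co_attention(torch.randn(50, 128), torch.randn(8, 128))
score = F.cosine_similarity(code_vec, query_vec, dim=0)
```

This also illustrates the compatibility restriction the citing authors mention: because each side's pooled vector depends on the other side, code representations cannot be precomputed and indexed independently of the query.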
“…SuccessRate@k is widely used by many previous studies (Haldar et al., 2020; Shuai et al., 2020; Fang et al., 2021; Heyman and Cutsem, 2020). The metric is calculated as follows:…”
Section: Evaluation Metric
confidence: 99%
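The formula itself is truncated in the excerpt. For reference, the standard definition of SuccessRate@k in this line of work (introduced with DeepCS) is:

```latex
\mathrm{SuccessRate@}k \;=\; \frac{1}{|Q|} \sum_{q \in Q} \delta\big(\mathrm{FRank}_q \le k\big)
```

where $Q$ is the set of evaluation queries, $\mathrm{FRank}_q$ is the rank of the first correct result for query $q$, and $\delta(\cdot)$ equals 1 when its condition holds and 0 otherwise.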
“…Aiming to tackle certain issues in code search, CQIL [60] models the semantic correlations between code and query with hybrid representations. Similarly, NJACS [61], CARLCS [62], TabCS [63] and SANCS [64] learn attention-based representations of code and query with the co-attention mechanism.…”
Section: Text-based Code Search
confidence: 99%
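Since the report's subject paper builds on self-attention rather than only co-attention, it may help to contrast the two. Below is a minimal single-head scaled dot-product self-attention sketch in PyTorch; the class name, single head, and dimensions are assumptions for illustration, not the exact SAN-CS architecture. Each sequence (code or query) is encoded independently, which is what allows representations to be contextualized without coupling the two sides.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttention(nn.Module):
    """Single-head scaled dot-product self-attention over one sequence."""

    def __init__(self, d_model: int):
        super().__init__()
        self.w_q = nn.Linear(d_model, d_model)  # query projection
        self.w_k = nn.Linear(d_model, d_model)  # key projection
        self.w_v = nn.Linear(d_model, d_model)  # value projection
        self.scale = math.sqrt(d_model)

    def forward(self, x):
        # x: (seq_len, d_model) token embeddings of code or query
        q, k, v = self.w_q(x), self.w_k(x), self.w_v(x)
        weights = F.softmax(q @ k.T / self.scale, dim=-1)  # (seq_len, seq_len)
        return weights @ v  # contextualized token representations

# Each side is encoded on its own, so code vectors can be precomputed offline.
encoder = SelfAttention(d_model=128)
code_repr = encoder(torch.randn(50, 128))
```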