2021 IEEE International Conference on Software Analysis, Evolution and Reengineering (SANER)
DOI: 10.1109/saner50967.2021.00039
Two-Stage Attention-Based Model for Code Search with Textual and Structural Features

Cited by 36 publications (28 citation statements)
References 45 publications
“…Based on DeepCS, Shuai et al. [25] proposed a co-attentive representation learning model to capture the correlations between query and code. Xu et al. [31] introduced a two-stage attention-based model to learn the semantic correlation effectively and efficiently.…”
Section: Code Search
confidence: 99%
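Both cited approaches hinge on a co-attention step that correlates query tokens with code tokens before pooling each side into a fixed-size vector. A minimal sketch of that shared idea in PyTorch; the function name, max-pooling strategy, and dimensions are illustrative assumptions, not the papers' actual implementations:

```python
import torch
import torch.nn.functional as F

def co_attention(code_emb, query_emb):
    """Co-attention between code and query token embeddings.

    code_emb:  (n_code, d)  -- one embedding per code token
    query_emb: (n_query, d) -- one embedding per query token
    Returns one pooled vector per side, weighted by how strongly
    each token correlates with the other side.
    """
    # Correlation matrix: entry (i, j) scores code token i vs. query token j.
    corr = torch.tanh(code_emb @ query_emb.T)               # (n_code, n_query)

    # Each code token is scored by its best-matching query token,
    # and vice versa (max over the opposite axis), then normalized.
    code_attn = F.softmax(corr.max(dim=1).values, dim=0)    # (n_code,)
    query_attn = F.softmax(corr.max(dim=0).values, dim=0)   # (n_query,)

    # Attention-weighted pooling to fixed-size vectors.
    code_vec = code_attn @ code_emb      # (d,)
    query_vec = query_attn @ query_emb   # (d,)
    return code_vec, query_vec
```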
“…For example, the method CARLCS-CNN [24] introduces a co-attentive mechanism on top of the DeepCS architecture; Yan et al. [29] designed a two-stage attention-based model for their TAB-CS model. Such attention-based models have improved the original DeepCS to better capture the long-range relationships between tokens, thereby achieving performance superior to DeepCS.…”
Section: Background and Related Work 2.1 Neural Code Search
confidence: 99%
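The "long-range relationships between tokens" these models capture come down to dot-product attention: every token scores every other token directly, so distant tokens interact in one step rather than through many recurrent steps. A generic, untrained sketch of that operation (not any specific paper's layer):

```python
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(tokens):
    """tokens: (n, d) embeddings; queries, keys, and values are the
    embeddings themselves here, purely for illustration."""
    d = tokens.size(-1)
    scores = tokens @ tokens.T / d ** 0.5   # (n, n) pairwise scores
    weights = F.softmax(scores, dim=-1)     # each row sums to 1
    return weights @ tokens                 # (n, d) contextualized tokens
```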
“…After deduplication, there are 48K records remaining. The experimental results in Section 5 demonstrate that MM-SCS outperforms four state-of-the-art models, UNIF [7], DeepCS [10], CARLCS-CNN [24], and TAB-CS [29], by 34.2%, 59.3%, 36.8%, and 14.1% in terms of MRR, respectively. In addition, MM-SCS achieves 0.34s/query in the testing process, which is second only to UNIF.…”
Section: Introduction
confidence: 98%
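The MRR figures above refer to Mean Reciprocal Rank, which averages 1/rank of the first correct code snippet over all queries. A toy computation, with made-up ranks:

```python
def mean_reciprocal_rank(ranks):
    """ranks[i] is the 1-based position of the correct code snippet
    in the result list returned for query i."""
    return sum(1.0 / r for r in ranks) / len(ranks)

# Toy example: correct snippet ranked 1st, 3rd, and 2nd for three queries.
print(mean_reciprocal_rank([1, 3, 2]))  # (1 + 1/3 + 1/2) / 3 ≈ 0.611
```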
“…Aiming to tackle specific issues in code search, CQIL [60] models the semantic correlations between code and query with hybrid representations. Similarly, NJACS [61], CARLCS [62], TabCS [63] and SANCS [64] learn attention-based representations of code and query with the co-attention mechanism.…”
Section: Text-based Code Search
confidence: 99%
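Whatever encoder these models use (co-attention, hybrid representations), retrieval itself reduces to embedding query and code into a shared vector space and ranking candidates by similarity. A hedged sketch of that final ranking step, assuming the encoder outputs already exist:

```python
import torch
import torch.nn.functional as F

def rank_snippets(query_vec, snippet_vecs):
    """Rank candidate code snippets by cosine similarity to the query.

    query_vec:    (d,)    query embedding from some trained encoder
    snippet_vecs: (n, d)  precomputed embeddings of n code snippets
    Returns snippet indices, best match first.
    """
    sims = F.cosine_similarity(query_vec.unsqueeze(0), snippet_vecs, dim=-1)
    return torch.argsort(sims, descending=True)
```

Precomputing the snippet embeddings offline is what makes per-query latencies like the 0.34s/query reported above feasible: at search time only the query is encoded, followed by a similarity scan.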