2020
DOI: 10.1109/tse.2018.2876006
Bridging Semantic Gaps between Natural Languages and APIs with Word Embedding

Abstract: Developers increasingly rely on text matching tools to analyze the relation between natural language words and APIs. However, semantic gaps, namely textual mismatches between words and APIs, negatively affect these tools. Previous studies have transformed words or APIs into low-dimensional vectors for matching; however, inaccurate results were obtained due to the failure of modeling words and APIs simultaneously. To resolve this problem, two main challenges are to be addressed: the acquisition of massive words…
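The core idea the abstract describes, embedding both natural-language words and API names in one low-dimensional vector space so that semantically related pairs match even when their text differs, can be sketched as follows. The vectors, API names, and dimensionality below are toy values for illustration only, not the paper's trained model:

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Toy 4-dimensional embeddings; real models learn hundreds of dimensions
# from large corpora such as Q&A posts and API documentation.
word_vecs = {
    "read": [0.9, 0.1, 0.0, 0.2],
    "file": [0.8, 0.2, 0.1, 0.1],
}
api_vecs = {
    "FileReader.read": [0.85, 0.15, 0.05, 0.15],
    "Socket.connect": [0.10, 0.90, 0.30, 0.00],
}

def rank_apis(query_words):
    # Average the query word vectors, then rank APIs by cosine similarity,
    # so lexically different but semantically close pairs still match.
    dims = len(next(iter(word_vecs.values())))
    query = [sum(word_vecs[w][i] for w in query_words) / len(query_words)
             for i in range(dims)]
    return sorted(api_vecs, key=lambda api: cosine(query, api_vecs[api]),
                  reverse=True)

print(rank_apis(["read", "file"]))  # "FileReader.read" ranks above "Socket.connect"
```

Pure text matching would score "read file" against `Socket.connect` and `FileReader.read` only by shared tokens; the embedding view instead ranks by vector proximity, which is what bridges the semantic gap.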

Cited by 33 publications (16 citation statements)
References 48 publications
“…At present, in order to ensure the quality of software, many scholars explore the application of machine learning in this field. [25][26][27][28][29][30] There are also many studies focusing on software defect prediction, [31][32][33][34] which mainly apply machine learning to two parts; one is Within-Project Defect Prediction (WPDP), and the other is Cross-Project Defect Prediction (CPDP).…”
Section: Software Defect Prediction
confidence: 99%
“…To evaluate the effectiveness of data-driven approach in ADM, various evaluation metrics have been widely used in ADM (Gu et al, 2016;Huang et al, 2018;Li, Jiang, et al, 2018;Raghothaman et al, 2016;Rahman et al, 2016;Yuan et al, 2019;Zhang, Niu, Keivanloo, & Zou, 2018).…”
Section: RQ5: Evaluation Metrics Used in ADM
confidence: 99%
“…For example, when solving the mismatched problems, X. Ye, Shen, et al (2016) and Huang et al (2018) utilize MRR and MAP (Mean Average Precision), Rahman et al (2016) use MRR, MAP, Top@k and R@k (Recall at position k), and Li, Jiang, et al (2018) choose to use MRR, MAP, and FP (First Rank). The aforementioned approaches are evaluated by using inconsistent metrics.…”
Section: Evaluation Metrics 4.5.1 | Appropriate Evaluation Metrics
confidence: 99%
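The ranking metrics this statement names (MRR, MAP, and Top@k) are standard and easy to state precisely. A minimal sketch, using made-up queries and relevance sets purely as toy data:

```python
def mrr(rankings, relevant):
    # Mean Reciprocal Rank: average of 1/rank of the first relevant hit per query.
    total = 0.0
    for q, ranking in rankings.items():
        for i, item in enumerate(ranking, start=1):
            if item in relevant[q]:
                total += 1.0 / i
                break
    return total / len(rankings)

def average_precision(ranking, rel):
    # Precision accumulated at each relevant position, normalized by |rel|.
    hits, score = 0, 0.0
    for i, item in enumerate(ranking, start=1):
        if item in rel:
            hits += 1
            score += hits / i
    return score / len(rel) if rel else 0.0

def map_score(rankings, relevant):
    # Mean Average Precision over all queries.
    return sum(average_precision(r, relevant[q])
               for q, r in rankings.items()) / len(rankings)

def top_at_k(rankings, relevant, k):
    # Fraction of queries with at least one relevant item in the top k.
    return sum(1 for q, r in rankings.items()
               if any(x in relevant[q] for x in r[:k])) / len(rankings)

rankings = {"q1": ["a", "b", "c"], "q2": ["x", "y", "z"]}
relevant = {"q1": {"b"}, "q2": {"x", "z"}}
print(mrr(rankings, relevant))          # (1/2 + 1/1) / 2 = 0.75
print(round(map_score(rankings, relevant), 3))
print(top_at_k(rankings, relevant, 1))  # only q2 hits at rank 1 -> 0.5
```

R@k (Recall at position k) differs from Top@k in that it measures the fraction of all relevant items recovered in the top k, rather than whether any was; the statement's point stands either way: comparing approaches evaluated on different subsets of these metrics is unreliable.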
“…Specifically, we used an abstract syntax tree (AST) based model to retrieve the functional semantics of methods. The semantics of bug reports were retrieved by state-of-the-art word embedding models which have been proved to be effective in representing textual semantics in various natural language tasks [3], [6], [7]. Then, we used a deep learning model that leverages two kinds of semantic features of methods and bug reports, to learn unified features from methods and bug reports to automatically locate buggy methods for a given bug report.…”
Section: Introduction
confidence: 99%