2022
DOI: 10.1109/tse.2021.3067156
Evaluating Automatic Program Repair Capabilities to Repair API Misuses

Abstract: API misuses are well-known causes of software crashes and security vulnerabilities. However, their detection and repair are challenging, given that the correct usages of (third-party) APIs may be obscure to the developers of client programs. This paper presents the first empirical study to assess the ability of existing automated bug-repair tools to repair API misuses, a class of bugs previously unexplored. Our study examines and compares 14 Java test-suite-based repair tools (11 proposed before 2018,…

Cited by 22 publications (11 citation statements). References 85 publications.
“…Moreover, most of our evaluation considers only a small set of 37 real API misuses from MUBench. While specifically preprocessed to reduce potential bias (e.g., by removing duplicates), future work may validate our results on larger datasets of API misuses similar to ours, such as the AU500 dataset (Kang and Lo 2021) as well as recently published ones (Nielebock et al 2021; Kechagia et al 2021).…”
Section: Threats To Validity (supporting)
confidence: 62%
“…As shown by previous work, other metrics could further improve the results (Le and Lo 2015; Amann 2018). Additionally, recent research has produced further potential datasets of API misuses (Nielebock et al 2021; Kechagia et al 2021), which provide a more diverse set of validation data.…”
Section: Misuse (mentioning)
confidence: 99%
“…Our work aligns with different comparative studies on techniques in the automated software-engineering domain, such as automated program repair in general [17], automated repair of API misuses [27], static code analysis for detecting security vulnerabilities [22], fault-localization techniques [69], or the performance of API misuse detectors [4]. To the best of our knowledge, no previous work exists that evaluates graph-distance algorithms for API-usage comparison.…”
Section: Studies On Automated Software-Engineering Techniques (mentioning)
confidence: 57%
“…Finally, the MUBench dataset may not be a representative set of API misuses, which could explain their limited applicability to detecting misuses in the AU500 dataset. Thus, other datasets should be researched, such as the ones provided by Kechagia et al [27] or ourselves [46].…”
Section: B External Validity (mentioning)
confidence: 99%
“…We found 3 datasets that met our criteria: 1) DDR by He et al [8], which contains patches from 14 repair systems; He et al [8] classified patches using a technique called RGT, which generates new test cases from ground-truth, human-written oracle patches; 2) one dataset by Liu et al [22], which includes patches from 16 repair systems, whose correctness was manually evaluated using guidelines presented by Liu et al [22]; 3) APIRepBench by Kechagia et al [23], which includes patches from 14 repair tools. The patches were manually assessed.…”
Section: Dataset (mentioning)
confidence: 99%