2021
DOI: 10.48550/arxiv.2111.05711
Preprint

Counterfactual Explanations for Models of Code

Abstract: Machine learning (ML) models play an increasingly prevalent role in many software engineering tasks. However, because most models are now powered by opaque deep neural networks, it can be difficult for developers to understand why the model came to a certain conclusion and how to act upon the model's prediction. Motivated by this problem, this paper explores counterfactual explanations for models of source code. Such counterfactual explanations constitute minimal changes to the source code under which the model…
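As the abstract describes, a counterfactual explanation here is a small change to the input program under which the model flips its prediction. The sketch below illustrates that search idea in its simplest form, assuming a black-box classifier `predict(tokens) -> label` and a greedy token-masking perturbation; the function, the mask token, and the search heuristic are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: greedy token-masking search for a counterfactual,
# assuming a black-box classifier `predict(tokens) -> label`. The paper's
# actual perturbation and search procedure may differ.
from typing import Callable, List, Optional, Tuple


def find_counterfactual(
    tokens: List[str],
    predict: Callable[[List[str]], int],
    mask_token: str = "<mask>",
    max_changes: int = 3,
) -> Optional[Tuple[List[str], List[int]]]:
    """Mask tokens one at a time until the model's prediction flips.

    Returns the perturbed token list and the masked positions, or None if
    no flip is found within `max_changes` edits.
    """
    original_label = predict(tokens)
    current = list(tokens)
    masked: List[int] = []

    for _ in range(max_changes):
        # Try every single additional mask; stop at the first one that flips.
        for i, tok in enumerate(current):
            if tok == mask_token:
                continue
            candidate = current[:i] + [mask_token] + current[i + 1:]
            if predict(candidate) != original_label:
                return candidate, masked + [i]
        # No single extra mask flips the label: commit one mask and retry.
        # (A real search would score candidates, e.g. by model confidence.)
        for i, tok in enumerate(current):
            if tok != mask_token:
                current[i] = mask_token
                masked.append(i)
                break
        else:
            break  # everything is already masked
    return None


# Toy usage: a stand-in "model" that flags any snippet containing `eval`.
tokens = "result = eval ( user_input )".split()
print(find_counterfactual(tokens, lambda ts: int("eval" in ts)))
# -> (['result', '=', '<mask>', '(', 'user_input', ')'], [2])
```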

Cited by 1 publication (1 citation statement)
References 32 publications (45 reference statements)
“…Regarding ML models for program repair, Noller et al. surveyed developers and discovered not only that most current program repair tools do not produce high-quality patches, but also that developers do not seem interested in human-in-the-loop interaction to collaboratively develop better patches [20]. Regarding summarizing code diffs, Cito et al. develop a method for producing counterfactual explanations for models of code [5]. The focus of this work is models for automated code review that predict some metric, such as performance, given a code diff.…”
Section: Explainable Models of Code
confidence: 99%