2022 IEEE/ACM 44th International Conference on Software Engineering: Software Engineering in Practice (ICSE-SEIP) 2022
DOI: 10.1109/icse-seip55303.2022.9794112

Counterfactual Explanations for Models of Code

Abstract: Machine learning (ML) models play an increasingly prevalent role in many software engineering tasks. However, because most models are now powered by opaque deep neural networks, it can be difficult for developers to understand why the model came to a certain conclusion and how to act upon the model's prediction. Motivated by this problem, this paper explores counterfactual explanations for models of source code. Such counterfactual explanations constitute minimal changes to the source code under which the model changes its mind […]
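
To make the abstract's central idea concrete: a counterfactual explanation is a small perturbation of the input program under which the model's prediction flips. The sketch below shows one naive way to search for such a perturbation with a greedy single-token replacement loop. The greedy_counterfactual function, the predict_proba interface, and the <mask> placeholder token are illustrative assumptions, not the paper's algorithm, which generates more natural code perturbations.

from typing import Callable, Dict, List, Optional, Tuple

def greedy_counterfactual(
    tokens: List[str],
    predict_proba: Callable[[List[str]], Dict[str, float]],
    placeholder: str = "<mask>",
    max_changes: int = 5,
) -> Optional[Tuple[List[str], List[int]]]:
    """Greedily replace single tokens until the model's prediction flips.

    Returns (perturbed tokens, changed indices), or None if no counterfactual
    with at most max_changes replacements is found.
    """
    probs = predict_proba(tokens)
    original = max(probs, key=probs.get)
    current, changed = list(tokens), []

    for _ in range(max_changes):
        probs = predict_proba(current)
        if max(probs, key=probs.get) != original:
            return current, changed  # the model changed its mind

        # Try every single-token replacement and keep the one that most
        # reduces the model's confidence in the original label.
        best_i, best_drop = None, 0.0
        for i in range(len(current)):
            if i in changed:
                continue
            candidate = current[:i] + [placeholder] + current[i + 1:]
            drop = probs[original] - predict_proba(candidate)[original]
            if drop > best_drop:
                best_i, best_drop = i, drop

        if best_i is None:
            return None  # no replacement moves the prediction; give up
        current[best_i] = placeholder
        changed.append(best_i)

    final = predict_proba(current)
    return (current, changed) if max(final, key=final.get) != original else None
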

Cited by 8 publications (3 citation statements) | References 33 publications
“…What's worse, since the existing posthoc explanation approaches mainly leverage perturbation-based mechanisms (e.g., LEMNA [30]) to track input features that are highly relevant to the model's prediction, the explanation performance will deteriorate further due to the weak robustness of detection models to random perturbations. On the contrary, counterfactual explanations [18] contain the most crucial information, which constitutes minimal changes to the input under which the model changes its mind. However, just because of this, they may only cover a small subset of the ground truth.…”
Section: Why Not Existing Explainers? (mentioning)
confidence: 99%
“…Besides the attention mechanism, various explainable AI approaches have also been employed to explain NMT models of source code. For example, Cito et al [13] integrated counterfactual explanation techniques for NMT models that predict certain properties of code or code changes. Rabin et al [36] provided a model-agnostic approach to identify critical input features for models of code and demonstrated that the approach enables code simplification in code search and variable misuse debugging.…”
Section: Explainable NMT-based Code Generation (mentioning)
confidence: 99%
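
The model-agnostic input-reduction idea attributed to Rabin et al. above can be sketched as prediction-preserving simplification: drop parts of the program as long as the model's prediction is unchanged, so that only the input the model actually relies on remains. The simplify_program function, its predict interface, and the line-level granularity below are illustrative assumptions, not the cited tool's actual API.

from typing import Callable, List

def simplify_program(lines: List[str], predict: Callable[[str], str]) -> List[str]:
    """Greedily remove lines while the model's prediction stays unchanged."""
    target = predict("\n".join(lines))
    kept = list(lines)
    i = 0
    while i < len(kept):
        candidate = kept[:i] + kept[i + 1:]
        if candidate and predict("\n".join(candidate)) == target:
            kept = candidate  # line i was not needed for the prediction
        else:
            i += 1            # line i is critical; keep it and move on
    return kept
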
“…Meanwhile, the security issues of these models have also become a growing concern. Recent studies [7], [8], [9], [10], [11], [12] reveal that many language models of code [13], [14], [15], [16] (a.k.a., code models) can produce opposite results for two inputs that share the same program semantics, one of which is generated by applying semanticpreserving transformations (e.g., variable renaming) to the other.…”
Section: Introduction (mentioning)
confidence: 99%
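
The semantic-preserving transformations mentioned in the last statement (e.g., variable renaming) can be illustrated with a short sketch: the renamed program behaves identically, yet a brittle code model may label the two versions differently. The RenameVariable and rename_variable helpers below are illustrative examples built on Python's ast module (ast.unparse requires Python 3.9+), not tooling from the cited studies.

import ast

class RenameVariable(ast.NodeTransformer):
    """Rename every occurrence of one local variable; program semantics are unchanged."""

    def __init__(self, old_name: str, new_name: str):
        self.old_name, self.new_name = old_name, new_name

    def visit_Name(self, node: ast.Name) -> ast.Name:
        if node.id == self.old_name:
            node.id = self.new_name
        return node

    def visit_arg(self, node: ast.arg) -> ast.arg:
        if node.arg == self.old_name:
            node.arg = self.new_name
        return node

def rename_variable(source: str, old_name: str, new_name: str) -> str:
    tree = ast.parse(source)
    tree = RenameVariable(old_name, new_name).visit(tree)
    return ast.unparse(tree)

original = "def add(total, x):\n    return total + x\n"
variant = rename_variable(original, "total", "acc")
# A robust code model should assign both versions the same prediction.
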