2020
DOI: 10.48550/arxiv.2011.14934
Preprint
Probing Model Signal-Awareness via Prediction-Preserving Input Minimization

Cited by 2 publications (3 citation statements)
References 0 publications
“…There has also been recent interest in improving interpretability of models used in software engineering [6,8,31,41]. Two of these efforts [31,41] propose to simplify the code while retaining the model prediction.…”
Section: Interpretability of SE Models
Confidence: 99%
“…Interpretability of SE models. There has also been recent interest in improving interpretability of models used in software engineering [6,8,31,41]. Two of these efforts [31,41] propose to simplify the code while retaining the model prediction.…”
Section: Related Work
Confidence: 99%
“…There has also been recent interest in improving interpretability of models used in software engineering [6,8,31,41]. Two of these efforts [31,41] propose to simplify the code while retaining the model prediction. Another effort called AutoFocus [6] aims to rate and visualize the relative importance of different code elements by using a combination of attention layers in the neural network and deleting statements in the program.…”
Section: Related Work
Confidence: 99%
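The citing passages above describe simplifying code while retaining the model's prediction. A minimal sketch of that idea, assuming a greedy line-deletion loop and a toy stand-in `predict` function (both hypothetical, not the paper's actual algorithm or model):

```python
# Hypothetical sketch of prediction-preserving input minimization:
# greedily delete program lines as long as the model's prediction is unchanged.

def minimize(lines, predict):
    """Return a smaller list of lines for which predict() still matches the original."""
    target = predict(lines)
    changed = True
    while changed:
        changed = False
        for i in range(len(lines)):
            candidate = lines[:i] + lines[i + 1:]
            # Keep the deletion only if the (non-empty) candidate preserves the prediction.
            if candidate and predict(candidate) == target:
                lines = candidate
                changed = True
                break
    return lines

# Toy stand-in model: flags code as "vulnerable" if it calls strcpy.
predict = lambda ls: "vulnerable" if any("strcpy" in l for l in ls) else "safe"

program = ["int n = 0;", "strcpy(buf, src);", "return n;"]
print(minimize(program, predict))  # the single strcpy line survives
```

The surviving lines indicate what the model's prediction actually depends on; if they carry no real vulnerability signal, the model is keying on spurious features.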