Proceedings of the 40th International Conference on Software Engineering: New Ideas and Emerging Results 2018
DOI: 10.1145/3183399.3183424
Explainable software analytics

Abstract: Software analytics has been the subject of considerable recent attention but has yet to receive significant industry traction. One of the key reasons is that software practitioners are reluctant to trust predictions produced by the analytics machinery without understanding the rationale for those predictions. While complex models such as deep learning and ensemble methods improve predictive performance, they have limited explainability. In this paper, we argue that making software analytics models explainable t…

Cited by 93 publications (46 citation statements) · References 23 publications
“…It is known that quantitative evaluation of the transparency of algorithms is challenging [30], and we cannot say our work solved the problem completely. However, the deep neural inference process was a black box until now, and the experiments and discussion in this paper show that our work moved it toward transparency.…”
Section: Results
Confidence: 93%
“…Determining which results were best to use, given that different algorithms produced different results, was also considered. In choosing the best algorithm to perform feature selection, the predictive power of the algorithm was weighed against the ease of algorithm explainability [20]. While both are important, the researcher had to choose which would take precedence.…”
Section: Results
Confidence: 99%
“…Interpretability is important for software mining and analysis in general [13]. In other domains, various techniques have been proposed to interpret machine learning results, such as by projecting outputs of CNN models through hidden neurons to input image pixels [14], by quantifying the effects of different compositions of English sentences on NLP models [15], and by perturbing inputs for black-box neural networks [16].…”
Section: Related Work
Confidence: 99%
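The last citation statement mentions perturbation of inputs as one way to interpret a black-box model. A minimal sketch of that idea is shown below; the model, function names, and toy weights here are illustrative assumptions, not taken from the cited papers:

```python
def black_box_predict(x):
    # Stand-in for an opaque model: a fixed linear scorer.
    # (Hypothetical weights chosen only for illustration.)
    weights = [0.7, 0.1, 0.2]
    return sum(w * v for w, v in zip(weights, x))

def perturbation_importance(predict, x, baseline=0.0):
    """Score each feature by how much the prediction changes
    when that feature alone is replaced by a baseline value."""
    base = predict(x)
    scores = []
    for i in range(len(x)):
        perturbed = list(x)
        perturbed[i] = baseline  # knock out one feature at a time
        scores.append(abs(base - predict(perturbed)))
    return scores

scores = perturbation_importance(black_box_predict, [1.0, 1.0, 1.0])
# The feature carrying the largest weight dominates the explanation.
```

The same scheme extends to any opaque predictor: only query access to `predict` is needed, which is what makes perturbation-based explanations attractive for black-box networks.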