2020
DOI: 10.48550/arxiv.2009.05501
Preprint

Towards a More Reliable Interpretation of Machine Learning Outputs for Safety-Critical Systems using Feature Importance Fusion

Abstract: When machine learning supports decision-making in safety-critical systems, it is important to verify and understand the reasons why a particular output is produced. Although feature importance calculation approaches assist in interpretation, there is a lack of consensus regarding how features' importance is quantified, which makes the explanations offered for the outcomes mostly unreliable. A possible solution to address the lack of agreement is to combine the results from multiple feature importance quantifiers…

Cited by 1 publication (6 citation statements)
References 31 publications (33 reference statements)
“…Estimating the importance of features in ML predictive analytics is currently very unreliable. Different ML models, FI techniques, and subsets of data generate different importance coefficients, often with diverse magnitudes, for the same features [4]. These uncertainties in identifying the contribution of features to ML outputs are due to:…”
Section: A Problem Statement
confidence: 99%
“…Zhai and Chen [24] improved ensemble FI by using multiple ML models, with gains in Gini importance defining the final FI. Rengasamy et al. [4] proposed a model-agnostic ensemble FI framework to improve FI quantification using multiple models and multiple FI calculation methods. They studied several crisp fusion metrics such as mean, median, majority vote, rank correlation, combination with majority vote, and the modified Thompson tau test.…”
Section: B Related Work
confidence: 99%
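The fusion idea described in the quoted statement can be illustrated with a short sketch. This is not the authors' implementation: the model choices (random forest and gradient boosting), the two FI methods (impurity-based and permutation importance), and the mean/median fusion shown here are assumptions chosen only to demonstrate combining several importance estimates for the same features.

```python
# Illustrative sketch of ensemble feature importance fusion:
# several models x several FI methods, fused with mean and median.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.inspection import permutation_importance

# Synthetic regression data stands in for the application dataset.
X, y = make_regression(n_samples=300, n_features=5, noise=0.1, random_state=0)

models = [
    RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y),
    GradientBoostingRegressor(random_state=0).fit(X, y),
]

importances = []
for model in models:
    # FI method 1: impurity-based importance built into tree ensembles.
    importances.append(model.feature_importances_)
    # FI method 2: permutation importance computed on the same data.
    perm = permutation_importance(model, X, y, n_repeats=10, random_state=0)
    importances.append(perm.importances_mean)

# Normalise each importance vector so scores from different models and
# FI methods are on a comparable scale before fusing them.
stacked = np.array([v / np.abs(v).sum() for v in importances])

fused_mean = stacked.mean(axis=0)          # mean fusion
fused_median = np.median(stacked, axis=0)  # median fusion
print("mean fusion:  ", np.round(fused_mean, 3))
print("median fusion:", np.round(fused_median, 3))
```

In the quoted framework, alternative fusion metrics such as majority vote or rank correlation would operate on the same stacked matrix of normalised importance vectors in place of the mean or median.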