2019 10th International Conference on Information, Intelligence, Systems and Applications (IISA)
DOI: 10.1109/iisa.2019.8900669
Model-Agnostic Interpretability with Shapley Values

Cited by 56 publications (41 citation statements) | References 11 publications
“…Shapley values provide accurate explanations, because they assign each feature an importance value for a particular prediction [31]. For example, Messalas et al. [32] introduced a new metric, the top similarity method, which measures the similarity of two given explanations produced by Shapley values in order to evaluate model-agnostic interpretability. Additionally, a destructive method has been proposed for optimizing the topology of neural networks based on the Shapley value, a game-theoretic solution concept that estimates the contribution of each network element to the overall performance.…”
Section: Literature Review
confidence: 99%
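The paper's own definition of the top similarity method is not reproduced in this excerpt, so the following Python sketch is only one plausible reading: it compares the sets of top-k features ranked by absolute Shapley value and reports their overlap. The function name, the choice of k, and the Jaccard-style overlap are illustrative assumptions, not the metric from Messalas et al. [32].

import numpy as np

def top_k_similarity(shap_values_a, shap_values_b, k=3):
    # Indices of the k features with the largest absolute Shapley value in each explanation
    top_a = set(np.argsort(np.abs(shap_values_a))[-k:])
    top_b = set(np.argsort(np.abs(shap_values_b))[-k:])
    # Jaccard-style overlap of the two top-k feature sets (1.0 = same top features)
    return len(top_a & top_b) / len(top_a | top_b)

# Two explanations of the same instance, e.g. produced by two different models
expl_model_1 = np.array([0.40, -0.10, 0.05, 0.30, -0.02])
expl_model_2 = np.array([0.35, -0.02, 0.04, 0.28, -0.15])
print(top_k_similarity(expl_model_1, expl_model_2))   # 0.5 here: two of the top-3 features coincide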
“…A SHAP chart [21] is based on Shapley values [20]. In Shapley value theory, a prediction can be explained by assuming that each feature value of the instance is a “player” in a game, where the prediction is the payout.…”
Section: Methods
confidence: 99%
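To make the "players and payout" framing concrete, here is a self-contained Python illustration (not taken from the cited paper) that computes exact Shapley values for a toy game with three feature-players; the feature names and the value function v(S) are invented for the example.

from itertools import combinations
from math import factorial

players = ['age', 'income', 'tenure']   # hypothetical feature names

def v(coalition):
    # Toy payout: the "prediction" when only the features in the coalition are present
    s = set(coalition)
    payout = 0.1                         # base value with no features
    if 'age' in s: payout += 0.2
    if 'income' in s: payout += 0.3
    if 'tenure' in s: payout += 0.1
    if {'age', 'income'} <= s: payout += 0.05   # interaction between two players
    return payout

def shapley(player):
    # Weighted average of the player's marginal contribution over all coalitions of the others
    n = len(players)
    others = [p for p in players if p != player]
    total = 0.0
    for r in range(n):
        for S in combinations(others, r):
            weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
            total += weight * (v(S + (player,)) - v(S))
    return total

for p in players:
    print(p, round(shapley(p), 4))       # the three values sum to v(all players) - v(empty set)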
“…Feature importance in a prediction model can be measured in various ways. Local interpretable model-agnostic explanations (LIME) [19], Shapley values [20], and SHapley Additive exPlanations (SHAP) [21] have been suggested to explain individual predictions. Microsoft researchers published a unified framework for machine-learning interpretability [21].…”
Section: Introduction
confidence: 99%
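As a hedged illustration of how such an individual explanation can be obtained in practice, the sketch below uses the shap Python package's model-agnostic KernelExplainer on a stand-in scikit-learn model; the synthetic data, model choice, and sample sizes are arbitrary assumptions, not the setup of the cited studies.

import shap
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

# Stand-in data and black-box model
X, y = make_regression(n_samples=200, n_features=5, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X, y)

# Model-agnostic kernel SHAP: only the prediction function is required
background = shap.sample(X, 50)                        # background sample for the baseline expectation
explainer = shap.KernelExplainer(model.predict, background)
shap_values = explainer.shap_values(X[:1], nsamples=200)

print(shap_values)   # one importance value per feature for this single prediction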
“…Many standard machine learning algorithms such as logistic regression, decision trees, decision-rule learning, or K-nearest neighbors are examples of more interpretable algorithms, whereas random forest, gradient boosting, support vector machines, neural networks and deep learning fall into the less- or non-interpretable machine learning approaches (i.e., black-box algorithms) (Luo et al., 2019). When a black-box model produces significantly better recommendations than a more interpretable model, the scheduling DSS developer may consider integrating feedback within the system (Kayande et al., 2009), with tools such as partial dependence (PD) plots, individual conditional expectation (ICE), local interpretable model-agnostic explanations (LIME), or kernel Shapley values (SHAP) to help partially understand the scheduling recommendation and to ensure trust and transparency in the decision process of the model (Messalas et al., 2019). On the other hand, if there is no specific design need to rely on the mentioned black-box methods as the main model for the DSS, their capacity to exploit non-linear relationships could still be used to derive richer features, such as the ones mentioned above.…”
Section: Scheduling Models
confidence: 99%
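For completeness, here is a short scikit-learn sketch of one of the tools named above (partial dependence, with ICE available through the same call) applied to a stand-in gradient boosting model; the data and the inspected feature are placeholders rather than anything from the cited work.

from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import partial_dependence

X, y = make_regression(n_samples=300, n_features=4, random_state=1)
black_box = GradientBoostingRegressor(random_state=1).fit(X, y)

# Average effect of feature 0 on the prediction (PD); per-instance curves (ICE)
# can be requested from the same function with kind="individual"
pd_result = partial_dependence(black_box, X, features=[0], kind="average")
print(pd_result["average"].shape)   # (n_outputs, n_grid_points) for the inspected feature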