Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence 2018
DOI: 10.24963/ijcai.2018/46

Explaining Multi-Criteria Decision Aiding Models with an Extended Shapley Value

Abstract: The capability to explain the results of aggregation models to decision makers is key to reinforcing user trust. In practice, Multi-Criteria Decision Aiding models are often organized hierarchically, based on a tree of criteria. We present an explanation approach usable with any hierarchical multi-criteria model, based on an influence index of each attribute on the decision. A set of desirable axioms is defined. We show that there is a unique index fulfilling these axioms. This new index is an extension…
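For context, the classical Shapley value that this index extends assigns to each criterion $i$ in the set $N$, for a coalitional game $v : 2^N \to \mathbb{R}$ with $v(\emptyset) = 0$:

$$\phi_i(v) = \sum_{S \subseteq N \setminus \{i\}} \frac{|S|! \, (n - |S| - 1)!}{n!} \bigl( v(S \cup \{i\}) - v(S) \bigr), \qquad n = |N|.$$

The paper's index extends this definition to tree-structured aggregation models; the axioms and the uniqueness argument are in the full text.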

Cited by 30 publications (16 citation statements, published 2019–2023); references 10 publications. The citation statements below are ordered by relevance.
“…The third approach falls within the theory of causality and explanation in databases (Livshits and Kimelfeld 2020; Livshits, Ilyas, et al 2019; Labreuche and Fossier 2018; Meliou, Roy, and Suciu 2014). It consists of understanding the underlying causes of a particular observation by determining the relative contribution of features in machine-learning predictions (Labreuche and Fossier 2018), the responsibility of tuples to database queries (Bertossi and Geerts 2020; Livshits, Bertossi, et al 2019), or the reliability of data sources (Cholvy, Perrussel, and Thévenin 2017). In (Livshits and Kimelfeld 2020), a Shapley value is used to quantify the extent to which the database violates a set of integrity constraints.…”
Section: Related Work (mentioning)
confidence: 99%
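To make the "relative contribution" reading concrete, here is a minimal Python sketch of the exact Shapley computation by coalition enumeration. The players and the additive value function are hypothetical toys, not taken from any of the cited systems; in the database setting, players would be tuples and the value function a query answer or an inconsistency measure.

import itertools
from math import factorial

def shapley_values(players, value):
    # Exact Shapley values by enumerating all coalitions.
    # players: list of hashable ids (features, tuples, criteria, ...)
    # value:   maps a frozenset of players to a real number,
    #          with value(frozenset()) == 0 by convention.
    n = len(players)
    phi = {p: 0.0 for p in players}
    for p in players:
        others = [q for q in players if q != p]
        for k in range(n):
            for combo in itertools.combinations(others, k):
                S = frozenset(combo)
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[p] += weight * (value(S | {p}) - value(S))
    return phi

# Toy additive game (hypothetical): each player's Shapley value
# equals its own stand-alone contribution.
contrib = {"a": 1.0, "b": 2.0, "c": 3.0}
v = lambda S: sum(contrib[p] for p in S)
print(shapley_values(list(contrib), v))  # {'a': 1.0, 'b': 2.0, 'c': 3.0}

Enumeration is exponential in the number of players; practical methods either restrict the model class or approximate by sampling.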
“…A more general, multi-layered form was introduced in (Ovchinnikov 2002), and later refined in (Grabisch et al 2009; Angilella et al 2013) to support a tree-like hierarchical structure. This hierarchical extension, referred to as HCI, makes it possible to represent a given structure on the criteria, with better interpretability than flat CIs as the number of criteria increases (Labreuche and Fossier 2018). HCIs can also be fitted with marginal attribute rescalings, called marginal utilities, for better expressivity (Bresson et al 2020b).…”
Section: Related Work (mentioning)
confidence: 99%
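A minimal Python sketch of such a hierarchical Choquet integral, under simplifying assumptions: two levels, capacities given explicitly as set functions, and an illustrative criteria tree (the names and numbers are made up, not from the cited papers).

def choquet(x, mu):
    # Discrete Choquet integral of scores x (dict: criterion -> value in [0, 1])
    # w.r.t. capacity mu (dict: frozenset of criteria -> weight; monotone, mu(N) = 1).
    order = sorted(x, key=x.get)          # criteria by increasing score
    total, prev = 0.0, 0.0
    for i, c in enumerate(order):
        upper = frozenset(order[i:])      # criteria scoring at least x[c]
        total += (x[c] - prev) * mu[upper]
        prev = x[c]
    return total

# Two-level HCI: aggregate leaves within each subtree, then at the root.
mu_comfort = {frozenset({"seat"}): 0.4,
              frozenset({"noise"}): 0.4,
              frozenset({"seat", "noise"}): 1.0}
mu_root = {frozenset({"comfort"}): 0.6,
           frozenset({"cost"}): 0.5,
           frozenset({"comfort", "cost"}): 1.0}

leaves = {"seat": 0.8, "noise": 0.5, "price": 0.3}
mid = {"comfort": choquet({c: leaves[c] for c in ("seat", "noise")}, mu_comfort),
       "cost": leaves["price"]}          # single-child node passes its score through
print(choquet(mid, mu_root))             # overall score in [0, 1]

Because mu({seat}) + mu({noise}) < mu({seat, noise}) here, the comfort node rewards alternatives that are good on both children, an interaction a weighted sum cannot express.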
“…A hierarchical version of the Shapley value was also established for analyzing such models (Labreuche and Fossier 2018). Finally, the normalization and monotonicity constraints remain valid, since an HCI is a composition of CIs.…”
Section: HCI Model (mentioning)
confidence: 99%
“…Since SHAP belongs to the same class of methods as LIME [Lundberg and Lee, 2017], the same phenomena could be at play. Labreuche and Fossier [2018] leverage Shapley values to explain the result of aggregation models for Multi-Criteria Decision Aiding. However, their solution requires full knowledge of the models involved, whereas we want to be agnostic about individual models.…”
Section: Related Work (mentioning)
confidence: 99%
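The model-agnostic requirement voiced here is typically met by sampling rather than exact enumeration, since only black-box evaluations of the value function are needed. A sketch of the standard permutation-sampling Shapley estimator (names, parameters, and the toy game are illustrative):

import random

def shapley_monte_carlo(players, value, n_samples=2000, seed=0):
    # Estimate Shapley values by averaging each player's marginal
    # contribution over uniformly random orderings of the players.
    rng = random.Random(seed)
    phi = {p: 0.0 for p in players}
    for _ in range(n_samples):
        order = list(players)
        rng.shuffle(order)
        S, prev = set(), value(frozenset())
        for p in order:
            S.add(p)
            cur = value(frozenset(S))
            phi[p] += cur - prev
            prev = cur
    return {p: total / n_samples for p, total in phi.items()}

contrib = {"a": 1.0, "b": 2.0, "c": 3.0}
v = lambda S: sum(contrib[p] for p in S)
print(shapley_monte_carlo(list(contrib), v))  # ~{'a': 1.0, 'b': 2.0, 'c': 3.0}

The trade-off matches the passage: Labreuche and Fossier (2018) obtain exact, axiomatically characterized indices by assuming full knowledge of the model, while sampling estimators trade exactness for black-box access.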