2021
DOI: 10.1609/aaai.v35i6.16721
If You Like Shapley Then You’ll Love the Core

Abstract: The prevalent approach to problems of credit assignment in machine learning -- such as feature and data valuation -- is to model the problem at hand as a cooperative game and apply the Shapley value. But cooperative game theory offers a rich menu of alternative solution concepts, which famously includes the core and its variants. Our goal is to challenge the machine learning community's current consensus around the Shapley value, and make a case for the core as a viable alternative. To that end, we prove that …
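For context on the two solution concepts the abstract contrasts: the Shapley value pays each player its average marginal contribution over all orderings, while the core asks for an efficient payoff that no coalition could improve on by deviating. A minimal sketch for a toy three-player game (the characteristic function below is illustrative only, not from the paper):

```python
import math
from itertools import permutations, combinations

# Toy 3-player characteristic function (illustrative values, not from the paper).
players = (0, 1, 2)
v = {(): 0, (0,): 1, (1,): 1, (2,): 1,
     (0, 1): 4, (0, 2): 4, (1, 2): 4,
     (0, 1, 2): 6}

def value(coalition):
    return v[tuple(sorted(coalition))]

def shapley():
    """Exact Shapley value: average marginal contribution over all orderings."""
    phi = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = []
        for p in order:
            phi[p] += value(coalition + [p]) - value(coalition)
            coalition.append(p)
    n_fact = math.factorial(len(players))
    return {p: phi[p] / n_fact for p in players}

def in_core(payoff, tol=1e-9):
    """A payoff is in the core if it is efficient and no proper coalition
    can secure more on its own than it is collectively paid."""
    if abs(sum(payoff.values()) - value(players)) > tol:
        return False  # not efficient
    for r in range(1, len(players)):
        for c in combinations(players, r):
            if sum(payoff[p] for p in c) < value(c) - tol:
                return False  # coalition c would deviate
    return True

phi = shapley()
print(phi)          # symmetric game -> equal shares of 2.0 each
print(in_core(phi))  # True: here the Shapley allocation happens to lie in the core
```

In this symmetric toy game the Shapley allocation sits inside the core, but in general the two concepts diverge (and the core may even be empty), which is why the paper treats them as genuine alternatives.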

Cited by 21 publications (3 citation statements)
References 19 publications (15 reference statements)
“…This thus indicates that all the existing data Shapley dependent solutions are computationally intractable for our problem. Although there have been many recent attempts to compute data Shapley values approximately but efficiently, e.g., [Jia et al. 2019, Yan and Procaccia 2021, Jia et al.], they are still far from being practical solutions, since they either explicitly assume that the models have certain properties, which may not hold for general neural nets (e.g., ), or still require repetitive training (e.g., [Yan and Procaccia 2021]). As a consequence, it is still an open challenge to efficiently compute data Shapley values for each sample in large datasets for general neural nets [Sim et al. 2022].…”
Section: Discussion (mentioning, confidence: 99%)
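The "repetitive training" this excerpt refers to arises because every marginal contribution v(S ∪ {i}) − v(S) in a data Shapley estimate requires evaluating a model trained on each subset. A hedged sketch of the standard permutation-sampling estimator, where `utility` is a hypothetical stand-in for the expensive "train on subset, return validation performance" step:

```python
import random

def utility(subset):
    # Hypothetical stand-in: in data valuation this would be
    # "train a model on `subset`, return validation performance" --
    # the expensive step that makes exact data Shapley intractable.
    return len(set(subset)) ** 0.5  # cheap surrogate for illustration

def monte_carlo_shapley(points, utility, n_samples=1000, seed=0):
    """Permutation-sampling estimate of data Shapley values.

    Each sampled permutation adds points one at a time; every marginal
    contribution needs a fresh utility evaluation, i.e. (in the real
    setting) a fresh model training -- hence "repetitive training".
    """
    rng = random.Random(seed)
    est = {p: 0.0 for p in points}
    for _ in range(n_samples):
        order = list(points)
        rng.shuffle(order)
        prefix, prev = [], utility([])
        for p in order:
            prefix.append(p)
            cur = utility(prefix)
            est[p] += cur - prev
            prev = cur
    return {p: s / n_samples for p, s in est.items()}

vals = monte_carlo_shapley(list(range(5)), utility)
print(vals)
```

By construction the estimates always sum to utility(all points) − utility(∅), so efficiency holds exactly even with few samples; the per-point values are what the sampling approximates, at the cost of O(n · n_samples) trainings.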
“…Follow-up work has explored other explanation methods derived from the concept of the Shapley value and cooperative game theory, for example, integrated gradients (Sundararajan et al., 2017), Shapley values for individual neurons (Ghorbani & Zou, 2020), or the least core (Yan & Procaccia, 2021), which is based on a different solution concept. Rozemberczki et al. (2022) provide an in-depth overview of cooperative game theory and numerous applications of the Shapley value in machine learning.…”
Section: SHAP (mentioning, confidence: 99%)
“…We provide a brief overview of cooperative game theory (examined in detail in various books [45,12]) and discuss how solution concepts in cooperative game theory have been applied in Explainable AI [15,35,36,14,56].…”
Section: Preliminaries (mentioning, confidence: 99%)