2019
DOI: 10.1007/978-3-030-13463-1_6
ICIE 1.0: A Novel Tool for Interactive Contextual Interaction Explanations

Cited by 3 publications (5 citation statements)
References 8 publications
“…7 B, it would be more meaningful to average over the upward and downward proceeding ICE curves separately and hence show that the average influence of feature X2 on the target depends on an interacting feature (here: X3). Work by Zon et al. [125] followed a similar idea by proposing an interactive visualization tool to group Shapley values with regard to interacting features that need to be defined by the user…”
Section: Misleading Feature Effects Due To Aggregation
Mentioning confidence: 99%
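The grouping idea quoted above can be sketched briefly. The helper below (grouped_ice) is hypothetical and is not the interactive tool of Zon et al. [125]; it simply averages ICE curves separately for each value of a user-chosen interacting feature, assuming a fitted scikit-learn-style model and a numpy feature matrix. The features X2/X3 in the quote correspond to feature_idx and interacting_idx here.

import numpy as np

def grouped_ice(model, X, feature_idx, interacting_idx, grid):
    """Average ICE curves per value of the interacting feature instead of over all instances."""
    groups = {}
    for value in np.unique(X[:, interacting_idx]):
        subset = X[X[:, interacting_idx] == value]
        curves = []
        for x in subset:
            rows = np.tile(x, (len(grid), 1))
            rows[:, feature_idx] = grid          # vary only the feature of interest
            curves.append(model.predict(rows))   # one ICE curve per instance
        groups[value] = np.mean(curves, axis=0)  # group-wise average curve
    return groups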
“…Contexts are also useful for improving the interpretability of ML models. Zon et al. [63] proposed the interactive contextual interaction explanation (ICIE) framework, which allows users to view explanations of each instance under different contexts. In ICIE, a context can be defined as a set of constraints that describes a subspace of the feature space…”
Section: Context-aware Machine Learning
Mentioning confidence: 99%
“…Different from the contexts defined in the supervised learning studies [35], which define contexts or contextual features based on their impact on the importance of other predictive features, contexts in this research are defined in a purely unsupervised manner. Similar to [63], we define a context T_k as a set of constraints to describe a subspace of the feature space. Based on those constraints, we can decide whether an instance is covered by this context or not…”
Section: Extracting Interpretable Contexts
Mentioning confidence: 99%
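The quoted definition (a context as a set of constraints describing a feature subspace, plus a test for whether an instance is covered) can be illustrated with a minimal sketch. Representing constraints as per-feature intervals is an assumption made here for illustration, not the exact ICIE or [63] implementation, and the feature names are hypothetical.

from typing import Dict, Tuple

Context = Dict[str, Tuple[float, float]]  # feature name -> (lower, upper) bound

def covers(context: Context, instance: Dict[str, float]) -> bool:
    """True if the instance satisfies every constraint, i.e. lies in the context's subspace."""
    return all(lo <= instance[f] <= hi for f, (lo, hi) in context.items())

# Hypothetical context T_k restricting two features; explanations can then be
# viewed or aggregated only over the instances it covers.
T_k = {"age": (40.0, 60.0), "income": (0.0, 50_000.0)}
print(covers(T_k, {"age": 52.0, "income": 32_000.0, "tenure": 3.0}))  # True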
“…To further investigate PSORisk's explanatory capability, we have conducted extra experiments to compare the interpretability of our risk scores and SHAP [10]. For these experiments, we use extreme gradient boosting (XGBoost) as the black-box classification algorithm, and a SHAP explainer is used to show the impact of each feature on the model outputs…”
Section: A. Interpretability Of Obtained Risk Scores
Mentioning confidence: 99%
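The comparison setup described above corresponds to a standard XGBoost-plus-SHAP pipeline. A minimal sketch follows, using a public scikit-learn dataset as a stand-in for the PSORisk data (which is not available here); only the SHAP side of the comparison is shown.

import shap
import xgboost
from sklearn.datasets import load_breast_cancer

# Train the black-box classifier on a placeholder dataset.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = xgboost.XGBClassifier(n_estimators=100).fit(X, y)

# Tree-specific SHAP explainer: per-instance, per-feature impact on the output.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Global summary of feature impact across all instances.
shap.summary_plot(shap_values, X)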
“…These algorithms focus on building ML models that can be easily interpreted by decision makers while achieving strong prediction accuracy. IML is particularly popular in risk assessment problems in a wide range of critical domains such as medicine [8], [9] and finance [10]. In these problems, users are not only interested in calculating the risk but are also required to understand how the risks are determined, in order to provide appropriate explanations and recommendations to the parties involved…”
Section: Introduction
Mentioning confidence: 99%