2019
DOI: 10.1016/j.dss.2019.113141

A scalable decision-tree-based method to explain interactions in dyadic data

Abstract: Gaining relevant insight from a dyadic dataset, which describes interactions between two entities, is an open problem that has sparked the interest of researchers and industry data scientists alike. However, the existing methods have poor explainability, a quality that is becoming essential in certain applications. We describe an explainable and scalable method that, operating on dyadic datasets, obtains an easily interpretable high-level summary of the relationship between entities. To do this, we propose a q…

Cited by 15 publications (8 citation statements: 0 supporting, 8 mentioning, 0 contrasting)
References 24 publications (27 reference statements)
“…Besides only very few examples (e.g., Eiras-Franco et al., 2019; Giboney et al., 2015; Martens & Provost, 2014), since then most of the publications on the explainability of AI systems, or "Explainable Artificial Intelligence" (XAI), have been published outside of the information systems community, mostly in computer science. As one can see, the existing IS literature is very valuable, but with its peak in the 1990s and early 2000s it is also comparatively dated, which motivates our call for more IS research on the explainability of AI.…”
Section: Explainability in Information Systems Research (mentioning)
confidence: 99%
“…The model used to obtain the pre-hoc explanation consists of a grouping of the input patterns according to their numerical variables. Clusters are defined as the leaf nodes of a shallow decision tree [6]. Each pattern is assigned its ADMNC estimator [5].…”
Section: Methods (mentioning)
confidence: 99%
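
A minimal sketch of the grouping step described in that excerpt, assuming scikit-learn; the per-pattern ADMNC estimator [5] is not reproduced here, so a random placeholder target stands in for it:

```python
# Hypothetical sketch (not the authors' code): clusters defined as the leaf
# nodes of a shallow decision tree, assuming scikit-learn. The ADMNC score
# assigned to each pattern is replaced by a random placeholder target.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))    # numerical variables of the input patterns
y = rng.normal(size=500)         # placeholder for each pattern's ADMNC score

# A shallow tree keeps the grouping interpretable: few leaves, short rules.
tree = DecisionTreeRegressor(max_depth=3).fit(X, y)

# Each leaf node of the fitted tree defines one cluster.
cluster_ids = tree.apply(X)      # leaf index = cluster label per pattern
print(len(np.unique(cluster_ids)), "clusters (leaf nodes)")
```

Capping the depth is what makes the grouping a "high-level summary": each cluster is reachable by at most a handful of threshold tests on named variables.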
“…where NV(Cl_i) represents the number of variables needed to describe cluster Cl_i, and λ is a hyperparameter that allows the supervisor to balance the accuracy and interpretability [6] of the whole clustering. This quality measure is always negative, and the goal of the algorithm is to maximize its value toward 0.…”
Section: Methods (mentioning)
confidence: 99%
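
The excerpt opens mid-sentence, so the quality measure itself is not quoted. A hypothetical reconstruction consistent with the description (always negative, maximized toward 0, with λ weighting the NV(Cl_i) description-length terms against an accuracy term E) might read:

```latex
% Hypothetical reconstruction -- the excerpt quotes only the "where" clause,
% not the measure itself. Assumed general form: an accuracy loss E plus a
% description-length penalty, both non-negative, negated so that Q <= 0.
Q = -\left( E + \lambda \sum_{i} NV(Cl_i) \right) \le 0
```

Under this assumed form, pushing Q toward 0 simultaneously reduces the accuracy loss and the number of variables needed to describe each cluster, with λ setting the exchange rate between the two.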
“…Decision tree learning, a commonly used hierarchical ML algorithm for regression and classification tasks (Eiras-Franco et al. 2019; Gupta et al. 2017; Wang et al. 2019d), can be interpreted easily and fulfills all the constraints of an inherently transparent model. It has consistently ranked among the various groups of transparent models.…”
Section: Decision Trees (mentioning)
confidence: 99%
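
A short illustration of that transparency claim, assuming scikit-learn: a depth-limited tree prints as a complete set of human-readable rules.

```python
# Hypothetical illustration, assuming scikit-learn: a depth-limited decision
# tree prints as a complete, human-readable set of if/else rules, which is
# what makes it an inherently transparent model.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(data.data, data.target)

# The entire model fits in a few lines of rules over named features.
print(export_text(clf, feature_names=list(data.feature_names)))
```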