2020
DOI: 10.1609/aaai.v34i04.5780

Explainable Data Decompositions

Abstract: Our goal is to discover the components of a dataset, characterize why we deem these regions to be components, explain how these components differ from each other, and identify what properties they share with each other. As is usual, we consider regions in the data to be components if they show significantly different distributions. What is not usual, however, is that we parameterize these distributions with patterns that are informative for one or more components. We do so because these patterns allow us to c…
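
The truncated abstract states the core criterion only informally: two regions count as separate components when their pattern distributions differ significantly. The following is a minimal, hypothetical sketch of that intuition only; the helper names, the chi-squared test, and the synthetic data are all assumptions for illustration, not the paper's actual decomposition algorithm, which this excerpt does not specify.

```python
import numpy as np
from scipy.stats import chi2_contingency

def pattern_frequencies(region: np.ndarray, patterns: list) -> np.ndarray:
    """Count how many rows of the binary matrix `region` contain each
    pattern, i.e. have a 1 in every item (column) of that pattern."""
    return np.array([region[:, p].all(axis=1).sum() for p in patterns])

def differ_significantly(a: np.ndarray, b: np.ndarray,
                         patterns: list, alpha: float = 0.01) -> bool:
    """Toy stand-in for the significance criterion: chi-squared test on
    the pattern-occurrence contingency table of the two regions."""
    table = np.vstack([pattern_frequencies(a, patterns) + 1,   # +1 smoothing
                       pattern_frequencies(b, patterns) + 1])  # avoids zeros
    _, p_value, _, _ = chi2_contingency(table)
    return p_value < alpha

rng = np.random.default_rng(0)
a = (rng.random((200, 6)) < 0.8).astype(int)   # region with frequent co-occurrences
b = (rng.random((200, 6)) < 0.2).astype(int)   # region with rare co-occurrences
patterns = [[0, 1], [2, 3], [4, 5]]            # shared pattern vocabulary
print(differ_significantly(a, b, patterns))    # True: treat as two components
```

Here the same pattern vocabulary describes both regions, which matches the abstract's point that the patterns are informative for one or more components and can also expose shared properties.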

Cited by 7 publications (2 citation statements). References 20 publications.
“…Hence, the regularizer tilts the optimization to prefer shorter patterns. To further push the weights to a binary solution we employ a W-shaped regularizer (Bai, Wang, and Liberty 2019; Dalleiger and Vreeken 2022), defined as…”
Section: Forward Pass
confidence: 99%
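
The definition itself is cut off in the statement above, so the following is only a sketch of the general shape such a penalty takes, assuming pattern weights that should settle at 0 or 1 (Bai, Wang, and Liberty place the minima at ±1 for weight quantization instead); the function name and the λ scaling are assumptions, not the cited definition.

```python
import torch

def w_shaped_penalty(w: torch.Tensor, lam: float = 0.1) -> torch.Tensor:
    # Hypothetical form only: elementwise min(|w|, |w - 1|) is zero exactly
    # at the binary values 0 and 1, peaks at w = 0.5, and traces a "W" over
    # the surrounding range, so its (sub)gradient pushes each weight toward
    # its nearest binary value. The cited papers' exact definition is
    # truncated in the excerpt above.
    return lam * torch.minimum(w.abs(), (w - 1.0).abs()).sum()

w = torch.tensor([0.05, 0.45, 0.95], requires_grad=True)
w_shaped_penalty(w).backward()
print(w.grad)  # descent moves 0.05 and 0.45 toward 0, and 0.95 toward 1
```

Because the penalty vanishes only at the binary values and is largest at 0.5, each gradient step drifts the weights toward {0, 1}, which is what "push the weights to a binary solution" means in practice.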
“…correlation [309], likelihood comparison [58], or evaluating how adding explanations to the model would improve prediction performance [253]. Le et al. [144] also introduced a metric called "influence", which combines fidelity with information gain and compactness.…”
Section: Compactness
confidence: 99%