2021
DOI: 10.1007/s10994-021-05981-0

Can metafeatures help improve explanations of prediction models when using behavioral and textual data?

Cited by 6 publications (13 citation statements)
References 54 publications
“…Consequently, it is often difficult, if not impossible, to understand how classifications were made when using nonlinear models without relying on interpretation techniques like the ones we use in this study. Even for linear models or decision trees, it can be challenging to gain meaningful insights into how classifications are made, because of the high-dimensional and sparse nature of behavioral data [14][15][16][17]. For example, if we want to predict people's personality based on the Facebook pages they 'like', a user is represented by a binary feature for every page, which results in an enormous feature space.…”
Section: AI as a Black Box
Citation type: mentioning
confidence: 99%
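As a rough illustration of the representation the citing authors describe, the sketch below builds a sparse binary user-by-page matrix in Python; the user names, page names, and the use of SciPy's csr_matrix are illustrative assumptions, not details from the cited work.

```python
# Minimal sketch of the sparse binary representation described above:
# each user is a row, each (hypothetical) Facebook page a column, and a 1
# means the user 'liked' that page. All IDs here are made up.
from scipy.sparse import csr_matrix

pages = ["page_cooking", "page_soccer", "page_jazz", "page_gaming"]  # toy page vocabulary
likes = {                                                            # toy like data
    "user_a": ["page_cooking", "page_jazz"],
    "user_b": ["page_soccer"],
    "user_c": ["page_soccer", "page_gaming", "page_jazz"],
}

col = {p: j for j, p in enumerate(pages)}
rows, cols = [], []
for i, (user, liked) in enumerate(likes.items()):
    for p in liked:
        rows.append(i)
        cols.append(col[p])

# Only nonzero entries are stored; with millions of pages a dense matrix
# would be enormous, while the sparse one stays proportional to the likes.
X = csr_matrix(([1] * len(rows), (rows, cols)), shape=(len(likes), len(pages)))
print(X.toarray())
```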
“…We use rule extraction as a global method to gain insight into the classification models. Rule extraction has been proposed in the literature to generate explanations by distilling a comprehensible set of rules (hereafter 'explanation rules') from a complex classification model C_M [15,[37][38][39]]. Rule extraction is based on surrogate modeling, the goal of which is to use an interpretable model to approximate the predictions Ŷ of a more complex model.…”
Section: Rules as Global Explanations
Citation type: mentioning
confidence: 99%
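To make the surrogate-modeling idea in this statement concrete, here is a minimal sketch: an interpretable decision tree is fit on the predictions of a complex model, and its branches serve as global explanation rules. The synthetic dataset, the random-forest "black box", and the depth limit are assumptions chosen for illustration, not the setup used in the cited paper.

```python
# Rule extraction via surrogate modeling: fit an interpretable tree on the
# *predictions* of a complex model C_M rather than on the true labels.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

# Complex "black-box" classification model C_M (illustrative choice)
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
y_hat = black_box.predict(X)  # predicted labels Ŷ of the complex model

# Interpretable surrogate trained to mimic Ŷ; its branches act as
# global explanation rules for the black box.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y_hat)

print("Fidelity to the black box:", surrogate.score(X, y_hat))
print(export_text(surrogate))
```

Note that the surrogate is usually judged by fidelity (agreement with the black box's predictions) rather than by accuracy on the true labels.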
“…We find that counterfactual explanations, despite often being favored by XAI researchers [35,36,13,40], are in general less popular. Consistent with our predictions, we find that aversion to counterfactual explanations is reduced when these explanations are provided following a negative decision (e.g., a loan application was denied).…”
Section: Introduction
Citation type: mentioning
confidence: 62%