2022
DOI: 10.1007/s10676-022-09623-4

Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system

Abstract: In light of the widespread use of algorithmic (intelligent) systems across numerous domains, there is an increasing awareness of the need to explain their underlying decision-making process and resulting outcomes. Since these systems are often regarded as black boxes, adding explanations to their outcomes may contribute to the perception of their transparency and, as a result, increase users' trust and fairness perception toward the system, regardless of its actual fairness, which can be measu…

Cited by 30 publications (20 citation statements)
References 33 publications (60 reference statements)

“…Another set of studies looked at explanations for a decision as a critical aspect of perceived fairness. Explanations for an algorithmic decision significantly increased respondents' perceptions of fairness in several studies (Binns et al., 2018; Dodge et al., 2019; Shin, 2021; Shulner-Tal et al., 2022). However, the results are very nuanced.…”
Section: Results (mentioning)
Confidence: 98%

“…Moreover, several studies showed that different explanation styles (e.g. case-based, sensitivity-based, demographic-based, input influence-based) affected respondents' perceived fairness differently (Binns et al., 2018; Dodge et al., 2019; Schoeffer et al., 2021; Shulner-Tal et al., 2022).…”
Section: Results (mentioning)
Confidence: 99%

“…Dodge et al. [10] find that people perceive global and local explanations differently, but also conclude that the effect of explanations depends on "the kinds of fairness issues and user profiles." Similarly, Shulner-Tal et al. [34] found that some explanations "are more beneficial than others," but perceptions mainly depend on "the outcome of the system." B: XAI and reliance on AI. Another set of works has examined how explanations may impact people's reliance on AI.…”
Section: Takeaways (mentioning)
Confidence: 97%

“…Prior findings are inconclusive. However, as of today, there is no conclusive empirical evidence showing that explanations facilitate human-AI complementarity. Prior work has found that explanations can influence people's fairness perceptions towards AI models and their predictions in positive or negative ways (e.g., [4, 10, 25, 34]). Other findings suggest that explanations may (e.g., [7, 24]) or may not (e.g., [1, 3, 18, 32]) lead to enhanced human-AI performance.…”
Section: Introduction (mentioning)
Confidence: 99%

“…43 See the experiments on human-in-the-loop evaluations of explanations of different styles here: (Shulner-Tal et al., 2022).…”
Citation type: mentioning
Confidence: 99%