2021
DOI: 10.1111/ijsa.12325
Spare me the details: How the type of information about automated interviews influences applicant reactions

License: This is an open access article under the terms of the Creative Commons Attribution-NonCommercial License, which permits use, distribution, and reproduction in any medium, provided the original work is properly cited and is not used for commercial purposes.



Cited by 25 publications (27 citation statements)
References 51 publications (94 reference statements)
“…Accordingly, such an explanation-based taxonomy could help people from these disciplines to get started with XAI. For instance, utilizing such a taxonomy, psychologists could conduct studies to find out which kinds of explanations are best suited for certain contexts (see, e.g., [42,43,71] for such kinds of studies in hiring scenarios).…”
Section: Taxonomies for Scholars from Outside of Computer Science (mentioning)
confidence: 99%
“…For instance, one can expect this to be the case in the areas of health (Juravle et al, 2020), but also in other settings where it is not the individual but the state who uses AI such as in tax fraud detection (Kieslich et al, 2021) or in the criminal justice system (Waggoner et al, 2019). Similarly, transparency may be valued more where individuals are not the users of a service but the object, as in recruiting decisions (Langer et al, 2021). The value of transparency may thus well rise proportional to the severity of the impacts that individual AI system decisions potentially have.…”
Section: Discussion (mentioning)
confidence: 99%
“…In fact, both quantity and quality of explanations matter: Kulesza et al [31] explored the effects of soundness and completeness of explanations on end users' mental models and suggest, among others, that oversimplification is problematic. Recent findings from Langer et al [34], on the other hand, suggest that in certain cases it might make sense to withhold pieces of information in order to not evoke negative reactions. Either way, even in the presence of explanations, people sometimes rely too heavily on system suggestions [5], a phenomenon sometimes referred to as automation bias [13,19].…”
Section: Key Related Work and Research Gaps (mentioning)
confidence: 99%