2022
DOI: 10.1177/20539517221089305

Social impacts of algorithmic decision-making: A research agenda for the social sciences

Abstract: Academic and public debates are increasingly concerned with the question of whether and how algorithmic decision-making (ADM) may reinforce social inequality. Most previous research on this topic originates from computer science. The social sciences, however, have great potential to contribute to research on the social consequences of ADM. Based on a process model of ADM systems, we demonstrate how the social sciences may advance the literature on the impacts of ADM on social inequality by uncovering and mitigating biases…

Cited by 19 publications (9 citation statements). References 88 publications.
“…Two biases are often considered in practice: a so-called over-reliance bias and an algorithmic aversion bias, i.e., a complete refusal to use algorithms.11,12 The process of building trust in the tool is a gradual one, involving many players. The first stage in this process of building trust has been ensured by numerous scientific publications evaluating CAD.…”
Section: Results (mentioning)
confidence: 99%
“…Thus, the algorithmic fairness discourse may be limited because in many cases, machine learning algorithms utilize the data at the point of creating the algorithm without considering the historical context in which the input data were generated (So et al., 2022). This can lead to machine learning models that "learn" to reinforce disparities that were created by seemingly race-neutral markers as objective truths, thereby legitimizing different treatments (Benjamin, 2019; Browne, 2010; Gerdon et al., 2022). Examples of this process in the domain of housing include the use of seemingly race-neutral variables, particularly risk-based pricing algorithms, such as security deposits in the rental market (Hatch, 2017) and mortgage insurance in mortgage loans (Deng and Gabriel, 2006).…”
Section: Algorithmic Reparation (mentioning)
confidence: 99%
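
The proxy mechanism in the statement above can be made concrete with a short, purely illustrative Python simulation (a minimal sketch, not from any of the cited papers; the data-generating process, variable names, and thresholds are all assumptions). A logistic regression is trained only on a seemingly neutral feature, yet because that feature historically correlates with group membership, the model's decisions reproduce the group disparity:

# Hypothetical sketch: the model never sees the protected attribute,
# but a "neutral" proxy feature encodes it, so predictions differ by group.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Protected group membership (withheld from the model).
group = rng.integers(0, 2, size=n)

# Seemingly neutral proxy (e.g., a required security deposit) that
# historically correlates with group membership -- an assumed process.
proxy = 2.0 * group + rng.normal(0.0, 1.0, size=n)

# Historical outcome labels that encode past disparate treatment.
label = (proxy + rng.normal(0.0, 1.0, size=n) > 1.0).astype(int)

# Train on the proxy alone; group is never an input feature.
X = proxy.reshape(-1, 1)
pred = LogisticRegression().fit(X, label).predict(X)

# Predicted positive rates still diverge sharply across groups.
for g in (0, 1):
    print(f"group {g}: predicted positive rate = {pred[group == g].mean():.2f}")

Under these assumptions, dropping the protected attribute from the feature set does not remove the disparity; that is the sense in which seemingly race-neutral markers can legitimize unequal treatment.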
“…Algorithms can also be sensitive to contextually problematic conceptualizations and depend on interactional settings. This can be highly impactful for the generation and reproduction of social inequalities as “one of the core competencies—and responsibilities—of the social sciences” (Gerdon et al., 2022, p. 2; see also Section 4).…”
Section: Epistemological Consequences of Digitalization (mentioning)
confidence: 99%