2018
DOI: 10.1007/978-3-319-98932-7_11

Addressing Social Bias in Information Retrieval

Abstract: Journalists and researchers alike have claimed that IR systems are socially biased, returning results to users that perpetuate gender and racial stereotypes. In this position paper, I argue that IR researchers, and in particular evaluation communities such as CLEF, can and should address such concerns. Using as a guide the Principles for Algorithmic Transparency and Accountability recently put forward by the Association for Computing Machinery, I provide examples of techniques for examining social biases in IR…

Cited by 5 publications (4 citation statements) | References 9 publications
“…Together with the earlier cases of addressing skewed web search outputs that were identified by the researchers (e.g., racialized gender bias [38]), our observations support the argument of Otterbacher [39] about the importance of designing new approaches for detecting bias in IR systems. In order to be addressed, bias first has to be reliably identified, but so far there are only a few IR studies that investigate the problem in a systematic way.…”
Section: Discussion (supporting)
confidence: 85%
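The statement above stresses that bias must first be reliably identified before it can be addressed. As a minimal illustrative sketch of what such identification can look like (not a method from [38] or [39]), one simple measure compares a demographic group's share of the top-k results against a reference proportion. The group labels, the reference value of 0.5, and the function name skew_at_k below are all hypothetical:

```python
import math
from typing import Sequence

def skew_at_k(groups: Sequence[str], target: str, k: int, reference: float) -> float:
    """Log ratio of a group's share of the top-k results to a reference
    proportion: 0 means the ranking matches the reference, positive means
    over-representation, negative means under-representation."""
    share = sum(1 for g in groups[:k] if g == target) / k
    eps = 1e-6  # smoothing so an absent group does not send log() to -inf
    return math.log((share + eps) / (reference + eps))

# Toy example: made-up demographic labels for the top 10 results of an
# image-search query; reference=0.5 encodes an equal-representation norm.
ranking = ["m", "m", "m", "f", "m", "m", "f", "m", "m", "m"]
print(skew_at_k(ranking, target="f", k=10, reference=0.5))  # about -0.92: under-represented
```

Running a measure like this systematically over a sample of queries is one way an evaluation campaign could turn anecdotal reports of bias into quantifiable evidence.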
“…Therefore, we should bear in mind that the resulting collections, including the one presented here, carry the implicit bias of the social network itself, requiring analysis of demographic profiles and even treatment of their balance to try to minimise that implicit bias. Several authors have recently dealt with this topic from an IR perspective [16,36], with different methods for adjusting results and minimising bias [10,32]. Therefore, we recommend that, both in the use of the collection and of the platform and methodology, these issues be taken into account from the very beginning of the archival project and from a transdisciplinary perspective, an approach naturally taken in Computational Archival Science [26,37].…”
Section: Discussion (mentioning)
confidence: 99%
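On the "adjusting results" side mentioned in the statement above, one common family of techniques re-ranks a relevance-sorted list under group-representation constraints. The sketch below is a generic greedy version over assumed (item_id, group, score) triples; it is illustrative only and not the method of [10] or [32]:

```python
from typing import Dict, List, Tuple

Item = Tuple[str, str, float]  # (item_id, group, relevance_score)

def fair_rerank(items: List[Item], targets: Dict[str, float],
                tolerance: float = 0.1) -> List[Item]:
    """Greedy constrained re-ranking: at each position, take the
    highest-scoring remaining item whose group would not exceed its target
    share (plus a tolerance) in the prefix built so far. Falls back to the
    best remaining item when no group satisfies the cap (e.g., at the very
    first position, where any single item is 100% of the prefix).
    Assumes every group appearing in `items` has an entry in `targets`."""
    pool = sorted(items, key=lambda it: -it[2])
    out: List[Item] = []
    counts = {g: 0 for g in targets}
    while pool:
        chosen = 0  # fallback: best remaining item by relevance
        for i, (_, group, _) in enumerate(pool):
            if (counts[group] + 1) / (len(out) + 1) <= targets[group] + tolerance:
                chosen = i
                break
        item = pool.pop(chosen)
        counts[item[1]] += 1
        out.append(item)
    return out

docs = [("d1", "m", .95), ("d2", "m", .93), ("d3", "m", .90),
        ("d4", "f", .85), ("d5", "f", .80), ("d6", "m", .70)]
print([d for d, _, _ in fair_rerank(docs, targets={"m": 0.5, "f": 0.5})])
# ['d1', 'd4', 'd2', 'd5', 'd3', 'd6'] -- groups interleaved, score order otherwise kept
```

More principled variants trade relevance against exposure explicitly, but even this greedy constraint illustrates why "minimising bias" is an intervention in the ranking itself rather than a post-hoc label.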
“…What else can we know about them? The inclusion of user profile information in the collection will also require work on privacy issues and bias correction of the retrieval results, following innovative methods [10,21,36].…”
Section: Future Steps (mentioning)
confidence: 99%
“…Biases in the information provided by AI algorithms, perpetuating gender and racial stereotypes (Otterbacher, 2018), coupled with humans' tendency to process information through lenses that protect their initial beliefs and biases, favor one-sided thinking, extremism, and fanaticism, often leveraged in racist ways and seldom to indict white supremacy (Cave & Dihal, 2020) and Western imperialism, with deleterious effects on democracy and societal wellbeing. Several ethical issues arising from AI and Big Data have been acknowledged in the literature, including exploitation of behavioural biases, deception, and addiction generation to maximize profit (Costa & Halpern, 2019); manipulation (Helbing et al., 2019); and the spread of misinformation, hate speech, and conspiracy theories (Scheufele & Krause, 2019).…”
Section: Introduction (mentioning)
confidence: 99%