2018
DOI: 10.31235/osf.io/as25q
Preprint

Diagnosing Gender Bias in Image Recognition Systems

Abstract: This is a brief outline of early work in progress. We will update this document after analyzing results from a crowdsourced validation procedure.


Cited by 10 publications (12 citation statements) | References 1 publication
“…For AI systems to be useful to society as a whole, their performance should not depend on the perceived skin tone or gender of the subjects: they should work equally well for all populations. However, the performance of current automatic vision systems has been reported to vary with race and gender, and across different demographic and geographic regions (Buolamwini and Gebru, 2018; De Vries et al., 2019; Schwemmer et al., 2020). As a preliminary study assessing how Flamingo's performance varies between populations, we follow the study proposed in and report how the captioning performance of our model varies on COCO as a function of gender and race.…”
Section: Risks and Mitigation Strategies (mentioning)
Confidence: 95%
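The disaggregated evaluation this excerpt describes amounts to: compute the same captioning metric per image, then aggregate it separately per demographic group and compare the group means. A minimal sketch of that bookkeeping, assuming a hypothetical per-image record schema (Flamingo's actual evaluation pipeline is not described in this excerpt):

```python
from collections import defaultdict
from statistics import mean

def disaggregate_scores(records):
    """Group per-image caption scores by a demographic attribute and
    report the mean score per group plus the max-min gap.

    `records` is an iterable of dicts like
    {"image_id": 1, "score": 0.83, "group": "female"} -- a hypothetical
    schema standing in for COCO images annotated with perceived gender.
    """
    by_group = defaultdict(list)
    for r in records:
        by_group[r["group"]].append(r["score"])
    means = {g: mean(scores) for g, scores in by_group.items()}
    gap = max(means.values()) - min(means.values())
    return means, gap

# Example: a per-image captioning metric (e.g., CIDEr) with
# perceived-gender annotations; values here are illustrative only.
records = [
    {"image_id": 1, "score": 0.83, "group": "female"},
    {"image_id": 2, "score": 0.79, "group": "female"},
    {"image_id": 3, "score": 0.91, "group": "male"},
    {"image_id": 4, "score": 0.88, "group": "male"},
]
means, gap = disaggregate_scores(records)
print(means, f"gap={gap:.3f}")
```

A nonzero gap alone does not establish bias; in practice one would also report group sizes and uncertainty, but the grouping step above is the core of the comparison.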
“…The innovative ways in which the principle of non-discrimination becomes relevant in the second scenario are also well known: from the discriminatory algorithms used by Amazon in recruitment (De Cesco 2018; Huffingtonpost 2018), to the discrimination risks tied to the spread of facial recognition mechanisms (Schwemmer et al. 2020; Buolamwini and Gebru 2018) and to the related effects of datafication and profiling, which cost Timnit Gebru and Margaret Mitchell their jobs at Google. Then, with regard to the third scenario, that of the platform economy (Alessi 2019), there is the now well-known case of the discrimination carried out by Deliveroo's Frank algorithm (Perulli 2021b), as well as the potential discrimination arising from user-side reputational mechanisms, which leave open the possibility of a negative rating of the service rendered based not on the performance itself but on possible user prejudices concerning race, religion, and so on.…”
Section: Una Possibile Trappola Epistemica (unclassified)
“…There also exist many research studies exploring gender bias in different types of images. By detecting gender labels in the photographs of U.S. members of Congress and the images they tweeted, Schwemmer et al. (2020) concluded that Google Cloud Vision (GCV) could produce correct and biased labels at the same time, because only a subset of the many possible true labels was selectively reported. Wijnhoven (2021) found a gender bias toward stereotypically female jobs, for women but also for men, when searching for jobs via the Google search engine.…”
Section: Related Work (mentioning)
Confidence: 99%
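The method this excerpt attributes to Schwemmer et al. (2020), requesting image labels from Google Cloud Vision and comparing which labels come back for photos of women versus men, can be sketched as follows. This uses the real `google-cloud-vision` Python client, but the file paths and gender annotations are hypothetical placeholders, and it is a minimal illustration rather than the study's actual pipeline:

```python
from collections import Counter
from google.cloud import vision  # pip install google-cloud-vision

client = vision.ImageAnnotatorClient()  # requires GCP credentials to be configured

def image_labels(path):
    """Return the label descriptions GCV assigns to one local image file."""
    with open(path, "rb") as f:
        image = vision.Image(content=f.read())
    response = client.label_detection(image=image)
    return [label.description for label in response.label_annotations]

# Hypothetical annotated sample: (image path, perceived gender of subject).
photos = [
    ("congress/member_001.jpg", "female"),
    ("congress/member_002.jpg", "male"),
]

# Tally which labels GCV returns for each group; comparing the two label
# distributions is what surfaces selective (biased) labeling, since each
# label can be individually correct while the reported subset differs by group.
label_counts = {"female": Counter(), "male": Counter()}
for path, gender in photos:
    label_counts[gender].update(image_labels(path))

for gender, counts in label_counts.items():
    print(gender, counts.most_common(5))
```

The point of the comparison is the one the excerpt makes: every returned label may be accurate, yet systematic differences in which labels are returned per group still constitute bias.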