2018
DOI: 10.1145/3236671

Avoiding bias in robot speech

Cited by 6 publications (5 citation statements)
References 3 publications
“…This may be due to the fact that from its beginning the computer was socially constructed as a male domain [27,28] and that the use, liking and competence of computer technology was associated with being male [29]. The bias can be subtle, with men referred to as "computer programmer", but this is often prefaced with "female" for women, which emphasizes that it is not the norm [30]. Arguably, societal expectations on females as well as gender stereotypes have generally led to different socialization processes where women are expected to take on idealized gendered identities.…”
Section: Theoretical Background
Citation type: mentioning
confidence: 99%
“…Algorithms fatally overrode pilot input in the Boeing 737 Max (Mongan & Kohli, 2020). The Twitter robot Tay quickly turned racist (Hannon, 2018). Self-driving cars have been crashing (Marcus & Davis, 2019, p. 19), and US congress representatives have been identified as criminals (Levin, 2018), to mention just a few examples.…”
Section: Artificial Intelligence
Citation type: mentioning
confidence: 99%
“…Our study is not designed as a reflection on Amazon Echo or voice interfaces; there are emerging critiques of voice assistants, including discussions around their gendered or biased character [5,6], connected with concerns of inbuilt bias in the training data they draw upon. Instead, we are interested in delving deeper into how participants in the study encountered and dealt with interactional trouble.…”
Section: Studying Voice Interfaces In Use
Citation type: mentioning
confidence: 99%