2023
DOI: 10.48550/arxiv.2302.02463
Preprint

Nationality Bias in Text Generation

Cited by 3 publications (4 citation statements)
References 0 publications
“…A UNESCO policy report (West et al, 2019) highlighted the problematic ways that AI perpetuates gender biases, such as using feminized voices in virtual assistants (e.g., Amazon's Alexa, Apple's Siri) that reinforce a submissive or obliging stereotype. Along with racism and sexism, AI may also perpetuate biases based on religion (Abid et al, 2021), nationality, or disability (Venkit et al, 2023).…”
Section: AI Bias, Reliability, and Accuracy (mentioning; confidence: 99%)
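The Venkit et al. (2023) preprint this report tracks studies exactly that nationality dimension. As a minimal sketch of how such bias can be probed, the snippet below generates continuations for demonym prompts and compares their average sentiment; it assumes the Hugging Face transformers and vaderSentiment packages, and the prompt template and demonym list are illustrative rather than the paper's exact setup.

```python
# Minimal sketch: probe a text-generation model for nationality bias by
# generating continuations for demonym prompts and comparing their sentiment.
# Assumes the `transformers` and `vaderSentiment` packages; the prompt
# template and demonym list are illustrative, not the paper's exact setup.
from transformers import pipeline
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

generator = pipeline("text-generation", model="gpt2")
analyzer = SentimentIntensityAnalyzer()

for demonym in ["French", "Mexican", "Nigerian", "American"]:
    outputs = generator(
        f"The {demonym} man worked as",
        max_new_tokens=30,
        num_return_sequences=5,
        do_sample=True,
        pad_token_id=generator.tokenizer.eos_token_id,
    )
    # Average VADER compound sentiment over the sampled continuations;
    # systematic gaps between demonyms suggest nationality bias.
    scores = [analyzer.polarity_scores(o["generated_text"])["compound"]
              for o in outputs]
    print(f"{demonym}: mean sentiment {sum(scores) / len(scores):+.3f}")
```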
“…Studies are starting to investigate how cultural biases are expressed and propagated through LLMs. They highlight that language models tend to mimic the majority viewpoints found on the Internet (Venkit et al, 2023), misrepresenting those of minorities, possibly due to the use of English training data and prompts, which could reduce the variability of model responses (Cao et al, 2023; Naous et al, 2023). Further evidence comes from studies that compare multi- and mono-lingual LLMs.…”
Section: LLMs Are Culturally Biased (mentioning; confidence: 99%)
“…Previous studies show that stereotypes and cultural biases are inherently embedded in language (Marinucci, Mazzuca, & Gangemi, 2023). LLM designers aim for universal solutions but, despite debiasing efforts, current models still reproduce stereotypes (Venkit et al, 2023). A large amount of training data does not guarantee diversity, and training data need to be screened against sets of potentially harmful words.…”
Section: LLMs Do Not Capture Individual Characteristics (mentioning; confidence: 99%)
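As a toy illustration of the word-list screening this quote alludes to, the sketch below drops training documents that contain blocklisted terms; the blocklist entries and corpus are hypothetical placeholders, and production debiasing pipelines are far more sophisticated.

```python
# Toy sketch of word-list screening: drop training documents that contain
# blocklisted terms. Blocklist entries and corpus are hypothetical
# placeholders; real debiasing pipelines are far more sophisticated.
import re

HARMFUL_TERMS = {"harmfulword1", "harmfulword2"}  # placeholder entries

def is_clean(document: str) -> bool:
    """True if the document shares no token with the blocklist."""
    tokens = set(re.findall(r"[a-z']+", document.lower()))
    return tokens.isdisjoint(HARMFUL_TERMS)

corpus = [
    "an unobjectionable training sentence",
    "a sentence containing harmfulword1",
]
filtered = [doc for doc in corpus if is_clean(doc)]
print(filtered)  # ['an unobjectionable training sentence']
```

Word-level filters like this are crude: they miss stereotypes expressed entirely in innocuous words, which is part of why, as the quote notes, models still reproduce stereotypes despite debiasing efforts.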
“…Regarding linguistics in particular, the above-mentioned stochastic biases imply that the lexical subtleties and overall diversity found in language generated by these models will be poor. Concerns have been raised about how human biases present in the input of a model can lead to direct, indirect, or intersectional discrimination (Abid et al 2021; Venkit et al 2023), and engender AI biases after the learning phase (Wirtz et al 2019). Such discrimination is visible in the development of sexist stereotypes when AI is trained on uncontrolled open-access data, where men are described as maestros and women as homemakers (Caliskan et al 2017).…”
Section: Diversity (mentioning; confidence: 99%)
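The maestro/homemaker finding comes from embedding-association tests. Below is a minimal WEAT-style sketch in the spirit of Caliskan et al. (2017), assuming pretrained GloVe vectors loaded through gensim; the attribute word lists are abbreviated for illustration and are not the original test sets.

```python
# Minimal WEAT-style association sketch in the spirit of Caliskan et al.
# (2017): measure how gendered words relate to career- vs. home-related
# terms in pretrained embeddings. Word lists are abbreviated placeholders.
import gensim.downloader as api
import numpy as np

vectors = api.load("glove-wiki-gigaword-100")  # downloads on first use

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def association(word, attrs_a, attrs_b):
    """WEAT s(w, A, B): mean similarity to A minus mean similarity to B."""
    w = vectors[word]
    return (np.mean([cosine(w, vectors[a]) for a in attrs_a])
            - np.mean([cosine(w, vectors[b]) for b in attrs_b]))

career = ["executive", "salary", "office", "business"]
home = ["home", "family", "children", "marriage"]
for word in ["he", "she", "man", "woman"]:
    # Positive values lean toward career terms, negative toward home terms.
    print(f"{word}: career-vs-home association "
          f"{association(word, career, home):+.3f}")
```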
