2022
DOI: 10.3389/frai.2022.826207
Computational Modeling of Stereotype Content in Text

Abstract: Stereotypes are encountered every day, in interpersonal communication as well as in entertainment, news stories, and on social media. In this study, we present a computational method to mine large, naturally occurring datasets of text for sentences that express perceptions of a social group of interest, and then map these sentences to the two-dimensional plane of perceived warmth and competence for comparison and interpretation. This framework is grounded in established social psychological theory, and validat…
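The abstract outlines the core idea: retrieve sentences about a target group and project them onto warmth and competence axes. Below is a minimal illustrative sketch of one way such a projection could be done with a sentence encoder and hand-picked seed lexicons; the model name, seed words, and scoring scheme are assumptions chosen for illustration, not the paper's actual pipeline.

```python
# Illustrative sketch only: embeds sentences and scores them against small
# warmth/competence seed lexicons. Model name and seed words are assumptions,
# not the method described in the paper.
from sentence_transformers import SentenceTransformer
import numpy as np

WARMTH_SEEDS = ["friendly", "warm", "trustworthy", "sincere", "kind"]
COMPETENCE_SEEDS = ["competent", "skilled", "intelligent", "capable", "efficient"]

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed encoder

def axis_vector(seeds):
    """Average the seed-word embeddings into a single unit-length axis direction."""
    vecs = model.encode(seeds, normalize_embeddings=True)
    v = vecs.mean(axis=0)
    return v / np.linalg.norm(v)

warmth_axis = axis_vector(WARMTH_SEEDS)
competence_axis = axis_vector(COMPETENCE_SEEDS)

def warmth_competence(sentence):
    """Project one sentence onto the two-dimensional warmth/competence plane."""
    emb = model.encode([sentence], normalize_embeddings=True)[0]
    return float(emb @ warmth_axis), float(emb @ competence_axis)

print(warmth_competence("Nurses are caring and reliable."))
```

In this sketch each sentence becomes a point (warmth score, competence score), so a collection of sentences about a group can be plotted and compared on the same plane.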

Cited by 12 publications (12 citation statements) | References: 86 publications
“…Fairness is, by far, the most discussed issue in the literature, remaining a paramount concern especially in case of LLMs and text-to-image models [35,45-47]. This is sparked by training data biases propagating into model outputs [48], causing negative effects like stereotyping [33,35], racism [49], sexism [50], ideological leanings [47], or the marginalization of minorities [51]. Next to attesting generative AI a conservative inclination by perpetuating existing societal patterns [52], there is a concern about reinforcing existing biases when training new generative models with synthetic data from previous models [53].…”
Section: Results (mentioning; confidence: 99%)
“…As can be seen from Figure 8, in terms of the use of value, the AI uses mostly low‐value colors in its main color selection; this may be related to Midjourney's tendency to generate darker tones [84], while the human‐designed posters and the AI‐generated posters both use mainly high‐value colors in accordance with the second and third colors. When comparing the average value of the poster colors, the AI‐generated posters are in the medium‐value area with a mean value of 0.57 (SD = 0.29) and the human‐designed posters are also in the medium‐value area with a mean value of 0.62 (SD = 0.28).…”
Section: Results (mentioning; confidence: 99%)
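The color comparison above reports the mean of the HSV value channel per poster. A minimal sketch of that kind of statistic, assuming Pillow and NumPy and a hypothetical file name, could look like the following; it is not the cited study's actual analysis code.

```python
# Sketch only: computes the mean HSV "value" channel of one poster image,
# the kind of statistic the citing study reports. File name is hypothetical.
import numpy as np
from PIL import Image

img = Image.open("poster.png").convert("HSV")
value = np.asarray(img)[..., 2] / 255.0  # V channel rescaled to 0..1
print(f"mean value = {value.mean():.2f}, SD = {value.std():.2f}")
```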
“…These systems were chosen because they are popular implementations of different state-of-the-art text-to-image generative AI techniques using diffusion models [12] and CLIP image embeddings [13] that were developed primarily by researchers and academics. These models have also been studied in academic research for the content nature, and cultural and social biases of their generated outputs [14,15,16,17,18,19,20]. For this experiment, Stable Diffusion's v1-4 and v2-1 pretrained weights will each be used independently in conjunction with Stable Diffusion Web UI.…”
Section: Methodology for Interviews (mentioning; confidence: 99%)
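The quoted methodology loads the v1-4 and v2-1 pretrained weights through Stable Diffusion Web UI. A rough equivalent using the diffusers library is sketched below as an assumption rather than the cited study's setup; the probe prompt and output paths are hypothetical.

```python
# Sketch only: loads the two public checkpoints the citing study names,
# but via the diffusers library instead of Stable Diffusion Web UI.
import torch
from diffusers import StableDiffusionPipeline

CHECKPOINTS = {
    "v1-4": "CompVis/stable-diffusion-v1-4",
    "v2-1": "stabilityai/stable-diffusion-2-1",
}

prompt = "a portrait photo of a software engineer"  # hypothetical probe prompt

for name, repo in CHECKPOINTS.items():
    pipe = StableDiffusionPipeline.from_pretrained(repo, torch_dtype=torch.float16)
    pipe = pipe.to("cuda")
    image = pipe(prompt, num_inference_steps=30).images[0]
    image.save(f"probe_{name}.png")
```

Running each checkpoint independently on the same prompt, as the quote describes, lets the generated outputs be compared for content and for the cultural and social biases the cited works examine.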