Proceedings of the Fourth Workshop on Natural Language Processing and Computational Social Science 2020
DOI: 10.18653/v1/2020.nlpcss-1.23

Analyzing Gender Bias within Narrative Tropes

Abstract: Popular media reflects and reinforces societal biases through the use of tropes, which are narrative elements, such as archetypal characters and plot arcs, that occur frequently across media. In this paper, we specifically investigate gender bias within a large collection of tropes. To enable our study, we crawl tvtropes.org, an online user-created repository that contains 30K tropes associated with 1.9M examples of their occurrences across film, television, and literature. We automatically score the "gendered…
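The abstract is truncated before the scoring details, but one common way to quantify how "gendered" a trope is would be a pronoun-count log-ratio over its crawled example text. The sketch below is a minimal illustration under that assumption; the function name, pronoun lists, and smoothing constant are my own and are not taken from Gala et al. (2020).

```python
import re
from math import log

# Hypothetical pronoun lists; the paper's actual lexicon may differ.
MALE_PRONOUNS = {"he", "him", "his", "himself"}
FEMALE_PRONOUNS = {"she", "her", "hers", "herself"}

def genderedness_score(example_texts, smoothing=1.0):
    """Log-ratio of feminine to masculine pronoun counts across a trope's examples.

    Positive values lean feminine, negative lean masculine, 0 is balanced.
    Illustrative sketch only, not the scoring method from Gala et al. (2020).
    """
    male = female = 0
    for text in example_texts:
        tokens = re.findall(r"[a-z']+", text.lower())
        male += sum(t in MALE_PRONOUNS for t in tokens)
        female += sum(t in FEMALE_PRONOUNS for t in tokens)
    return log((female + smoothing) / (male + smoothing))

# Example: a trope whose crawled examples mention mostly female characters.
print(genderedness_score(["She defies her family to pursue her career.",
                          "He briefly appears, but she drives the plot."]))
```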

Cited by 12 publications (13 citation statements). References 13 publications (13 reference statements).
“…Prompts with the same content can still lead to different narratives that are tied to character gender, suggesting that GPT-3 has internally linked stereotypical contexts to gender. In previous work, GPT-3's predecessor GPT-2 also places women in caregiving roles (Kirk et al., 2021), and character tropes for women emphasize maternalism and appearance (Gala et al., 2020).…”
Section: Results (mentioning)
confidence: 99%
“…Now, we measure how much descriptions of characters correspond to a few established gender stereotypes. Men are often portrayed as strong, intelligent, and natural leaders (Smith et al., 2012; Fast et al., 2016b; Gala et al., 2020).…”
Section: Lexicon-based Stereotypes (mentioning)
confidence: 99%
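The lexicon-based measurement quoted above can be pictured as token overlap between a character description and a small stereotype word list. The sketch below is illustrative only: the lexicon and function name are placeholders, not the resources used in the cited papers.

```python
# Placeholder stereotype lexicon; the cited work uses larger, curated lists.
LEADERSHIP_STEREOTYPE = {"strong", "intelligent", "leader", "decisive", "brilliant"}

def stereotype_score(description, lexicon=LEADERSHIP_STEREOTYPE):
    """Fraction of description tokens that appear in the stereotype lexicon."""
    tokens = description.lower().split()
    if not tokens:
        return 0.0
    return sum(t.strip(".,") in lexicon for t in tokens) / len(tokens)

print(stereotype_score("A strong, intelligent leader who commands every room."))
```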
“…We modify BERT embeddings in this project. Pretrained language models such as BERT are known to produce embeddings that raise ethical concerns such as gender (Gala et al., 2020) and racial biases (Merullo et al., 2019; Bommasani et al., 2020), and can also output other offensive text content. Practitioners may consider employing a post-processing step to filter out potentially offensive content before releasing the final output.…”
Section: Ethical Considerations (mentioning)
confidence: 99%
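The post-processing step suggested above could be as simple as a blocklist filter applied to generated text before release. The following sketch is a hypothetical illustration, not the procedure from the cited work; the blocklist entries are placeholders.

```python
# Minimal post-processing filter: drop generated outputs containing blocked terms.
# Placeholder blocklist; real deployments rely on curated resources.
BLOCKLIST = {"blockedterm1", "blockedterm2"}

def filter_outputs(generated_texts):
    """Return only the outputs whose tokens avoid the blocklist."""
    safe = []
    for text in generated_texts:
        tokens = set(text.lower().split())
        if tokens.isdisjoint(BLOCKLIST):
            safe.append(text)
    return safe

print(filter_outputs(["a harmless sentence", "contains blockedterm1 here"]))
```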
“…Recently, Chiril et al. (2020) developed a dataset for sexism detection in French tweets. While the study of sexism in TV shows has received little attention in natural language processing (Lee et al., 2019b; Gala et al., 2020; Xu et al., 2019), it has received significant attention in the field of gender studies (Sink and Mastro, 2017; Glascock, 2003). In gender studies, Sink and Mastro (2017) conducted a quantitative analysis to document portrayals of women and men on prime-time television, and Glascock (2003) examined the perception of gender roles on network prime-time television programming.…”
Section: Related Work (mentioning)
confidence: 99%