2021
DOI: 10.48550/arxiv.2106.08409
Preprint

Benchmark dataset of memes with text transcriptions for automatic detection of multi-modal misogynistic content

Cited by 4 publications (8 citation statements)
References 9 publications
“…I.B: Misogynistic/Sexist: Misogyny and sexism against women have gained a foothold within social media communities, reinvigorating age-old patriarchal establishments of baseless name-calling, objectifying their appearances, and stereotyping gender roles, which has been explored in the literature [Gasparini et al., 2021]. This is especially fueled by the cryptic use of sexism disguised as humor via memes.…”
Section: Types Of Harmful Memes I: Hate
confidence: 99%
“…Several studies have focused on content and implicit offensive analogies within memes. Some leveraged unimodal [Giri et al., 2021] and multimodal information [Suryawanshi et al., 2020a], investigating simple encoder and early fusion strategies for classifying offensive memes, using techniques such as stacked LSTM/BiLSTM/CNN (text) along with VGG-16 [Simonyan and Zisserman, 2015] to model multimodality, ultimately achieving an F1 score of 0.71 and an accuracy of 0.50. To address contextualization, [Shang et al., 2021b] used analogy-aware multimodality, combining ResNet50 [He et al., 2016] with a GloVe-based LSTM and attentive multimodal analogy alignment via supervised learning, while incorporating contextual discourse, yielding 0.72 and 0.69 accuracy on Reddit- and Gab-based datasets, respectively.…”
Section: II: Offensive Memes
confidence: 99%
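
The early fusion strategy described in the citation above, a stacked (Bi)LSTM text encoder whose output is concatenated with VGG-16 image features before a classification head, can be sketched in a few lines of PyTorch. The layer sizes, the frozen image branch, and the fusion head below are illustrative assumptions, not the exact configurations reported by the cited works.

import torch
import torch.nn as nn
from torchvision import models


class EarlyFusionMemeClassifier(nn.Module):
    def __init__(self, vocab_size, embed_dim=300, lstm_hidden=128, num_classes=2):
        super().__init__()
        # Text branch: embedding followed by a stacked bidirectional LSTM
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, lstm_hidden, num_layers=2,
                            batch_first=True, bidirectional=True)
        # Image branch: pretrained VGG-16 up to its 4096-d penultimate layer, kept frozen
        vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT)
        self.image_encoder = nn.Sequential(vgg.features, vgg.avgpool, nn.Flatten(),
                                           *list(vgg.classifier.children())[:-1])
        for p in self.image_encoder.parameters():
            p.requires_grad = False
        # Early fusion: concatenate text and image representations, then classify
        self.classifier = nn.Sequential(
            nn.Linear(2 * lstm_hidden + 4096, 256),
            nn.ReLU(),
            nn.Dropout(0.3),
            nn.Linear(256, num_classes),
        )

    def forward(self, token_ids, images):
        _, (h_n, _) = self.lstm(self.embedding(token_ids))
        # Final-layer hidden states of both LSTM directions
        text_repr = torch.cat([h_n[-2], h_n[-1]], dim=1)
        image_repr = self.image_encoder(images)
        return self.classifier(torch.cat([text_repr, image_repr], dim=1))


if __name__ == "__main__":
    model = EarlyFusionMemeClassifier(vocab_size=10000)
    logits = model(torch.randint(1, 10000, (4, 32)),   # 4 memes, 32 tokens each
                   torch.randn(4, 3, 224, 224))        # 4 meme images, 224x224 RGB
    print(logits.shape)  # torch.Size([4, 2])

Training such a model with cross-entropy loss on text/image pairs is the straightforward next step; the analogy-aware variant cited above additionally aligns the two modalities with attention rather than relying on plain concatenation.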
“…misogynistic-meme. An expert-labeled open misogyny dataset (Gasparini et al., 2021), it contains 800 memes with manually transcribed text; the misogynisticDE field is used as the label for misogyny.…”
Section: Training Datasets
confidence: 99%
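
As a usage illustration of this dataset description, the sketch below loads the 800 meme transcriptions and uses the misogynisticDE field as a binary label. The file name and the transcription column name are hypothetical placeholders; only the manual transcriptions and the misogynisticDE field come from the citation above, so the names should be adjusted to the actual release of Gasparini et al. (2021).

import pandas as pd


def load_misogynistic_memes(csv_path="misogynistic_memes.csv"):
    # csv_path and the transcription column name are assumed for illustration
    df = pd.read_csv(csv_path)
    texts = df["text_transcription"].astype(str).tolist()  # manually transcribed meme text (column name assumed)
    labels = df["misogynisticDE"].astype(int).tolist()     # expert misogyny label used by the citing work
    return texts, labels


if __name__ == "__main__":
    texts, labels = load_misogynistic_memes()
    print(f"{len(texts)} memes, {sum(labels)} labeled misogynistic")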
“…Multi-modal hate speech detection has received less attention in the research literature than traditional text-only methods. In the past two years, several datasets and challenges have addressed this by proposing detection tasks on meme-based data (Kiela et al., 2020; Gasparini et al., 2021; Miliani et al., 2020). Misogyny detection, as a subgroup of hate speech detection tasks, has also been encountered more frequently in research in recent years.…”
Section: Introduction
confidence: 99%