Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics 2020
DOI: 10.18653/v1/2020.acl-main.373

He said “who’s gonna take care of your children when you are at ACL?”: Reported Sexist Acts are Not Sexist

Abstract: In a context of offensive content mediation on social media now regulated by European laws, it is important not only to be able to automatically detect sexist content but also to identify whether a message with sexist content is really sexist or is a story of sexism experienced by a woman. We propose: (1) a new characterization of sexist content inspired by speech acts theory and discourse analysis studies, (2) the first French dataset annotated for sexism detection, and (3) a set of deep learning experiments tra…
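The task the abstract describes, deciding whether a message containing sexist content is itself sexist or a report of experienced sexism, can be framed as standard sequence classification. Below is a minimal sketch assuming a HuggingFace-style setup with a French pretrained encoder (camembert-base) and an illustrative label set; the paper's actual models and representation combinations are truncated in the abstract above and are not reproduced here.

```python
# Minimal sketch, assuming a HuggingFace-style setup; not the paper's
# actual architecture, whose details are truncated in the abstract.
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL = "camembert-base"  # assumption: a French pretrained encoder
LABELS = ["non-sexist", "reported sexism", "directly sexist"]  # illustrative labels

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL, num_labels=len(LABELS))

enc = tokenizer("qui va garder tes enfants pendant que tu es à ACL ?",
                return_tensors="pt", truncation=True)
pred = model(**enc).logits.argmax(-1).item()  # untrained head: fine-tune on the annotated corpus first
print(LABELS[pred])
```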

Cited by 24 publications (30 citation statements) | References 43 publications
“…quotation signs indicate reported speech; a tweet may report an abusive remark, however, the reported remark itself may not be perceived as abusive (Chiril et al., 2020); the presence of an exclamation sign, a typical means of expressing high emotional intensity, improves performance.…”
Section: Results of Disambiguation (mentioning)
confidence: 99%
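The quoted passage treats quotation marks as a cue for reported speech and exclamation marks as a cue for emotional intensity. Here is a minimal sketch of extracting those two surface cues as classifier features; the function and feature names are illustrative, not from the cited work.

```python
# Sketch of the two surface cues from the quote: quotation marks as a
# proxy for reported speech, exclamation marks for emotional intensity.
# Feature names are illustrative, not from the cited work.
def surface_cues(tweet: str) -> dict:
    quote_chars = ('"', "«", "»", "“", "”", "„")
    return {
        "has_quotation": any(q in tweet for q in quote_chars),
        "exclamation_count": tweet.count("!"),
    }

print(surface_cues("He said “who's gonna take care of your children?”!"))
# -> {'has_quotation': True, 'exclamation_count': 1}
```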
“…Given that social-media platforms commonly used for obtaining natural language data, such as Twitter, increasingly ban abusive language on their sites, the amount of data available in which abusive language is actually used is decreasing. However, there are still many mentions of abuse available, such as reported cases (Chiril et al., 2020), including implicit abuse (51)-(52). For example, we randomly sampled 50 tweets from Twitter containing the abusive clause “homosexuality is unnatural”.…”
Section: Classification Below the Micropost Level (mentioning)
confidence: 99%
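The sampling step mentioned in the quote can be sketched as a simple filter-then-sample over a tweet corpus; the function below and its `corpus` argument are hypothetical, for illustration only.

```python
import random

# Hypothetical sketch of the sampling step in the quote: keep tweets
# containing a target clause, then draw a random sample of 50.
def sample_containing(corpus, clause, k=50, seed=0):
    matches = [t for t in corpus if clause.lower() in t.lower()]
    return random.Random(seed).sample(matches, min(k, len(matches)))
```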
“…There are also a few notable neural network techniques: LSTM (Jha and Mamidi, 2017) or CNN+GRU (Zhang and Luo, 2018). Chiril et al. (2020b) use a BERT model trained on word embeddings, linguistic features and generalization strategies to distinguish reports/denunciations of sexism from real sexist content that is directly addressed to a target.…”
Section: Sexist Hate Speech Detection (mentioning)
confidence: 99%
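One plausible reading of "a BERT model trained on word embeddings, linguistic features and generalization strategies" is a fusion architecture that concatenates the encoder's sentence representation with handcrafted feature vectors before classification. The sketch below shows that pattern under stated assumptions; the paper's exact feature set and fusion method are not specified here.

```python
import torch
import torch.nn as nn
from transformers import AutoModel

# Sketch: concatenate the encoder's [CLS] representation with a small
# vector of handcrafted linguistic features before the softmax layer.
# Model name, feature count and fusion choice are assumptions.
class BertWithFeatures(nn.Module):
    def __init__(self, model_name="bert-base-multilingual-cased",
                 n_extra_features=4, n_labels=2):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        hidden = self.encoder.config.hidden_size
        self.classifier = nn.Linear(hidden + n_extra_features, n_labels)

    def forward(self, input_ids, attention_mask, extra_features):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]                # [CLS] token vector
        fused = torch.cat([cls, extra_features], dim=-1)
        return self.classifier(fused)                    # logits over labels
```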
“…FlauBERT_lex/BERT_lex. In order to force the classifier to learn from generalized concepts rather than words which may be rare in the corpus, we adopt several replacement combinations extending Badjatiya et al. (2017)'s and Chiril et al. (2020b)'s approaches. We used a publicly available French lexicon comprising 130 gender-stereotyped words that we grouped according to our 3 categories (physical characteristics, behavioural characteristics, activities) and replaced these words/expressions, when present in tweets, by their category.…”
Section: Models (mentioning)
confidence: 99%
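A minimal sketch of the replacement strategy described above: occurrences of lexicon words are swapped for their category tag before encoding. The lexicon entries here are invented placeholders, not the actual 130-word French lexicon the authors used.

```python
import re

# Sketch of the lexical generalization step: replace gender-stereotyped
# words with a coarse category tag before classification. The entries
# below are invented placeholders, not the authors' 130-word lexicon.
LEXICON = {
    "jolie": "PHYSICAL",        # physical characteristics
    "hystérique": "BEHAVIOUR",  # behavioural characteristics
    "ménage": "ACTIVITY",       # activities
}

def generalize(tweet: str) -> str:
    for word, category in LEXICON.items():
        tweet = re.sub(rf"\b{re.escape(word)}\b", f"<{category}>",
                       tweet, flags=re.IGNORECASE)
    return tweet

print(generalize("Elle est jolie mais fait tout le ménage"))
# -> "Elle est <PHYSICAL> mais fait tout le <ACTIVITY>"
```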