Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022) 2022
DOI: 10.18653/v1/2022.semeval-1.42

DH-FBK at SemEval-2022 Task 4: Leveraging Annotators’ Disagreement and Multiple Data Views for Patronizing Language Detection

Abstract: The subtle and typically unconscious use of patronizing and condescending language (PCL) in large-audience media outlets undesirably feeds stereotypes and strengthens power-knowledge relationships, perpetuating discrimination towards vulnerable communities. Due to its subjective and subtle nature, PCL detection is an open and challenging problem, both for computational methods and human annotators. In this paper, we describe the systems submitted by the DH-FBK team to SemEval-2022 Task 4, aiming at detecting PCL…

Cited by 3 publications (2 citation statements)
References 15 publications

Citation statements:
“…In order to ultimately enable taking into account as many annotators’ perspectives as possible in religious hate speech detection, we decided to release disaggregated annotations in our dataset. We believe this could enable research directions in modeling different annotators’ perspectives, following successful applications in subjective tasks (Davani, Díaz & Prabhakaran, 2022; Ramponi & Leonardelli, 2022), as well as smoothly provide valuable extensions to our dataset.…”
Section: Discussion (mentioning)
Confidence: 94%
“…Finally, we investigate whether using information about disagreement can improve offensive language detection. We employ a multi-task framework, which has already been used to include disagreement information in classification tasks (Davani et al., 2022; Ramponi and Leonardelli, 2022). In a multi-task setting, the encoder component is unique and shared between both tasks, which are jointly fine-tuned during training.…”
Section: Multi-task Learning with Disagreement (mentioning)
Confidence: 99%
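
The multi-task setup described in the statement above (a single shared encoder with one head per task, jointly fine-tuned) can be illustrated with a minimal sketch. The encoder name, the auxiliary "disagreement" task, the head sizes, and the loss weighting below are illustrative assumptions, not the configuration of the cited systems.

```python
# Minimal sketch of a multi-task model with a shared transformer encoder
# and two task-specific heads (e.g., main label vs. annotator disagreement),
# trained jointly so both losses update the shared encoder.
import torch
import torch.nn as nn
from transformers import AutoModel


class SharedEncoderMultiTask(nn.Module):
    def __init__(self, encoder_name="roberta-base",
                 num_labels_main=2, num_labels_aux=2):
        super().__init__()
        # Shared encoder: receives gradients from both task losses.
        self.encoder = AutoModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.hidden_size
        # Task-specific classification heads.
        self.head_main = nn.Linear(hidden, num_labels_main)
        self.head_aux = nn.Linear(hidden, num_labels_aux)

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]  # [CLS]-style sentence representation
        return self.head_main(cls), self.head_aux(cls)


def joint_loss(logits_main, logits_aux, labels_main, labels_aux, aux_weight=0.5):
    # Joint fine-tuning: the two cross-entropy losses are summed, so the
    # shared encoder is updated by both tasks. aux_weight is an assumed
    # hyperparameter controlling the contribution of the auxiliary task.
    ce = nn.CrossEntropyLoss()
    return ce(logits_main, labels_main) + aux_weight * ce(logits_aux, labels_aux)
```

In this kind of setup, the auxiliary head can be dropped at inference time; it is used during training only to inject the disagreement signal into the shared representation.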