Proceedings of the Third Workshop on Abusive Language Online 2019
DOI: 10.18653/v1/w19-3508

Pay “Attention” to your Context when Classifying Abusive Language

Abstract: The goal of any social media platform is to facilitate healthy and meaningful interactions among its users. More often than not, however, it becomes an avenue for wanton attacks. We propose an experimental study with three aims: 1) to provide a deeper understanding of current datasets that focus on different, sometimes overlapping, types of abusive language (racism, sexism, hate speech, offensive language and personal attacks); 2) to investigate what type of attention mechanism (contextual vs. self-attention) is better suited for abusive language detection using deep learning architectures; …

Cited by 29 publications (28 citation statements)
References 14 publications

“…Centering the analysis of results on the first three baselines and on our classification framework (columns 2-5), the results indicate that the use of AM outperformed the base Bi-GRU network (column 2 vs. columns 3-5) by a margin of at least 1.1%. In addition, the use of CA outperformed the use of SA (column 4 vs. column 3) by a margin of at least 1.2%, which is consistent with the results obtained in (Chakrabarty et al., 2019). Finally, comparing our proposed SCA mechanism against SA and CA (column 5 vs. columns 3 and 4), better results are obtained on all four evaluation datasets, improving on them by a margin of at least 1.1%.…”
Section: Quantitative Effectiveness of the SCA Mechanism (supporting)
confidence: 89%
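
As a rough illustration of the setup this quote compares (a plain Bi-GRU baseline vs. the same encoder with attention-based pooling), the following PyTorch sketch shows a contextual-attention (CA) layer in the style of Yang et al. (2016) replacing simple mean pooling. It is not the cited authors' code; the module names, dimensions, and the mean-pooling baseline are assumptions made for illustration.

# Hypothetical sketch: Bi-GRU classifier with contextual-attention pooling.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContextualAttention(nn.Module):
    """CA: score each Bi-GRU state against a learned context vector,
    then pool the states with the resulting softmax weights."""
    def __init__(self, hidden_dim):
        super().__init__()
        self.proj = nn.Linear(hidden_dim, hidden_dim)
        self.context = nn.Parameter(torch.randn(hidden_dim))

    def forward(self, h, mask=None):              # h: (batch, seq_len, hidden_dim)
        u = torch.tanh(self.proj(h))              # projected states
        scores = u @ self.context                 # (batch, seq_len)
        if mask is not None:
            scores = scores.masked_fill(~mask, float("-inf"))
        alpha = F.softmax(scores, dim=1)          # per-token attention weights
        pooled = (alpha.unsqueeze(-1) * h).sum(dim=1)
        return pooled, alpha

class BiGRUClassifier(nn.Module):
    def __init__(self, vocab_size, emb_dim=100, hidden=64, num_classes=2,
                 use_attention=True):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.gru = nn.GRU(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.att = ContextualAttention(2 * hidden)
        self.out = nn.Linear(2 * hidden, num_classes)
        self.use_attention = use_attention

    def forward(self, token_ids):                 # token_ids: (batch, seq_len)
        h, _ = self.gru(self.emb(token_ids))      # (batch, seq_len, 2*hidden)
        if self.use_attention:
            pooled, _ = self.att(h, mask=token_ids != 0)
        else:                                     # plain Bi-GRU baseline
            pooled = h.mean(dim=1)
        return self.out(pooled)

Setting use_attention=False recovers a plain Bi-GRU baseline analogous to column 2 in the quoted comparison, while the returned weights alpha indicate which tokens drive the abusive/non-abusive decision.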
“…One of the first works introducing attention into the task used the SA mechanism to detect abuse in news portal and Wikipedia comments (Pavlopoulos et al., 2017). Subsequently, (Chakrabarty et al., 2019) showed that the use of CA, introduced by (Yang et al., 2016), improved on the results of SA in this task. Later, in (Jarquín-…), the use of CA was extended to the word n-gram level, showing the advantages of using word sequences when identifying AL.…”
Section: Related Work (mentioning)
confidence: 99%
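
For contrast with the contextual variant sketched earlier, below is a hedged sketch of self-attention (SA) pooling in the spirit of Pavlopoulos et al. (2017): each hidden state is scored on its own by a small MLP, with no learned context vector. The exact formulation in the original paper differs in detail, and all names here are illustrative.

# Hypothetical sketch: self-attention pooling over encoder states.
import torch.nn as nn
import torch.nn.functional as F

class SelfAttentionPooling(nn.Module):
    """SA: a small MLP scores each hidden state on its own; unlike the CA
    layer above, no shared context vector is compared against."""
    def __init__(self, hidden_dim):
        super().__init__()
        self.score = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim),
            nn.Tanh(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, h, mask=None):              # h: (batch, seq_len, hidden_dim)
        scores = self.score(h).squeeze(-1)        # (batch, seq_len)
        if mask is not None:
            scores = scores.masked_fill(~mask, float("-inf"))
        alpha = F.softmax(scores, dim=1)          # per-token weights
        return (alpha.unsqueeze(-1) * h).sum(dim=1), alpha

This module could drop into the BiGRUClassifier sketch above in place of ContextualAttention, mirroring the SA-vs.-CA comparison (column 3 vs. column 4) quoted earlier.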
“…Most works propose variations on neural architectures such as Recurrent Neural Networks (especially Long Short-Term Memory networks) or Convolutional Neural Networks (Mishra et al., 2019). An investigation into which type of attention mechanism (contextual vs. self-attention) is better suited for abusive language detection with deep learning architectures is presented in (Chakrabarty et al., 2019). Character-based models have also been proposed for this task.…”
Section: Related Work (mentioning)
confidence: 99%