Proceedings of the 2nd Workshop on Abusive Language Online (ALW2) 2018
DOI: 10.18653/v1/w18-5106
Aggression Detection on Social Media Text Using Deep Neural Networks

Abstract: In the past few years, bullying and aggressive posts on social media have grown significantly, with serious consequences for victims and users of all demographics. The majority of work in this field has been done for English only. In this paper, we introduce a deep-learning-based classification system for Facebook posts and comments in Hindi-English code-mixed text to detect aggressive behaviour of/towards users. Our work focuses on text from users mainly in the Indian Subcontinent. The dataset that we used f…
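The abstract describes classifying Hindi-English code-mixed social media text as aggressive or not. As a rough illustration only — the paper itself uses deep neural networks, and the toy phrases and count-based scorer below are invented stand-ins, not the authors' data or model — a minimal bag-of-words baseline for such a task might look like:

```python
import re
from collections import Counter

def tokenize(text):
    # Lowercase and split on non-alphanumerics; Romanized Hindi-English
    # code-mixed text tokenizes the same way as plain English.
    return re.findall(r"[a-z0-9]+", text.lower())

# Hypothetical toy examples (1 = aggressive, 0 = not aggressive).
train = [
    ("tu pagal hai get lost", 1),
    ("shut up you idiot", 1),
    ("movie bahut acchi thi loved it", 0),
    ("congrats bhai well done", 0),
]

# Per-class token counts for a simple count-ratio score.
counts = {0: Counter(), 1: Counter()}
for text, label in train:
    counts[label].update(tokenize(text))

def score(text):
    # Average per-token aggressiveness ratio with add-one smoothing;
    # higher means more aggressive.
    tokens = tokenize(text)
    s = sum((counts[1][t] + 1) / (counts[0][t] + 1) for t in tokens)
    return s / max(len(tokens), 1)

print(score("get lost pagal") > score("well done bhai"))  # True
```

A real system along the paper's lines would replace the count-ratio scorer with a neural classifier (e.g. a CNN or LSTM over word embeddings) trained on the annotated Facebook corpus; the sketch only shows the tokenize-then-score shape of the pipeline.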

Cited by 32 publications (19 citation statements); references 24 publications.
“…The identification of community-specific keywords or the identification of hateful speech is an essential part of the pipeline for any kind of analysis of the effect of interventions on online speech. Just as there are numerous methods for the identification of hateful speech (de Gibert et al., 2018a; Park and Fung, 2017; Singh et al., 2018; Lee et al., 2018), there are numerous related methods for the identification of community-specific keywords.…”
Section: Appendix (mentioning)
confidence: 99%
“…Online spaces often contain toxic behaviors such as abuse or harmful speech (Blackwell et al., 2017; Saleem et al., 2017; Jhaver et al., 2018; Saleem and Ruths, 2018; Habib et al., 2019; Ribeiro et al., 2020a; de Gibert et al., 2018a; Sprugnoli et al., 2018; Park and Fung, 2017; Singh et al., 2018; Lee et al., 2018). Such toxicity may result in platform-wide decreases in user participation and engagement which, combined with external pressure (e.g., bad press), may motivate platform managers to moderate harmful behavior (Saleem and Ruths, 2018; Habib et al., 2019).…”
Section: Introduction (mentioning)
confidence: 99%
“…Increasing the number of features increases the difficulty of feature extraction and selection. No current research uses a word's meaning or semantics for cyberbullying detection [52-54]. Transformers, mentioned in a few of the papers above, often require substantial computing resources during training, especially when retrained at regular intervals.…”
Section: Related Research Work (mentioning)
confidence: 99%
“…The identification of community-specific keywords or the identification of hateful speech is an essential part of the pipeline for any kind of analysis of the effect of interventions on online speech. Just as there are numerous methods for the identification of hateful speech (de Gibert et al., 2018a; Singh et al., 2018), there are numerous related methods for the identification of community-specific keywords.…”
Section: Context Sensitivity Estimation in Toxicity Detection (mentioning)
confidence: 99%
“…Online spaces often contain toxic behaviors such as abuse or harmful speech (Blackwell et al., 2017; Saleem et al., 2017; Jhaver et al., 2018; Saleem and Ruths, 2018; Habib et al., 2019; Ribeiro et al., 2020a; de Gibert et al., 2018a; Singh et al., 2018). Such toxicity may result in platform-wide decreases in user participation and engagement which, combined with external pressure (e.g., bad press), may motivate platform managers to moderate harmful behavior (Saleem and Ruths, 2018; Habib et al., 2019).…”
Section: Introduction (mentioning)
confidence: 99%