Proceedings of the Fourth Workshop on Online Abuse and Harms 2020
DOI: 10.18653/v1/2020.alw-1.13
Countering hate on social media: Large scale classification of hate and counter speech

Cited by 39 publications (36 citation statements) · References 30 publications
“…Observing civil interactions might even reduce the impact of incivility by breaking 'hate norms.' Evidence supporting this expectation comes from research showing that counter-speech can re-civilise online discourses (Garland, Ghazi-Zahedi, Young, Hébert-Dufresne, & Galesic, 2020;Ziegele, Jost, Frieß, & Naab, 2019). Overall, we formulated the following two hypotheses: H6.…”
Section: The Perceived Environment
confidence: 99%
“…Different machine learning approaches have been applied with varied success, ranging from support vector machines and random forests to convolutional and recurrent neural networks (Zhang and Luo, 2019; Bosco et al., 2018; de Gibert et al., 2018b; Kshirsagar et al., 2018; Malmasi and Zampieri, 2018; Pitsilis et al., 2018; Al-Hassan and Al-Dossari, 2019; Vidgen and Yasseri, 2020; Zimmerman et al., 2018). More recently, Garland et al. (2020) used an ensemble learning algorithm to classify both hate speech and counter speech in a curated collection of German messages on Twitter. Unfortunately, these approaches require labeled sets of speech to train classifiers and therefore risk not transferring from one type of harmful speech (e.g.…”
Section: Previous Work
confidence: 99%
“…Automatic detection of abusive language can help identify and report harmful accounts and acts, and allows counter narratives (Chung et al., 2019; Garland et al., 2020; Ziems et al., 2020). Due to the volume of online text and the mental impact on the humans employed to moderate it (moderators of abusive online content have been shown to develop serious PTSD and depressive symptoms; Casey Newton, 2020), it is urgent to develop systems to automate the detection and moderation of online abusive language.…”
Section: Introduction
confidence: 99%