2020
DOI: 10.1016/j.eswa.2020.113725
Detecting and visualizing hate speech in social media: A cyber Watchdog for surveillance

Cited by 67 publications (22 citation statements)
References 24 publications
“…The identification of offensive content still leaves the social questions unanswered: How to react? Different approaches have been proposed; they range from deletion [42] to labeling [43] to counter speech by either bots [44] or humans [45]. Societies need to find strategies adequate for their specific demands.…”
Section: Discussion (mentioning, confidence: 99%)
“…While research in this area has been gaining momentum [13], there is increasing evidence that social media platforms still struggle to keep up with the demand for technology, particularly for languages other than English [14]. For example, a recent article pointed out that Facebook does not have technology for identifying hate speech in the 22 official languages of India, its biggest market worldwide.…”
Section: Introduction (mentioning, confidence: 99%)
“…Ambiguity of the definition of offensiveness is a serious problem. This inconsistency is visible in many reviews related to automatic detection of hate speech (Fortuna and Nunes, 2018; Schmidt and Wiegand, 2017; Alrehili, 2019; Poletto et al., 2020) or, more specifically, to aggressiveness detection (Sadiq et al., 2021; Modha et al., 2020).…”
Section: Related Work (mentioning, confidence: 99%)
“…Once candidate tweets are collected, they are assessed and labeled by human annotators into pre-specified categories. The manual annotation of the examples provides high-quality ground-truth labeled datasets, yet it is costly (Modha et al., 2020). Accordingly, the available datasets each include only a few thousand labeled examples.…”
Section: Hate Speech Datasets (mentioning, confidence: 99%)