2019
DOI: 10.1371/journal.pone.0221152

Hate speech detection: Challenges and solutions

Abstract: As online content continues to grow, so does the spread of hate speech. We identify and examine challenges faced by online automatic approaches for hate speech detection in text. Among these difficulties are subtleties in language, differing definitions on what constitutes hate speech, and limitations of data availability for training and testing of these systems. Furthermore, many recent approaches suffer from an interpretability problem—that is, it can be difficult to understand why the systems make the deci…


Cited by 362 publications (266 citation statements)
References 23 publications
“…The field has been recently surveyed in [8,9]. The vast majority of the papers analyzed in [8] describe approaches to hate speech detection based on supervised learning, where the task is treated as a sentence- or message-level binary text classification task.…”
Section: Related Work (mentioning)
confidence: 99%
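The statement above describes the dominant framing in the surveyed work: hate speech detection as supervised, message-level binary text classification. A minimal sketch of that framing follows; it is illustrative only, and the toy corpus, labels, and scikit-learn pipeline are assumptions rather than any surveyed system's actual setup.

```python
# Illustrative sketch: message-level binary classification with supervised
# learning, using TF-IDF features and a linear SVM (scikit-learn).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Placeholder corpus; real systems train on an annotated dataset with
# one label per message (1 = hate speech, 0 = not hate speech).
messages = [
    "example of a hateful message targeting a group",
    "example of an ordinary, harmless message",
    "another offensive, targeted message",
    "another neutral message about daily life",
]
labels = [1, 0, 1, 0]

# Vectorizer and classifier are fitted together on the training messages.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit(messages, labels)

# Each new message receives a single binary label.
print(clf.predict(["a new message to classify"]))
```

In practice the corpus is split into training and test sets and scored with precision, recall, and F1, but the core framing is as above: one message in, one binary label out.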
“…Because BERT deep neural networks are computationally very expensive, researchers have only recently succeeded in training them. The aforementioned survey [9] includes the more recent BERT model and introduces a modified and more transparent version of an SVM classifier that does not, however, outperform BERT.…”
Section: Related Work (mentioning)
confidence: 99%
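The BERT models mentioned here are typically fine-tuned for the same binary task. The sketch below shows a single training step with the Hugging Face Transformers library; the checkpoint name, texts, labels, and hyperparameters are placeholder assumptions, not the cited papers' configuration.

```python
# Hedged sketch: one fine-tuning step of a pretrained BERT classifier
# for binary hate speech detection (Hugging Face Transformers + PyTorch).
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # two classes: hate speech / not
)

texts = ["placeholder hateful message", "placeholder harmless message"]
labels = torch.tensor([1, 0])

# Tokenize the batch and take a single optimization step; real training
# loops over an annotated dataset for several epochs.
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
outputs = model(**batch, labels=labels)
outputs.loss.backward()
optimizer.step()

# At inference time, the argmax over the two logits is the predicted class.
model.eval()
with torch.no_grad():
    predictions = model(**batch).logits.argmax(dim=-1)
print(predictions)
```

The computational cost the statement refers to comes from the size of the pretrained transformer: the fine-tuning loop itself is short, but each step is far heavier than training a linear classifier.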
“…Finally, to address the calls for openness [103], we are making the source code of the classifiers public for further research and development and for software engineers to adopt in real systems and applications.…”
Section: Key Contributions (mentioning)
confidence: 99%
“…Emerging literature in the social and computational sciences increasingly adopts the view that interpretability of models is likewise important [53]. In certain cases, the complex features of state-of-the-art neural models may make it challenging for researchers to understand the predictions made.…”
Section: Hate Speech On Social Media: From Classification To Characterization (mentioning)
confidence: 99%
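The interpretability concern raised in the last two statements is the flip side of the transparency that simpler models offer: a linear classifier's weights can be read off directly, whereas a deep model's prediction usually requires a separate attribution method. The sketch below illustrates only the transparent case, using a toy linear SVM over TF-IDF features; it is not the modified SVM introduced in the survey.

```python
# Illustrative transparency check: with a linear model over TF-IDF features,
# the learned weights show which n-grams push a message toward the
# positive (hate speech) class.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

# Same kind of placeholder corpus as above; labels: 1 = hate, 0 = not.
messages = [
    "example of a hateful message targeting a group",
    "example of an ordinary, harmless message",
    "another offensive, targeted message",
    "another neutral message about daily life",
]
labels = [1, 0, 1, 0]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(messages)
svm = LinearSVC().fit(X, labels)

# Largest positive coefficients = strongest indicators of the positive class.
features = vectorizer.get_feature_names_out()
top = np.argsort(svm.coef_[0])[::-1][:5]
for i in top:
    print(f"{features[i]:>20s}  {svm.coef_[0][i]:+.3f}")
```

For neural models such as BERT no such direct reading exists, which is why the cited work treats interpretability as a research problem in its own right.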