2019
DOI: 10.1007/978-3-030-20521-8_57

On Transfer Learning for Detecting Abusive Language Online

Cited by 8 publications (8 citation statements)
References 11 publications
“…Overfitting can be reduced through training on more than one dataset ( Waseem, Thorne & Bingel, 2018 ; Karan & Šnajder, 2018 ) or transfer learning from a larger dataset ( Uban & Dinu, 2019 ; Alatawi, Alhothali & Moria, 2020 ) and/or a closely related task, such as sentiment analysis ( Uban & Dinu, 2019 ; Cao, Lee & Hoang, 2020 ), yet synthesis in the literature is lacking. More work can be done on comparing different training approaches, and what characteristics of the datasets interact with the effectiveness.…”
Section: Discussion
confidence: 99%
“…Research on transfer learning from other tasks, such as sentiment analysis, also lacks consistency. Uban & Dinu (2019) pre-trained a classification model on a large sentiment dataset (https://help.sentiment140.com/), and performed transfer learning on the OLID and Kumar datasets. They took pre-training further than the embedding layer, comparing word2vec (Mikolov et al., 2013) to sentiment embeddings and entire-model transfer learning.…”
Section: Obstacles To Generalisable Hate Speech Detection
confidence: 99%
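The entire-model transfer described in the excerpt above, pre-training on a large sentiment corpus and then continuing training on the smaller abusive-language target set from the pre-trained weights, can be sketched with a toy logistic-regression learner. The synthetic data, dimensions, and single-layer model below are illustrative stand-ins, not the authors' actual architecture or corpora:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_logreg(X, y, w=None, steps=200, lr=0.5):
    """Batch gradient descent for logistic regression.
    If `w` is given, training continues from those weights
    (entire-model transfer); otherwise weights start at zero."""
    if w is None:
        w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))      # sigmoid predictions
        w -= lr * X.T @ (p - y) / len(y)      # gradient of mean log-loss
    return w

def accuracy(w, X, y):
    return float(((X @ w > 0) == y).mean())

# Hypothetical stand-ins: a large "sentiment" source task and a small
# abusive-language target task sharing the same feature space and a
# related labelling rule.
X_src = rng.normal(size=(500, 20))
y_src = (X_src[:, 0] + 0.5 * X_src[:, 1] > 0).astype(float)
X_tgt = rng.normal(size=(30, 20))
y_tgt = (X_tgt[:, 0] + 0.5 * X_tgt[:, 1] > 0).astype(float)

w_pre = train_logreg(X_src, y_src)                       # pre-train on source
w_scratch = train_logreg(X_tgt, y_tgt)                   # target data only
w_transfer = train_logreg(X_tgt, y_tgt, w=w_pre.copy())  # fine-tune from source
```

In a multi-layer model, the embedding-level variant the excerpt contrasts with this would copy only the first-layer (embedding) weights and re-initialise the rest; with a single-layer learner as here, only the entire-model case can be shown.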
“…Sahi et al [20] have investigated the automatic detection of hate towards women on Twitter. There exist further studies, which focus on a specific problem, e.g., detecting abusive language [21] or the risks of racial biases in hate speech detection [22]. To automatically detect hate speech, different approaches are used.…”
Section: Challenges and Approaches For Automated Hate Speech Detection
confidence: 99%
“…Research on transfer learning from other tasks, such as sentiment analysis, also lacks consistency. Uban and Dinu (2019) pre-trained a classification model on a large sentiment dataset, and performed transfer learning on the Zampieri and Kumar datasets. They took pre-training further than the embedding layer, comparing word2vec (Mikolov et al., 2013) to sentiment embeddings and entire-model transfer learning.…”
Section: Existing Solutions
confidence: 99%