2021
DOI: 10.48550/arxiv.2109.05074
Preprint

FBERT: A Neural Transformer for Identifying Offensive Content

Abstract: Transformer-based models such as BERT, XLNET, and XLM-R have achieved state-of-the-art performance across various NLP tasks including the identification of offensive language and hate speech, an important problem in social media. In this paper, we present fBERT, a BERT model retrained on SOLID, the largest English offensive language identification corpus available with over 1.4 million offensive instances. We evaluate fBERT's performance on identifying offensive content on multiple English datasets and we test…
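Since fBERT is described as a retrained BERT checkpoint rather than a ready-made classifier, a typical downstream use would be to load it as the base encoder and fine-tune a classification head on labeled offensive-language data. The sketch below (Python, Hugging Face transformers) shows how that might look; the Hub identifier "diptanu/fBERT" and the binary offensive / not-offensive label setup are assumptions for illustration, not details confirmed on this page.

# Minimal sketch: using an fBERT-style retrained checkpoint as the encoder for
# offensive-language classification with Hugging Face transformers.
# Assumptions (not confirmed by this page): the checkpoint is published on the
# Hub under the id "diptanu/fBERT", and the task is binary classification.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_ID = "diptanu/fBERT"  # assumed Hub id for the retrained checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_ID, num_labels=2  # 0 = not offensive, 1 = offensive (OLID-style)
)

texts = ["have a great day", "some clearly offensive message"]
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    logits = model(**batch).logits   # shape: (batch_size, 2)
predictions = logits.argmax(dim=-1)  # predicted class index per input text
print(predictions.tolist())

Note that loading a masked-language-model checkpoint this way gives a randomly initialized classification head, so the model would still need fine-tuning on a labeled dataset (e.g., OLID/SOLID-style annotations) before the predictions are meaningful.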

Cited by 1 publication (1 citation statement), published in 2022.
References 13 publications.
“…The best results were obtained with the original dataset by using LSTM-glove and achieved a 0.72 F-score. In [43] the authors devised fBert which was trained on 1.4 million offensive data from the SOLID dataset to deal with the imbalanced class problem. Compared to benchmarks, fBert outperformed with a 0.813 F-score.…”
Section: Related Work
Citation type: mentioning
Confidence: 99%