TENCON 2019 - 2019 IEEE Region 10 Conference (TENCON)
DOI: 10.1109/tencon.2019.8929493
Multilingual Cyber Abuse Detection using Advanced Transformer Architecture

Cited by 33 publications (22 citation statements)
References 3 publications
“…Recent years have seen a rise in the use of transfer learning in language processing (Malte and Ratadiya, 2019a) owing to its superior performance. Models based on underlying concepts such as attention mechanisms and transformers are seeing widespread use across a range of tasks (Malte and Ratadiya, 2019b; Ratadiya and Mishra, 2019). Our findings concurred with this trend, as we used similar architectures as the fundamental building blocks of our systems for both subtasks.…”
Section: Introduction (supporting)
confidence: 85%
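The statement above names attention mechanisms and transformers as the fundamental building blocks. As an illustrative sketch only (not the cited paper's implementation), the core operation — scaled dot-product attention — can be written in pure Python for a single query over toy two-dimensional vectors:

```python
import math

def softmax(scores):
    # Numerically stable softmax over a list of scores.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys, values):
    """Scaled dot-product attention for one query vector.

    Computes softmax(q . k_i / sqrt(d)) weights over the keys,
    then returns the weighted sum of the value vectors.
    """
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]

# Toy example: the query aligns with the first key, so the output
# leans toward the first value vector.
out = attention([1.0, 0.0],
                [[1.0, 0.0], [0.0, 1.0]],
                [[10.0, 0.0], [0.0, 10.0]])
```

In a full transformer, this same operation runs in parallel across many heads and positions; the sketch keeps only the single-query case to show the weighting logic.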
“…Using a bidirectional transformer-based BERT architecture, Aditya Malte and Pratik Ratadiya 86 detected cyber abuse in multilingual Facebook texts written in English, Hindi, and a mixture of both languages (Hinglish). The dataset contained 25,013 multilingual comments, comprising 12,000 training comments each for English and Hindi, a separate 916 English and 970 Hindi comments used for testing, plus an additional 1,257 English tweets and 1,194 Hindi tweets used to reinforce the model's generalization ability.…”
Section: Detection Approaches and Related Work (mentioning)
confidence: 99%
“…Malte et al. [25] detected cyber-abuse words in Hindi and English texts, which were analyzed using the BERT model. Compared with machine learning models, the pre-trained transformer-based classification models ALBERT, BiLSTM, distilBERT, BERT, XLNet, and RoBERTa demonstrate greater accuracy.…”
Section: Related Work (mentioning)
confidence: 99%
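The statement above compares pre-trained transformer classifiers against machine-learning baselines by accuracy. A minimal sketch of how such a comparison is scored — with entirely hypothetical model names, labels, and predictions, not results from any cited paper — is:

```python
def accuracy(gold, pred):
    # Fraction of labels predicted correctly.
    assert len(gold) == len(pred)
    return sum(g == p for g, p in zip(gold, pred)) / len(gold)

# Toy labels: 1 = abusive, 0 = benign (illustrative only).
gold = [1, 0, 1, 1, 0, 1, 0, 0]

# Hypothetical outputs from two classifiers on the same test set.
predictions = {
    "baseline_ml": [1, 0, 0, 1, 0, 0, 0, 1],
    "bert_like":   [1, 0, 1, 1, 0, 1, 0, 1],
}

scores = {name: accuracy(gold, p) for name, p in predictions.items()}
best = max(scores, key=scores.get)  # model with the highest accuracy
```

Real evaluations of this kind typically also report precision, recall, and F1, since abuse-detection datasets are class-imbalanced; accuracy alone is shown here only because it is the metric the statement names.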