2018 17th IEEE International Conference on Machine Learning and Applications (ICMLA)
DOI: 10.1109/icmla.2018.00141
Imbalanced Toxic Comments Classification Using Data Augmentation and Deep Learning

Cited by 76 publications (28 citation statements)
References 1 publication
“…The authors use a deep learning-based toxic comments classification approach in [11] for the imbalanced toxic dataset. The performance evaluation is carried out on the Kaggle Wikipedia talk page edits dataset, which contains 159,571 records of toxic comments.…”
Section: Literature Review (mentioning)
confidence: 99%
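For context on the imbalance this statement refers to, the sketch below inspects the per-label positive rates in the Kaggle Wikipedia talk page edits data. The file name `train.csv` and the six label columns are assumptions based on the public Jigsaw Toxic Comment Classification Challenge release, not details taken from the cited paper.

```python
# Minimal sketch: inspect class imbalance in the Kaggle Wikipedia talk page
# edits dataset. File name and label columns are assumed from the public
# Jigsaw Toxic Comment Classification Challenge data.
import pandas as pd

LABELS = ["toxic", "severe_toxic", "obscene", "threat", "insult", "identity_hate"]

df = pd.read_csv("train.csv")   # ~159,571 comments in the public training split
total = len(df)

for label in LABELS:
    positives = int(df[label].sum())
    print(f"{label:>14}: {positives:6d} positive ({positives / total:.2%})")

# Comments carrying no toxicity label at all dominate the dataset,
# which is the imbalance that data augmentation is meant to offset.
clean = int((df[LABELS].sum(axis=1) == 0).sum())
print(f"{'clean':>14}: {clean:6d} ({clean / total:.2%})")
```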
“…Ibrahim et al. [49] proposed a toxic detection model based on a convolutional neural network (CNN), bidirectional gated recurrent units (GRU), and bidirectional long short-term memory (LSTM). They used a Wikipedia dataset to evaluate the proposed method and achieved an F1-score of 87.2% for predicting toxicity types and 82.2% for the toxic/non-toxic classification.…”
Section: Related Work 2.1 Applications of User Generated Contents in Social Media (mentioning)
confidence: 99%
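The exact architecture of [49] is not reproduced here; the sketch below only illustrates how a CNN branch followed by bidirectional GRU and bidirectional LSTM layers can be combined for multi-label toxicity prediction in Keras. The vocabulary size, sequence length, and layer widths are assumptions, not the configuration reported by Ibrahim et al.

```python
# Hedged sketch of a CNN + BiGRU + BiLSTM toxic comment classifier in Keras.
# Hyperparameters below are illustrative assumptions only.
from tensorflow.keras import layers, models

VOCAB_SIZE = 50_000   # assumed vocabulary size
MAX_LEN = 200         # assumed maximum comment length (tokens)
NUM_LABELS = 6        # toxic, severe_toxic, obscene, threat, insult, identity_hate

inputs = layers.Input(shape=(MAX_LEN,), dtype="int32")
x = layers.Embedding(VOCAB_SIZE, 128)(inputs)

# Convolutional branch extracts local n-gram features.
x = layers.Conv1D(64, kernel_size=3, padding="same", activation="relu")(x)
x = layers.MaxPooling1D(pool_size=2)(x)

# Stacked bidirectional recurrent layers capture longer-range context.
x = layers.Bidirectional(layers.GRU(64, return_sequences=True))(x)
x = layers.Bidirectional(layers.LSTM(64))(x)

# Sigmoid output lets each toxicity type be predicted independently.
outputs = layers.Dense(NUM_LABELS, activation="sigmoid")(x)

model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```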
“…Therefore, the F1-score is a measure that evaluates the performance of a model through the trade-off between precision and recall. Additionally, the F1-score gives a better view of ML model performance, especially for datasets with an imbalanced class distribution, because the F1-score is not biased towards majority classes [50]. This study evaluated the classification performance of the ML models using the F1-score computed from the confusion matrix.…”
Section: The Performance Evaluation Metrics of ML Models (mentioning)
confidence: 99%
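A small worked example of the point made in this statement: with an imbalanced class distribution, accuracy can look strong while the F1-score, derived from precision and recall on the confusion matrix, exposes the missed positives. The counts below are made-up illustrative numbers, not results from any of the cited studies.

```python
# Minimal sketch: F1-score derived from a binary confusion matrix.
# The counts are hypothetical and chosen to mimic an imbalanced dataset.
tp, fp, fn, tn = 80, 20, 40, 860   # 12% positive class overall

precision = tp / (tp + fp)          # fraction of predicted positives that are correct
recall    = tp / (tp + fn)          # fraction of actual positives that are recovered
f1        = 2 * precision * recall / (precision + recall)
accuracy  = (tp + tn) / (tp + fp + fn + tn)

# Accuracy (0.94) looks strong even though a third of the positives are missed;
# F1 (~0.73) reflects the precision/recall trade-off instead.
print(f"precision={precision:.2f} recall={recall:.2f} f1={f1:.2f} accuracy={accuracy:.2f}")
```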