2022 8th International Conference on Advanced Computing and Communication Systems (ICACCS)
DOI: 10.1109/icaccs54159.2022.9785067
Classification of Toxicity in Comments using NLP and LSTM

Cited by 12 publications (4 citation statements)
References 7 publications
“…These models provide heightened efficiency when dealing with NLP problems such as sentiment analysis, as described in the work of Jeoldar et al. [86] for classification and topic discovery. Garlapati et al. [87] provide a novel approach to applying NLP to detect toxic comments on social media platforms. In a similar fashion, Onan et al. [88] propose a language model for detecting sarcasm in speech with the help of a Bidirectional LSTM network.…”
Section: LSTM and BLSTM Models
confidence: 99%
“…In a similar fashion, Onan et al. [88] propose a language model for detecting sarcasm in speech with the help of a Bidirectional LSTM network.

Klein et al. [59]: clinical epidemiology; Ni et al. [60]: risk analysis; Tong et al. [61]: medical informatics; Qin et al. [62]: title and abstract screening; Balyan et al. [63]: text classification

Multi-layer Perceptron models — Heo et al. [65]: stroke prediction; Zeng et al. [66]: medical informatics

Convolutional Neural Networks — Jin et al. [70]: sentiment classification; Tahir et al. [71]: gene promoter identification; Sharma et al. [72]: image captioning

Recurrent Neural Networks — Zhou et al. [74]: pre-diagnostic service; Basiri et al. [75]: sentiment analysis

Auto-encoders — Drozdov et al. [77]: syntactic parsing; Yu et al. [78]: answer retrieval; Duong et al. [79]: recommendation system; Yan et al. [80]: question answering

Generative Adversarial Models — Yang et al. [82]: text generation; Wang et al. [83]: disease prediction; Croce et al. [84]: text classification

LSTM and BLSTM Models — Jeoldar et al. [86]: sentiment analysis; Garlapati et al. [87]: text-toxicity detection; Onan et al. [88]: sarcasm detection

BERT-based Models — Xu et al. [90]: group classification; Gao et al. [91]: sentiment analysis; Nugroho et al. [92]: news topic classification…”
Section: LSTM and BLSTM Models
confidence: 99%
“…Removing stop words can increase the signal-to-noise ratio in unstructured text and thus increase the statistical significance of terms that may be important for a specific task [20]. Punctuation removal also matters: punctuation, which is used to divide text into sentences, paragraphs, and phrases, affects the result of any text-processing approach, especially one that depends on the frequency of occurrence of words and phrases, because punctuation marks appear frequently in text [21]. Most text and document data sets contain many unnecessary characters, such as punctuation and special characters [22].…”
Section: Preprocessing
confidence: 99%
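The preprocessing steps quoted above (stop-word removal and punctuation stripping) can be sketched in plain Python. The tiny stop-word list here is illustrative only; real pipelines typically use a fuller list such as NLTK's, which is an assumption and not something taken from the cited works.

```python
import re
import string

# Illustrative stop-word list (assumption; real pipelines use larger lists).
STOP_WORDS = {"the", "a", "an", "is", "are", "of", "to", "and", "in"}

def preprocess(text: str) -> list:
    """Lowercase, strip punctuation and special characters, drop stop words."""
    text = text.lower()
    # Remove punctuation so token frequencies are not skewed by it.
    text = text.translate(str.maketrans("", "", string.punctuation))
    tokens = re.split(r"\s+", text.strip())
    return [t for t in tokens if t and t not in STOP_WORDS]

print(preprocess("The comment is, frankly, toxic!"))
# → ['comment', 'frankly', 'toxic']
```

Dropping stop words and punctuation before counting term frequencies is what raises the signal-to-noise ratio the quote refers to.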
“…The approach proposed by Garlapati et al. [13] likewise combines feature engineering with deep learning models to classify toxic comments. To train an LSTM and a CNN, the authors extract word embeddings, sentiment scores, and part-of-speech tags from the comments.…”
Section: B. Deep Learning Architectures
confidence: 99%
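The feature-engineering step described above (per-token word embeddings, sentiment scores, and part-of-speech tags concatenated into one input vector) can be sketched as follows. The lookup tables below are toy placeholders, not the actual embeddings, sentiment lexicon, or tagger the cited paper used.

```python
# Toy lookup tables (assumptions for illustration only).
EMBEDDINGS = {"you": [0.1, 0.2], "idiot": [0.9, 0.4]}   # 2-d toy embeddings
SENTIMENT = {"you": 0.0, "idiot": -0.8}                 # toy polarity scores
POS_TAGS = {"you": "PRON", "idiot": "NOUN"}             # toy tagger output
POS_IDS = {"PRON": 0, "NOUN": 1}                        # toy POS-tag ids

def token_features(token: str) -> list:
    """Concatenate embedding, sentiment score, and POS id for one token."""
    emb = EMBEDDINGS.get(token, [0.0, 0.0])
    sent = SENTIMENT.get(token, 0.0)
    pos = float(POS_IDS.get(POS_TAGS.get(token, "NOUN"), 1))
    return emb + [sent, pos]

# One feature vector per token; the resulting sequence would be fed to
# a sequence model such as an LSTM or a CNN over token positions.
sequence = [token_features(t) for t in ["you", "idiot"]]
print(sequence)
# → [[0.1, 0.2, 0.0, 0.0], [0.9, 0.4, -0.8, 1.0]]
```

Concatenating heterogeneous per-token features this way lets a single sequence model consume lexical, affective, and syntactic signals at once, which is the design choice the quoted passage attributes to the combined feature-engineering approach.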