2017
DOI: 10.1007/s41060-017-0088-4

Deep learning for detecting inappropriate content in text

Abstract: Today, there are a large number of online discussion fora on the internet which are meant for users to express, discuss and exchange their views and opinions on various topics. For example, news portals, blogs, and social media channels such as YouTube typically allow users to express their views through comments. In such fora, it has often been observed that user conversations sometimes quickly derail and become inappropriate, such as hurling abuses or passing rude and discourteous comments on individuals or certai…

Cited by 64 publications (31 citation statements)
References 17 publications (22 reference statements)
“…They claimed that their model can achieve an extremely high accuracy exceeding 0.96 AUC. Yenala et al. (2018) identified that the automatic detection and filtering of inappropriate messages or comments have become an important problem for improving the quality of conversations with users as well as virtual agents. They proposed a novel hybrid DL model to automatically identify inappropriate language.…”
Section: Hybrid Model for Detecting Misinformation
confidence: 99%
“…However, the automated detection of misinformation is difficult to accomplish, as it requires an advanced model to understand how related or unrelated the reported information is when compared to real information (Wu et al. 2019). Also, to solve many complex MID problems, academia and industry researchers have applied DL to a large number of applications to make decisions (Xu et al. 2019; Yenala et al. 2018; Yin et al. 2020). Therefore, this survey seeks to provide such a systematic review of current research on MID based on DL techniques.…”
Section: Introduction
confidence: 99%
“…Then they applied multi-step deep learning methods, CNN+GRU (LSTM) followed by a dense layer, with different embedding techniques such as GloVe [11] and word2vec [13] for feature learning and classification. Yenala et al. [9] worked on detecting inappropriate text in web search queries and chat data using deep learning techniques, employing Convolution-BiLSTM and BiLSTM respectively in their experiments.…”
Section: Related Work
confidence: 99%
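
The Convolution-BiLSTM architecture referenced in the excerpt above can be summarised in a short sketch. The following is a minimal, illustrative Keras implementation; the vocabulary size, sequence length, and layer widths are assumptions for demonstration, not the hyperparameters reported by Yenala et al. [9].

```python
# Minimal sketch of a Convolution-BiLSTM text classifier of the kind
# described above; all sizes below are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

VOCAB_SIZE = 20000  # assumed vocabulary size
MAX_LEN = 50        # assumed maximum sequence length (tokens)
EMBED_DIM = 100     # assumed word-embedding dimension

model = models.Sequential([
    layers.Input(shape=(MAX_LEN,)),
    # Map token ids to dense word vectors
    layers.Embedding(VOCAB_SIZE, EMBED_DIM),
    # Convolution extracts local n-gram features from the embeddings
    layers.Conv1D(filters=128, kernel_size=3, padding="same", activation="relu"),
    layers.MaxPooling1D(pool_size=2),
    # Bidirectional LSTM reads the feature sequence in both directions
    layers.Bidirectional(layers.LSTM(64)),
    layers.Dense(32, activation="relu"),
    # Binary decision: inappropriate vs. appropriate
    layers.Dense(1, activation="sigmoid"),
])

model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="auc")])
model.summary()
```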
“…In our experiments, we have introduced the concept of Hybrid Features, which are learned from deep learning models and are combined with other semantic features [9], [25], [28] such as Parts of Speech (POS), Term Frequency-Inverse Document Frequency (TF-IDF), and tweet-specific syntactic features for an effective feature representation. We have explored different kinds of neural networks for improving feature learning, starting with single-architecture models such as the Convolutional Neural Network (CNN), Long Short-Term Memory (LSTM), and Bidirectional LSTM (BiLSTM), and hybrid models such as CNN+LSTM and CNN+GRU. We have also focused on improving preprocessing steps to reduce the number of missing embeddings and to increase the vocabulary for efficient feature learning.…”
Section: Introduction
confidence: 99%
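
The hybrid-feature idea described in the excerpt above, deep-learned features concatenated with hand-crafted ones such as POS and TF-IDF, can be sketched with the Keras functional API. This is an illustrative sketch only; the feature dimensions, branch sizes, and the CNN+GRU encoder choice are assumptions, not the exact configuration used by the cited work.

```python
# Hedged sketch: concatenating deep-learned features with hand-crafted
# features (TF-IDF, POS counts, tweet-specific statistics) before the
# final classifier. Dimensions are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

MAX_LEN = 50           # assumed token sequence length
VOCAB_SIZE = 20000     # assumed vocabulary size
EMBED_DIM = 100        # assumed embedding dimension
HANDCRAFTED_DIM = 300  # assumed size of the TF-IDF/POS/syntactic feature vector

# Branch 1: features learned by a neural encoder (here a CNN + GRU stack)
tokens = layers.Input(shape=(MAX_LEN,), name="tokens")
x = layers.Embedding(VOCAB_SIZE, EMBED_DIM)(tokens)
x = layers.Conv1D(128, 3, padding="same", activation="relu")(x)
x = layers.GRU(64)(x)

# Branch 2: hand-crafted semantic/syntactic features computed offline
handcrafted = layers.Input(shape=(HANDCRAFTED_DIM,), name="handcrafted")

# Hybrid representation: concatenate both feature sets for classification
hybrid = layers.Concatenate()([x, handcrafted])
hybrid = layers.Dense(64, activation="relu")(hybrid)
output = layers.Dense(1, activation="sigmoid")(hybrid)

model = models.Model(inputs=[tokens, handcrafted], outputs=output)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```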