2018
DOI: 10.1007/s41870-018-0157-5

Optimizing semantic LSTM for spam detection

Cited by 87 publications (52 citation statements)
References 19 publications
“…Table 3 and Fig. 10 compare RMDL's accuracy with the best accuracy reported in the four deep learning articles [8][9][10][11] listed in the related work section. The best accuracy among those was achieved with a complicated 3CNN architecture in [8].…”
Section: Experiments and Results (mentioning)
confidence: 99%
“…In addition, some research on deep learning approaches for SMS spam detection has been presented in [8][9][10][11]. CNN and LSTM models were tested in [8] using text information only.…”
Section: Related Work (mentioning)
confidence: 99%
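The text-only LSTM approach mentioned in the excerpt above can be illustrated with a short sketch. The snippet below is a minimal, assumed setup (toy messages, arbitrary vocabulary size and hyperparameters, Keras/TensorFlow as the framework); it is not the architecture or configuration used in the cited papers.

```python
# Minimal sketch of a text-only LSTM spam classifier, in the spirit of the
# deep-learning SMS spam detectors mentioned in the excerpt above.
# Toy data, vocabulary size, and hyperparameters are illustrative assumptions,
# not values taken from the cited papers.
import numpy as np
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dropout, Dense

texts = ["Free entry in a weekly prize draw, claim now",
         "Are we still meeting for lunch today?"]      # toy messages
labels = np.array([1, 0])                              # 1 = spam, 0 = ham

MAX_WORDS, MAX_LEN = 5000, 50                          # assumed vocabulary and sequence length
tokenizer = Tokenizer(num_words=MAX_WORDS, oov_token="<unk>")
tokenizer.fit_on_texts(texts)
X = pad_sequences(tokenizer.texts_to_sequences(texts), maxlen=MAX_LEN)

model = Sequential([
    Embedding(MAX_WORDS, 64),        # learned word embeddings
    LSTM(64),                        # sequence encoder over the message
    Dropout(0.5),
    Dense(1, activation="sigmoid"),  # spam probability
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, labels, epochs=3, batch_size=2, verbose=0)
```

A CNN variant along the lines of [8] would replace the LSTM layer with Conv1D and pooling layers over the same embedded sequence.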
“…The main purpose of this analysis is to infer a latent attribute of such a user. Several works have addressed a single, specific perspective, such as organization detection [1,3,4,5], bot detection [2,6,7,8,9], political orientation [10], and age prediction [11]. Twitter user classification approaches fall into three major types: statistical-based, content-based, and hybrid-based approaches.…”
Section: Related Work (mentioning)
confidence: 99%
“…They used parameters from different linguistic aspects of tweet content, which yielded an F1-measure of 89.20% for the English tweets. In [7], the authors classify tweets as belonging to humans or bots. They used word2vec, WordNet and ConceptNet to create a semantic word vector, which is later used by a Long Short-Term Memory (LSTM) network for the classification task.…”
Section: Related Work (mentioning)
confidence: 99%
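As a rough illustration of how such a semantic word vector might be assembled, the sketch below enriches a word2vec vector with the vectors of related words drawn from WordNet (ConceptNet relations could be folded in the same way). The toy corpus, averaging rule, and vector size are assumptions for illustration, not the procedure of the cited paper.

```python
# Sketch: build "semantic" word vectors by averaging a word2vec vector with the
# vectors of WordNet-related words; ConceptNet relations could be added similarly.
# Corpus, combination rule, and vector size here are illustrative assumptions.
import numpy as np
from gensim.models import Word2Vec
from nltk.corpus import wordnet as wn   # requires nltk.download("wordnet")

sentences = [["free", "prize", "claim", "now"],
             ["see", "you", "at", "lunch"]]            # toy corpus
w2v = Word2Vec(sentences, vector_size=50, window=2, min_count=1, seed=1)

def semantic_vector(word):
    """Average the word's word2vec vector with vectors of its WordNet synonyms."""
    base = w2v.wv[word] if word in w2v.wv else np.zeros(w2v.vector_size)
    synonyms = {lemma.lower()
                for syn in wn.synsets(word)
                for lemma in syn.lemma_names()
                if lemma.lower() != word and lemma.lower() in w2v.wv}
    vectors = [base] + [w2v.wv[s] for s in synonyms]
    return np.mean(vectors, axis=0)

# One message becomes a sequence of semantic vectors; this matrix would replace
# a plain learned embedding layer as the input to an LSTM classifier.
message = ["claim", "free", "prize", "now"]
features = np.stack([semantic_vector(w) for w in message])   # shape (4, 50)
```

The resulting sequence of vectors would then be fed to an LSTM classifier such as the one sketched earlier, with the learned Embedding layer replaced by these precomputed semantic vectors.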