2018 Fourth International Conference on Computing Communication Control and Automation (ICCUBEA)
DOI: 10.1109/iccubea.2018.8697601
Word Embedding Based Multinomial Naive Bayes Algorithm for Spam Filtering

Cited by 6 publications (3 citation statements) · References 4 publications
“…There are many models of NB e-mail filters [26], but we examined three: the Multinomial Naïve Bayes (MNB), Complement Naïve Bayes (CNB), and Bernoulli Naïve Bayes (BNB) algorithms. MNB models [27] are mainly suited to discrete features; they consider the relationship between the word frequencies in a text and the category of the text.…”
Section: Naïve Bayes Filters
confidence: 99%
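As an illustration of the discrete word-frequency features this citing passage describes, here is a minimal sketch of an MNB spam filter built with scikit-learn's CountVectorizer and MultinomialNB. The tiny corpus and variable names are invented for the example; this is not the cited paper's implementation, which additionally incorporates word embeddings.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Toy corpus (hypothetical): raw word counts are the discrete features MNB expects.
texts = ["win a free prize now", "meeting agenda for monday",
         "free money claim prize", "project status and agenda"]
labels = ["spam", "ham", "spam", "ham"]

vectorizer = CountVectorizer()           # bag-of-words term frequencies
X = vectorizer.fit_transform(texts)

clf = MultinomialNB()                    # estimates P(word | class) from the counts
clf.fit(X, labels)

print(clf.predict(vectorizer.transform(["claim your free prize"])))  # -> ['spam']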
“…By calculating the likelihood of each class given a set of input features, MNB efficiently assigns probabilities to different categories, thus facilitating accurate classification. Despite its simplicity, MNB has demonstrated robust performance in various natural language processing applications, including sentiment analysis [16] and spam detection [56,57].…”
Section: Multinomial Naive Bayes
confidence: 99%
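Concretely, the per-class likelihood computation this passage refers to is the standard multinomial Naïve Bayes decision rule (a textbook formulation added here for clarity, not a formula quoted from either paper):

P(c \mid d) \;\propto\; P(c) \prod_{i=1}^{|V|} P(w_i \mid c)^{f_i},
\qquad
\hat{c} = \arg\max_{c} \Big[ \log P(c) + \sum_{i=1}^{|V|} f_i \log P(w_i \mid c) \Big],

where f_i is the count of word w_i in document d, V is the vocabulary, and P(w_i \mid c) is typically estimated with Laplace smoothing as (N_{ci} + 1) / (N_c + |V|), with N_{ci} the count of w_i in class c and N_c the total word count of class c.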
“…This model takes a collection of texts as input and generates a vector for each word. These vectors can be used to measure the proximity of words in the vector space [11]. Thus, the model can query the learned representations and display the closest words [12], as shown in Table I.…”
Section: Word2vec
confidence: 99%
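A minimal sketch of the nearest-word lookup described above, using the gensim Word2Vec implementation. The toy sentences and the parameter values (vector_size, window, epochs) are assumptions for illustration; real use requires a far larger corpus.

from gensim.models import Word2Vec

# Tiny tokenized corpus (hypothetical) — stands in for the real training texts.
sentences = [["free", "prize", "win", "money"],
             ["meeting", "agenda", "project", "status"],
             ["claim", "free", "money", "prize"]]

model = Word2Vec(sentences, vector_size=50, window=2, min_count=1, epochs=50)

# Each word now maps to a dense vector; nearest neighbours in the embedding
# space approximate the word proximity the citing passage describes.
print(model.wv.most_similar("free", topn=3))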