2023
DOI: 10.1007/s11042-023-14689-3
Comparison of machine learning techniques for spam detection

Cited by 9 publications (7 citation statements)
References 107 publications
“…Recent work by Ghosh and Senthilrajan [25] classifies spam emails using machine learning classifiers and evaluates the performance of these classifiers. The authors implemented thirteen machine learning classifiers: the adaptive booster, artificial neural network, bootstrap aggregating, decision table, decision tree, J48, K-nearest neighbor, linear regression, logistic regression, naïve Bayes, random forest (RF), sequential minimal optimization, and SVM methods.…”
Section: Machine Learning Approaches
confidence: 99%
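The thirteen-classifier comparison described above can be sketched with scikit-learn. This is an illustrative setup on a synthetic binary "spam" task, not the authors' actual data or hyperparameters; the classifier selection covers a subset of the families named in the excerpt.

```python
# Hedged sketch: fit several of the cited classifier families on a
# synthetic binary classification task and compare test accuracy.
# Dataset, splits, and hyperparameters are placeholders.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier, RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

classifiers = {
    "AdaBoost": AdaBoostClassifier(random_state=0),
    "Bagging": BaggingClassifier(random_state=0),
    "DecisionTree": DecisionTreeClassifier(random_state=0),
    "kNN": KNeighborsClassifier(),
    "LogisticRegression": LogisticRegression(max_iter=1000),
    "NaiveBayes": GaussianNB(),
    "RandomForest": RandomForestClassifier(random_state=0),
    "SVM": SVC(),
}

results = {}
for name, clf in classifiers.items():
    clf.fit(X_tr, y_tr)
    results[name] = accuracy_score(y_te, clf.predict(X_te))

for name, acc in sorted(results.items(), key=lambda kv: -kv[1]):
    print(f"{name:20s} {acc:.3f}")
```

Each model is trained and scored identically, so the accuracy table gives a like-for-like comparison in the spirit of the cited evaluation.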
“…Ghosh and Senthilrajan (2023) used the Spambase data set (Mark Hopkins, 1999) to develop a framework for email evaluation. The research compares the performance of 13 different classifiers, including SVMs, using a set of only eight attributes.…”
Section: Related Work
confidence: 99%
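A minimal sketch of the setup described above: training an SVM on a reduced set of eight attributes. The Spambase data is stood in for by random numbers here, and the specific eight-column subset is hypothetical; the excerpt does not list which eight attributes the paper selected.

```python
# Sketch, under stated assumptions: an SVM evaluated on an
# eight-attribute subset of a Spambase-shaped feature matrix
# (57 numeric attributes, binary spam label). The data and the
# column choice are placeholders, not the paper's configuration.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_full = rng.random((500, 57))                        # stand-in for Spambase features
y = (X_full[:, :3].sum(axis=1) > 1.5).astype(int)     # synthetic binary label

selected = [0, 1, 2, 4, 6, 8, 10, 12]                 # hypothetical eight-attribute subset
X = X_full[:, selected]

model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(model, X, y, cv=5)
print(f"mean 5-fold accuracy: {scores.mean():.3f}")
```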
“…Finally, a new instance is predicted by combining the predictions of these trees (i.e., majority vote) [11]. The RF algorithm reduces the correlation among the trees because it randomly chooses variables at each node, which helps the classifier make efficient predictions [12]. The RF algorithm has many decision trees, which makes it a robust and efficient algorithm [13].…”
Section: Random Forest
confidence: 99%
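The two random-forest properties described above can be demonstrated in a short sketch: `max_features` restricts the variables considered at each split (which decorrelates the trees), and a hard majority vote over the individual trees reproduces the forest's prediction. Dataset and settings below are illustrative, not from the paper.

```python
# Sketch of the random-forest behaviour described in the excerpt.
# Note: scikit-learn's forest actually averages per-tree class
# probabilities, but with fully grown trees (one-hot leaf
# probabilities) this coincides with a hard majority vote.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=300, n_features=10, random_state=0)
forest = RandomForestClassifier(
    n_estimators=25,          # odd count: no voting ties in a binary task
    max_features="sqrt",      # random subset of variables at each node
    random_state=0,
).fit(X, y)

# Reproduce the forest's prediction for one sample by majority vote.
sample = X[:1]
tree_votes = np.array([tree.predict(sample)[0] for tree in forest.estimators_])
majority = int(np.bincount(tree_votes.astype(int)).argmax())
print(majority)
```

Comparing `majority` with `forest.predict(sample)` shows the ensemble prediction is just the aggregated vote of its decorrelated trees.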