2018 IEEE World Symposium on Communication Engineering (WSCE)
DOI: 10.1109/wsce.2018.8690536
Classification and Optimization Scheme for Text Data using Machine Learning Naïve Bayes Classifier

Cited by 26 publications (9 citation statements)
References 7 publications
“…For supervised classification, if we assume all categories follow independent multinomial distributions and each document is a sample generated by its category's distribution, a straightforward idea is to apply a linear model for classification, such as a Support Vector Machine [4,13], which finds the maximum-margin hyperplane dividing documents with different labels. Under these assumptions, another important method is Naive Bayes (NB) [7,15,26,31], which scores each document by its 'probability' conditioned on the categories. The NB classifier learns from training data to estimate the distribution of each category, then computes the conditional probability of each document given the class label by applying Bayes' rule.…”
Section: Introduction
confidence: 99%
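The NB procedure described in the quoted passage — estimate a per-class multinomial word distribution from training data, then score each document with Bayes' rule — can be sketched as follows. This is a minimal illustration; the toy documents, labels, and function names are assumptions for the example, not material from the paper or its dataset:

```python
from collections import Counter
import math

# Hypothetical toy corpus of (document, label) pairs.
train = [
    ("win cash prize now", "spam"),
    ("cheap prize offer", "spam"),
    ("meeting agenda for monday", "ham"),
    ("project meeting notes", "ham"),
]

def fit(train):
    """Estimate class priors and per-class multinomial word counts."""
    priors = Counter(label for _, label in train)
    word_counts = {label: Counter() for label in priors}
    for doc, label in train:
        word_counts[label].update(doc.split())
    vocab = {w for counts in word_counts.values() for w in counts}
    return priors, word_counts, vocab

def predict(doc, priors, word_counts, vocab):
    """Pick the class maximizing log P(c) + sum_w log P(w|c),
    with Laplace (add-one) smoothing to avoid zero probabilities."""
    total_docs = sum(priors.values())
    best_label, best_score = None, float("-inf")
    for label, prior in priors.items():
        denom = sum(word_counts[label].values()) + len(vocab)
        score = math.log(prior / total_docs)
        for w in doc.split():
            score += math.log((word_counts[label][w] + 1) / denom)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

priors, word_counts, vocab = fit(train)
print(predict("cash prize offer", priors, word_counts, vocab))       # spam
print(predict("monday project meeting", priors, word_counts, vocab))  # ham
```

The log-sum form is the standard way to evaluate the product of word probabilities without floating-point underflow on longer documents.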
“…The research in [8] showed that, in the classification of those three different datasets, the accuracy of the proposed method is higher than that of the classical Gaussian Naïve Bayes classifier. While explaining the drawbacks of Hadoop MapReduce for text classification, the authors in [7] argued that their proposed machine learning approach to classifying text data is less time-consuming than Hadoop MapReduce. Classification in Hadoop uses K-Means clustering, which requires a large amount of time and thereby increases latency [7]. Motivated by this, the authors in [7] proposed a machine learning method based on a Naïve Bayes classifier.…”
Section: Text Classification Using Machine Learning
confidence: 99%