2017
DOI: 10.1080/08839514.2018.1440907
Evaluation of Naive Bayes and Support Vector Machines for Wikipedia

Cited by 11 publications (4 citation statements)
References 6 publications
“…It is recommended that future studies explore the model using Deep Learning and test it on different datasets to determine its strengths and weaknesses. Furthermore, hybrid testing needs to be implemented on the algorithms used as demonstrated in previous studies such as SVM with Naïve Bayes [59], SVM with KNN [60], and others.…”
Section: Discussion
confidence: 99%
“…This result difference was expected again as SVM is considered as a more robust algorithm for classification and has been reported to perform better than algorithms such as NB model in various complex applications [40–43]. Improving the data quality was another approach which was considered in this article so as to improve the accuracy results of the classification. The MGV is the most important of all the morphological features as it was the best input data variable which can be used to classify the inclusions.…”
Section: Discussion
confidence: 99%
“…This result difference was expected again as SVM is considered as a more robust algorithm for classification and has been reported to perform better than algorithms such as NB model in various complex applications [40–43].…”
Section: Discussion
confidence: 99%
“…P(c|X) = P(X|c) · P(c) / P(X), where P(c) is the probability of class c, P(X) is the probability of the predictors X, P(X|c) is the probability of having features X given class c, and P(c|X) is the probability of an instance X belonging to class c given the value of its dependent variables [29]. d. Support vector machines: using a dataset of n features, a Support Vector Machine (SVM) attempts to find a decision boundary which maximises the margin between two observed classes [30]. This makes it a robust choice for binary classification.…”
Section: Text Classification
confidence: 99%
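The Bayes rule quoted above, P(c|X) ∝ P(X|c) · P(c), can be sketched as a minimal multinomial Naive Bayes text classifier. This is an illustrative toy, not code from the cited papers; the training data, labels, and function names are invented, and Laplace smoothing is one common choice for estimating P(w|c).

```python
import math
from collections import Counter, defaultdict

# Toy training set (tokens, class); all examples are invented for illustration.
train = [
    (["free", "win", "prize"], "spam"),
    (["win", "cash", "now"], "spam"),
    (["meeting", "tomorrow", "agenda"], "ham"),
    (["project", "meeting", "notes"], "ham"),
]

# Estimate P(c) and per-class word counts for P(w|c).
class_counts = Counter(c for _, c in train)
word_counts = defaultdict(Counter)
vocab = set()
for tokens, c in train:
    word_counts[c].update(tokens)
    vocab.update(tokens)

def posterior_scores(tokens):
    """Return log P(c) + sum_w log P(w|c) for each class c.

    By Bayes' rule, P(c|X) is proportional to P(X|c) * P(c); the
    evidence P(X) is the same for every class, so comparing these
    scores is enough to rank the classes.
    """
    scores = {}
    for c in class_counts:
        log_p = math.log(class_counts[c] / len(train))  # log P(c)
        total = sum(word_counts[c].values())
        for w in tokens:
            # Add-one (Laplace) smoothing avoids zero P(w|c) for unseen words.
            log_p += math.log((word_counts[c][w] + 1) / (total + len(vocab)))
        scores[c] = log_p
    return scores

def classify(tokens):
    scores = posterior_scores(tokens)
    return max(scores, key=scores.get)
```

For example, `classify(["win", "prize"])` picks the class whose prior times smoothed likelihoods is largest, which for this toy data is "spam".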