2020
DOI: 10.21123/bsj.2020.17.4.1255

A Modified Support Vector Machine Classifiers Using Stochastic Gradient Descent with Application to Leukemia Cancer Type Dataset

Abstract: Support vector machines (SVMs) are supervised learning models that analyze data for classification or regression. For classification, SVM is widely used by selecting an optimal hyperplane that separates two classes. SVM has very good accuracy and is extremely robust compared with some other classification methods such as logistic regression, random forest, k-nearest neighbor, and the naïve model. However, working with large datasets can cause many problems, such as long training times and inefficient results. In t…
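
The approach the abstract describes, a linear SVM fit by stochastic gradient descent, can be sketched with scikit-learn's SGDClassifier, which minimizes the hinge loss one sample at a time. This is a minimal illustration on synthetic data, not the authors' implementation and not the leukemia dataset:

# Minimal sketch (assumed setup, synthetic data): a linear SVM trained with
# SGD. Hinge loss plus an L2 penalty recovers the soft-margin linear SVM.
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a two-class dataset such as the leukemia data.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

clf = SGDClassifier(loss="hinge", penalty="l2", alpha=1e-4, max_iter=1000)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))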

Cited by 10 publications (7 citation statements). References 15 publications.
“…The data are mapped to a high-dimensional space in which a separating hyperplane can be found in situations where linear separability cannot be achieved. The mapping is performed using a method known as the kernel function [27,28,29]. 80% of the data was used for training and 20% for testing.…”
Section: Methods (mentioning, confidence: 99%)
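
The kernel idea in the excerpt above can be sketched in a few lines of Python. This is an illustrative example, not the citing paper's code: the dataset is synthetic and the RBF kernel is an assumed choice (the excerpt does not name a specific kernel), with the same 80%/20% split:

# Two classes that are not linearly separable in the input space; the kernel
# function maps them implicitly to a space where a separating hyperplane exists.
from sklearn.datasets import make_circles
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_circles(n_samples=500, noise=0.05, factor=0.5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0  # 80% train / 20% test
)

clf = SVC(kernel="rbf", gamma="scale")  # kernel does the implicit mapping
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))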
“…GD-SVM is faster than QP, but it is still slow because computing ∇J(β_j) takes O(p) time, where p is the size of the training dataset. If p is large, GD-SVM is slow [18,19]. In Stochastic Gradient Descent (SGD), the value of the objective function is improved at each step.…”
Section: Enhanced Stochastic Gradient Descent SVM (mentioning, confidence: 99%)
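
The complexity argument can be made concrete with a Pegasos-style sketch (assumed notation: p training points in X, labels y in {-1, +1}, regularization lam). Full-batch gradient descent must touch all p points to compute the gradient, while each SGD step below uses a single sampled point, so its per-step cost does not grow with p:

import numpy as np

def sgd_svm(X, y, lam=0.01, epochs=20, seed=0):
    """Pegasos-style SGD for a linear SVM. y must be in {-1, +1}."""
    rng = np.random.default_rng(seed)
    p, d = X.shape
    beta = np.zeros(d)
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(p):
            t += 1
            eta = 1.0 / (lam * t)            # decaying step size
            grad = lam * beta                # gradient of (lam/2)*||beta||^2
            if y[i] * X[i].dot(beta) < 1:    # hinge is active for this sample
                grad -= y[i] * X[i]          # subgradient of the hinge term
            beta -= eta * grad               # O(d) work, independent of p
    return beta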
“…In many cases, it is challenging to sample from a target posterior density; MCMC is then used [9]. There are three popular MCMC sampling techniques: Metropolis-Hastings, slice sampling [10], and Gibbs sampling [11]. MCMC methods are derived from Monte Carlo (MC) methods [12].…”
Section: Markov Chain Monte Carlo (mentioning, confidence: 99%)
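
Of the three samplers named in the excerpt, Metropolis-Hastings is the simplest to sketch. The following illustrative implementation (the target density and proposal scale are arbitrary choices, not from the cited work) draws from a target whose density is known only up to a constant:

import numpy as np

def metropolis_hastings(log_target, x0=0.0, n_samples=10000, step=1.0, seed=0):
    """Random-walk Metropolis-Hastings with a symmetric Gaussian proposal."""
    rng = np.random.default_rng(seed)
    samples = np.empty(n_samples)
    x = x0
    for i in range(n_samples):
        proposal = x + rng.normal(scale=step)  # symmetric random-walk proposal
        # Accept with probability min(1, target(proposal) / target(x)).
        if np.log(rng.uniform()) < log_target(proposal) - log_target(x):
            x = proposal
        samples[i] = x
    return samples

# Example: sample from a standard normal, log-density known up to a constant.
draws = metropolis_hastings(lambda z: -0.5 * z ** 2)
print(draws.mean(), draws.std())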