An imprecise extension of SVM-based machine learning models
2019 · DOI: 10.1016/j.neucom.2018.11.053

Cited by 58 publications (26 citation statements)
References 31 publications
“…The reason may be that RF has advantages over SVM when dealing with unbalanced data. The repeated random sub-sampling in RF has been found to be very effective in dealing with an imbalanced dataset [54] whereas SVM assumes that the class distribution in the dataset is uniform [55].…”
Section: Results Of Experiments I (mentioning; confidence: 99%)
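The contrast drawn in this statement can be sketched with scikit-learn: a random forest whose class weights are rebalanced within each bootstrap sample (one way to approximate the repeated random sub-sampling idea of [54]) versus a default SVM that applies no class weighting. The dataset, imbalance ratio, and all hyperparameters below are illustrative assumptions, not the cited papers' setup.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Roughly 9:1 imbalanced binary problem (illustrative).
X, y = make_classification(n_samples=1000, n_features=20,
                           weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# 'balanced_subsample' reweights classes inside each bootstrap sample,
# a stand-in here for the repeated random sub-sampling described in [54].
rf = RandomForestClassifier(class_weight="balanced_subsample",
                            random_state=0).fit(X_tr, y_tr)
# Default SVC applies no class weighting, i.e. it behaves as if the
# class distribution were uniform.
svm = SVC().fit(X_tr, y_tr)

rf_score = balanced_accuracy_score(y_te, rf.predict(X_te))
svm_score = balanced_accuracy_score(y_te, svm.predict(X_te))
print(f"RF balanced accuracy:  {rf_score:.3f}")
print(f"SVM balanced accuracy: {svm_score:.3f}")
```

Balanced accuracy is used instead of plain accuracy because, on a 9:1 split, always predicting the majority class already scores 0.9 on plain accuracy.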
“…On the other hand, the Random Forest is enhanced using the back-propagation approach to remove the unwanted trees and improve the final decision. The performance of the present study is compared with top classification algorithms: Support Vector Machine (SVM) [18], k-Nearest Neighbors (KNN) [12], Naïve Bayes (NB) [11], and Artificial Neural Network (ANN) [19].…”
Section: Proposed Methods (mentioning; confidence: 99%)
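A comparison across the four baseline families named in this statement can be sketched as a small benchmark loop. This is an illustrative harness on a stock dataset with default hyperparameters, not the cited study's actual pipeline or data; the MLP serves as the ANN baseline.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
models = {
    "SVM": SVC(),
    "KNN": KNeighborsClassifier(),
    "NB": GaussianNB(),
    "ANN": MLPClassifier(max_iter=1000, random_state=0),
}
scores = {}
for name, model in models.items():
    # Standardize features first: SVM, KNN, and the MLP are scale-sensitive.
    pipe = make_pipeline(StandardScaler(), model)
    scores[name] = cross_val_score(pipe, X, y, cv=5).mean()
    print(f"{name}: {scores[name]:.3f}")
```

Wrapping each model in the same scaling pipeline keeps the comparison fair, since only the classifier varies across runs.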
“…A portion of the feature table created for training is shown in Table 2, where the class column can be seen. The support vector machine (SVM) is a supervised learning algorithm applied in countless fields to solve classification and regression problems [29]. Furthermore, following the literature [22,24,26], it has proven to be an effective technique to classify the surface roughness using higher dimensional data.…”
Section: Data Preparation And Model Training (mentioning; confidence: 99%)
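The workflow this statement describes (a feature table with a class column, split and fed to an SVM) can be sketched minimally as follows. The synthetic features and labels below are hypothetical stand-ins for the roughness features of the cited study's Table 2, which is not reproduced here.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 300
features = rng.normal(size=(n, 8))  # hypothetical feature columns
# Hypothetical class column derived from the features.
labels = (features[:, 0] + 0.5 * features[:, 1] > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(features, labels, random_state=0)
# RBF-kernel SVM handles the higher-dimensional feature space mentioned above.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(f"test accuracy: {acc:.3f}")
```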