2023
DOI: 10.1016/j.cor.2022.106131
Comparing two SVM models through different metrics based on the confusion matrix

Cited by 71 publications (34 citation statements) | References 38 publications
“…Through a comparison of the predicted and true classes, it shows the number of correct and incorrect predictions for each class. This information can be used to make decisions and to optimize our algorithms [79]. These results were comparable to the testing accuracy of 99% reported by Albarrak [80] for date fruit classification.…”
Section: CNN Performances (supporting)
confidence: 78%
“…In the confusion matrix of a multi-class classification model, TP denotes true positives, i.e., the number of samples in which the model correctly classifies positive cases as positive; TN denotes true negatives, i.e., the number of samples in which the model correctly classifies negative cases as negative; FP denotes false positives, i.e., the number of samples in which the model incorrectly classifies negative cases as positive; FN denotes false negatives, i.e., the number of samples in which the model incorrectly classifies positive cases as negative. The evaluation indicators are calculated as follows [24]…”
Section: Evaluation Index Of Adversarial Domain Adaptation For Grindi... (mentioning)
confidence: 99%
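The formulas themselves were cut off in this excerpt. A minimal sketch of the standard indicators built from these four counts is given below; whether reference [24] uses exactly this set (accuracy, precision, recall, F1) is an assumption.

def evaluation_indicators(tp, tn, fp, fn):
    # Standard confusion-matrix indicators; assumed to match the elided formulas.
    accuracy = (tp + tn) / (tp + tn + fp + fn)   # share of all samples classified correctly
    precision = tp / (tp + fp)                   # share of predicted positives that are correct
    recall = tp / (tp + fn)                      # share of actual positives that are recovered
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# Example with assumed counts:
print(evaluation_indicators(tp=90, tn=85, fp=15, fn=10))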
“…Then, the data set is divided into three different subsets: a training set, a validation set, and a test set.
- Selection of feature subset and ML algorithm: the minimum redundancy maximum relevance (MRMR) algorithm is used to rank all candidate features, and the optimal feature subset and ML algorithm are determined.
- Model optimization: the hyper-parameters of the selected optimal ML algorithm (the k-nearest neighbor algorithm, k-NN) are further tuned using the Bayesian optimization algorithm (BOA) with a cost-sensitive learning approach.
- Model evaluation: the model's performance is comprehensively evaluated using confusion matrices.
- Model interpretation: the local interpretable model-agnostic explanations (LIME) and partial dependence plots (PDP) methods are used to interpret the model and to understand the effect of different features on predictions.…”
Section: Developing An Interpretable Optimal k-NN Model To Classify T... (mentioning)
confidence: 99%
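A minimal sketch of a pipeline in the spirit of the steps above, with substitutions named plainly: mutual-information ranking stands in for MRMR, scikit-optimize's BayesSearchCV is used as one Bayesian optimization implementation, the cost-sensitive weighting and the LIME/PDP interpretation steps are omitted, and the synthetic data is an assumption.

from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import Pipeline
from skopt import BayesSearchCV  # scikit-optimize

# Synthetic data standing in for the study's data set.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

pipe = Pipeline([
    ("select", SelectKBest(mutual_info_classif)),  # stand-in for MRMR feature ranking
    ("knn", KNeighborsClassifier()),
])

# Bayesian search over the feature-subset size and the number of neighbors k.
search = BayesSearchCV(
    pipe,
    {"select__k": (5, 20), "knn__n_neighbors": (1, 30)},
    n_iter=20, cv=5, random_state=0,
)
search.fit(X_train, y_train)
print(search.best_params_, search.score(X_test, y_test))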
“…For the sake of further explanation, Figures – present the confusion matrices of all sets used for training, validation, and testing. In each matrix, the two green squares show correctly classified samples, while the two red squares mark misclassified samples.…”
Section: Developing An Interpretable Optimal k-NN Model To Classify T... (mentioning)
confidence: 99%
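Matrices like these, one per data split, can be rendered with scikit-learn's ConfusionMatrixDisplay as a minimal sketch; the toy labels are assumptions, and the green/red quadrant coloring is the cited authors' own figure styling rather than a library default.

import matplotlib.pyplot as plt
from sklearn.metrics import ConfusionMatrixDisplay

# One confusion matrix per data split, with toy labels assumed for illustration.
splits = {
    "training":   ([0, 1, 1, 0, 1, 0], [0, 1, 1, 0, 1, 0]),
    "validation": ([0, 1, 0, 1, 1, 0], [0, 1, 0, 0, 1, 0]),
    "test":       ([1, 0, 1, 0, 1, 1], [1, 0, 0, 0, 1, 1]),
}
for name, (y_true, y_pred) in splits.items():
    disp = ConfusionMatrixDisplay.from_predictions(y_true, y_pred)
    disp.ax_.set_title(f"Confusion matrix ({name} set)")
plt.show()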