2018 IEEE 28th International Workshop on Machine Learning for Signal Processing (MLSP) 2018
DOI: 10.1109/mlsp.2018.8516976
Optimal Classifier Model Status Selection Using Bayes Boundary Uncertainty

Cited by 4 publications (2 citation statements)
References 4 publications
“…Experimental results for the method showed its fundamental utility, but these results were not so helpful due to the insufficient implementation of procedures such as finding near-boundary samples, which are defined below, and calculating the entropy. We also applied our method to a task of optimally setting the width of a Gaussian kernel for Support Vector Machine (SVM), while making some improvements to the implementation [20,21]. The experimental results [21] showed its utility more clearly than did the previous experiment [19], but there remained issues to be improved.…”
Section: Introduction
“…We also applied our method to a task of optimally setting the width of a Gaussian kernel for Support Vector Machine (SVM), while making some improvements to the implementation [20,21]. The experimental results [21] showed its utility more clearly than did the previous experiment [19], but there remained issues to be improved. For example, the improved implementation provided suboptimal classifier statuses for some difficult datasets; furthermore, even if the concept basically worked well, it was somewhat heuristically implemented, and thus its theoretical validity was not sufficiently supported.…”
Section: Introduction
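The statements above describe applying the paper's Bayes-boundary-uncertainty criterion to select the Gaussian kernel width for an SVM; the criterion itself is not reproduced on this page. For orientation, the conventional baseline such a method aims to replace is picking the width by validation error on held-out predictions. The following is a minimal sketch of that baseline only, using hypothetical 1-D toy data and a simple leave-one-out kernel-posterior classifier as a stand-in for an SVM, not the authors' procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-class 1-D data: overlapping Gaussians (not the paper's datasets).
X = np.concatenate([rng.normal(-1.0, 1.0, 200), rng.normal(+1.0, 1.0, 200)])
y = np.concatenate([np.zeros(200), np.ones(200)])

def loo_error(X, y, sigma):
    """Leave-one-out error of a kernel-posterior classifier with width sigma.

    P(class 1 | x_i) is estimated from all other samples via Gaussian weights;
    x_i is classified as class 1 when that posterior exceeds 0.5.
    """
    W = np.exp(-0.5 * ((X[:, None] - X[None, :]) / sigma) ** 2)
    np.fill_diagonal(W, 0.0)                      # leave-one-out: drop self-weight
    p = W @ (y == 1) / (W.sum(axis=1) + 1e-300)   # tiny guard against underflow
    return float(np.mean((p > 0.5) != (y == 1)))

# Sweep candidate widths and keep the one with the lowest held-out error --
# the validation-based selection that boundary-uncertainty methods try to avoid.
sigmas = [0.05, 0.1, 0.2, 0.5, 1.0, 2.0]
errors = {s: loo_error(X, y, s) for s in sigmas}
best = min(sigmas, key=lambda s: errors[s])
print(f"best sigma={best}, LOO error={errors[best]:.3f}")
```

With well-separated class means as above, very small widths overfit (noisy posteriors) and very large widths oversmooth, so an intermediate width typically wins; the cited method instead seeks the width whose decision boundary best matches the Bayes boundary without relying on a validation split.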