2019
DOI: 10.1007/s11265-019-01451-y
A Practical Method Based on Bayes Boundary-Ness for Optimal Classifier Parameter Status Selection

Abstract: We propose a novel practical method for finding the optimal classifier parameter status corresponding to the Bayes error (minimum classification error probability) through the evaluation of estimated class boundaries from the perspective of Bayes boundary-ness. While traditional methods approach classifier optimality from the angle of minimization of the estimated classification error probabilities, we approach it from the angle of optimality of the estimated classification boundaries. The optimal classificati…

Cited by 6 publications (12 citation statements) · References 15 publications
“…The empirical formalization of boundary uncertainty used in Proposal 1 actually left some unclear items, such as the possible confusion of whether to use {i * (x), j * (x)} (top given class indexes) or {i(x; Λ), j (x; Λ)} (top predicted class indexes). Such confusion was especially possible in the several intricate branching treatments that appeared in Proposal 1, as well as in the treatment of multiclass data [18]. In contrast, this paper introduces a more complete formalization of boundary uncertainty (Section 4.3).…”
Section: Overview Of Proposal
confidence: 98%
“…Then, given a classifier status Λ, Proposal 1 empirically defined a classifier evaluation metric, estimated from the training set [18] and termed "boundary uncertainty", whose optimization corresponds to achieving the minimum error probability (Bayes error). We describe this matter in more detail in later sections.…”
Section: Boundary Uncertainty
confidence: 99%
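The cited passages do not reproduce the paper's exact formula for boundary uncertainty, but the underlying idea of Bayes boundary-ness can be sketched: on the true Bayes boundary, the two largest class posteriors coincide, so an estimated boundary is "Bayes-like" when the top-two predicted posteriors at its points are nearly equal. The snippet below is a hypothetical illustration of that principle (the function name `boundary_uncertainty` and the scoring convention are assumptions, not the authors' definition):

```python
import numpy as np

def boundary_uncertainty(posteriors, boundary_mask):
    """Illustrative sketch of Bayes boundary-ness: for samples flagged as
    lying near the estimated class boundary, measure how close the two
    largest predicted posteriors are. On the true Bayes boundary the
    top-two posteriors are equal, so a mean gap near zero indicates a
    boundary close to the Bayes boundary."""
    # sort each row's posteriors in descending order
    p = np.sort(posteriors[boundary_mask], axis=1)[:, ::-1]
    gaps = p[:, 0] - p[:, 1]      # top-1 minus top-2 posterior per sample
    return 1.0 - gaps.mean()      # higher score = more Bayes-boundary-like

# toy example: 4 samples, 3 classes; three samples flagged as near-boundary
post = np.array([[0.50, 0.48, 0.02],
                 [0.49, 0.49, 0.02],
                 [0.90, 0.08, 0.02],
                 [0.51, 0.47, 0.02]])
near_boundary = np.array([True, True, False, True])
score = boundary_uncertainty(post, near_boundary)
```

In this toy case the three near-boundary samples have posterior gaps of 0.02, 0.00, and 0.04, giving a score of 0.98; a classifier whose boundary passed through high-confidence regions would score much lower.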