2016
DOI: 10.1016/j.jcss.2016.04.003
Multi-category classifiers and sample width

Abstract: In a recent paper, the authors introduced the notion of sample width for binary classifiers defined on the set of real numbers. It was shown that the performance of such classifiers could be quantified in terms of this sample width. This paper considers how to adapt the idea of sample width so that it can be applied in cases where the classifiers are multi-category and are defined on some arbitrary metric space.

Cited by 4 publications (9 citation statements) · References 17 publications
“…Margin-based results apply when the classifiers are derived from real-valued functions by 'thresholding' (taking their sign). A more direct approach, which does not require real-valued functions as a basis for a classification margin, uses the concept of width, introduced in [3] and studied in various settings in [4][5][6][7][8][9][10][11][12].…”

Section: Probabilistic Modeling of Learning
confidence: 99%
“…, where S− and S+ are any disjoint subsets of the input space that are labeled −1 and 1, respectively. (In [7][8][9], a slightly different definition of width was used, in which the union of the disjoint sets S− and S+ equals the input space.)…”

Section: Width and Error of a Classifier
confidence: 99%
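The quoted definition works with disjoint subsets S− and S+ of the input space labeled −1 and +1. A minimal illustrative sketch of the underlying idea, assuming the width of a sample point is taken as its distance to the nearest oppositely-labeled point (the function name and toy data are hypothetical, not from the paper):

```python
# Sketch: "width" of a binary classifier at a labeled point, interpreted
# here as the distance to the nearest point carrying the opposite label.
# S_minus and S_plus are disjoint subsets of the input space labeled -1
# and +1, as in the quoted definition.

def width(x, x_label, S_minus, S_plus, dist):
    """Distance from x to the set of points with the opposite label."""
    opposite = S_plus if x_label == -1 else S_minus
    return min(dist(x, y) for y in opposite)

# Toy example on the real line with the usual metric.
d = lambda a, b: abs(a - b)
S_minus = [0.0, 1.0]   # points labeled -1
S_plus = [3.0, 5.0]    # points labeled +1

print(width(1.0, -1, S_minus, S_plus, d))  # 2.0: nearest +1 point is 3.0
```

A larger width at a sample point plays a role analogous to a larger classification margin: the point sits farther from the region carrying the opposite label.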
“…In [3], we obtained generalization error bounds for learning binary classifiers on a finite metric space X using the class of all binary functions on X; and [6] obtained error bounds for multi-category classification on infinite metric spaces. In both papers, the bounds involved the covering number of the metric space, which in general is unknown or not easy to compute, though it can be approximated numerically.…”

Section: Introduction
confidence: 99%
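The covering number mentioned above is the smallest number of ε-balls needed to cover the space. A simple hedged sketch of how it can be approximated numerically on a finite metric space, via a greedy cover (the function name and toy data are illustrative, not the paper's method):

```python
# Sketch: greedy epsilon-cover of a finite metric space. The number of
# centers returned is an upper bound on the covering number at scale eps,
# since every point ends up within eps of some chosen center.

def greedy_cover(points, dist, eps):
    """Return centers such that every point lies within eps of a center."""
    centers = []
    for p in points:
        if all(dist(p, c) > eps for c in centers):
            centers.append(p)
    return centers

pts = [0.0, 0.4, 1.0, 1.1, 2.5]
cover = greedy_cover(pts, lambda a, b: abs(a - b), 0.5)
print(len(cover))  # 3 centers suffice at this scale
```

Because the chosen centers are pairwise more than ε apart, the same set also witnesses a packing, so the greedy count brackets both the covering and packing behavior of the space at scale ε.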
“…The learning error bounds involve covering numbers of the metric space. In [6,4], multi-category classification over metric spaces is considered, and the error bounds also involve the covering number of the metric space. Other work on learning over metric spaces includes [11] (see also the references therein), which considers learning nearest-neighbor classifiers in semimetric spaces using compression schemes, with bounds on packing numbers that are exponential in the density-dimension of the space.…”

Section: Introduction
confidence: 99%