1994
DOI: 10.1080/10556789408805554

Multicategory discrimination via linear programming

Abstract: A single linear program is proposed for discriminating between the elements of k disjoint point sets in the n-dimensional real space Rn. When the conical hulls of the k sets are (k-1)-point disjoint in Rn+1, a k-piece piecewise-linear surface generated by the linear program completely separates the k sets. This improves on a previous linear programming approach which required that each set be linearly separable from the remaining k-1 sets. When the conical hulls of the k sets are not (k-1)-point disjoint, the …
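The abstract's construction can be sketched as a single LP: one weight vector w_i and offset b_i per class, and a nonnegative slack for every pairwise margin constraint, so that a point is assigned to the class maximizing w_i.x + b_i. The sketch below is a simplified illustration using scipy; the toy data, the variable layout, and the plain sum-of-slacks objective are assumptions (the paper's formulation averages errors over set cardinalities), not the authors' exact program:

```python
import numpy as np
from scipy.optimize import linprog

# Toy data: three well-separated classes in R^2 (hypothetical example).
X = np.array([[0.0, 0.0], [0.2, 0.1], [2.0, 0.0],
              [2.1, 0.2], [1.0, 2.0], [1.1, 2.2]])
y = np.array([0, 0, 1, 1, 2, 2])
k, n, m = 3, X.shape[1], X.shape[0]

# One margin constraint per (point, wrong class) pair:
# (w_i - w_j).x + (b_i - b_j) >= 1 - slack,  i = true class of x.
pairs = [(p, y[p], j) for p in range(m) for j in range(k) if j != y[p]]
n_vars = k * n + k + len(pairs)          # weights, offsets, slacks

A_ub, b_ub = [], []
for s_idx, (p, i, j) in enumerate(pairs):
    row = np.zeros(n_vars)
    row[i * n:(i + 1) * n] = -X[p]       # -(w_i . x)
    row[j * n:(j + 1) * n] = X[p]        # +(w_j . x)
    row[k * n + i] = -1.0                # -b_i
    row[k * n + j] = 1.0                 # +b_j
    row[k * n + k + s_idx] = -1.0        # -slack
    A_ub.append(row)
    b_ub.append(-1.0)                    # rewritten as A_ub @ v <= b_ub

c = np.zeros(n_vars)
c[k * n + k:] = 1.0                      # minimise total slack
bounds = [(None, None)] * (k * n + k) + [(0, None)] * len(pairs)
res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub, bounds=bounds)

W = res.x[:k * n].reshape(k, n)
b = res.x[k * n:k * n + k]
pred = np.argmax(X @ W.T + b, axis=1)    # k-piece piecewise-linear rule
```

When the sets are separable in the paper's sense, all slacks vanish and the resulting argmax rule classifies every training point correctly; this is one LP over all k classes at once, rather than k separate two-class programs.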

Cited by 139 publications (101 citation statements)
References 9 publications
“…We comment that the original bound in [192] contained an extra factor of log N multiplying VCdim(F) in (15).…”
Section: Theorem 2 ([192])
confidence: 99%
“…In this section we present a brief survey of several extensions and generalizations, although many others exist, e.g. [76,158,62,18,31,3,155,15,44,52,82,164,66,38,137,147,16,80,185,100,14].…”
Section: Extensions
confidence: 99%
“…Perceptron Decision Trees (PDT) have been introduced by a number of authors under different names (Mangasarian et al., 1990; Bennett & Mangasarian, 1992, 1994a, 1994b; Breiman et al., 1984; Brodley & Utgoff, 1995; Utgoff, 1989; Murthy, Kasif, & Salzberg, 1994). They are decision trees in which each internal node is associated with a hyperplane in general position in the input space.…”
Section: Introduction
confidence: 99%
“…They are decision trees in which each internal node is associated with a hyperplane in general position in the input space. They have been used in many real-world pattern classification tasks with good results (Bennett & Mangasarian, 1994a; Murthy, Kasif, & Salzberg, 1994; Bennett, Wu, & Auslender, 1998). Given their high flexibility, a feature that they share with more standard decision trees such as the ones produced by C4.5 (Quinlan, 1993), they tend to overfit the data if their complexity is not somehow kept under control.…”
Section: Introduction
confidence: 99%
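The Perceptron Decision Trees described in the snippets above route a point through internal nodes by testing which side of a hyperplane it falls on. A minimal sketch of that structure (the class name, fields, and hand-built tree are illustrative assumptions, not the cited authors' code):

```python
import numpy as np

class PDTNode:
    """One node of a Perceptron Decision Tree (illustrative sketch)."""
    def __init__(self, w=None, b=0.0, left=None, right=None, label=None):
        self.w, self.b = w, b            # hyperplane test w.x + b at this node
        self.left, self.right = left, right
        self.label = label               # class label, set only at leaves

    def predict(self, x):
        if self.label is not None:       # leaf: emit its class
            return self.label
        # internal node: branch on which side of the hyperplane x lies
        child = self.right if np.dot(self.w, x) + self.b > 0 else self.left
        return child.predict(x)

# Tiny hand-built tree splitting R^2 on the hyperplane x0 + x1 - 1 = 0.
tree = PDTNode(w=np.array([1.0, 1.0]), b=-1.0,
               left=PDTNode(label=0), right=PDTNode(label=1))
```

Because each internal test is a hyperplane in general position rather than a single-feature threshold, such trees are strictly more expressive than axis-parallel trees like those of C4.5, which is exactly why the snippet notes they overfit without complexity control.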