1992
DOI: 10.1080/10556789208805504

Robust linear programming discrimination of two linearly inseparable sets

Cited by 637 publications (307 citation statements). References 2 publications.
“…We investigated methods that range from conventional decision trees,15 Fisher's Linear Discriminant16 and Nearest Neighbor methods17 to advanced learning techniques such as SVMs5,7,8 and linear programming machines.18,19 Contrary to Byvatov et al.,12 we base our study solely on a Ghose-Crippen parametrization of the chemicals.13 However, before feeding them into any of the learning machines, we preprocess the GC descriptors by adding one and then computing the log.…”
Section: Methods (mentioning)
confidence: 99%
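The preprocessing step quoted above is a shift-and-log transform of the descriptor counts. A minimal sketch in Python (the function name `preprocess_gc` and the array layout are illustrative assumptions, not taken from the cited work):

```python
import numpy as np

def preprocess_gc(X):
    """Hypothetical sketch: add one to every Ghose-Crippen descriptor count,
    then take the natural logarithm, as described in the excerpt."""
    X = np.asarray(X, dtype=float)  # (n_samples, n_descriptors) count matrix
    return np.log1p(X)              # log(1 + x), stable for zero counts
```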
“…18,19 To allow nonlinear classifications, one may nonlinearly map the examples into another feature space. In our experiments we used quadratic features (all first- and second-order monomials of the 120 input features).…”
Section: Methods (mentioning)
confidence: 99%
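The quadratic feature map named in the excerpt can be made concrete. The sketch below (function name and layout are assumptions, not from the cited work) enumerates all first- and second-order monomials of one input vector; for 120 inputs this gives 120 + 120·121/2 = 7380 features.

```python
import numpy as np
from itertools import combinations_with_replacement

def quadratic_features(x):
    """Hypothetical sketch: map one example to all first- and
    second-order monomials of its input features."""
    x = np.asarray(x, dtype=float)
    second_order = np.array([x[i] * x[j]
                             for i, j in combinations_with_replacement(range(x.size), 2)])
    return np.concatenate([x, second_order])
```

An equivalent expansion can also be obtained with scikit-learn's `PolynomialFeatures(degree=2, include_bias=False)`.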
“…[167] and [27] indicated that AdaBoost computes hypotheses with large margins if one continues iterating after reaching zero classification error. It is clear that the margin should be as large as possible on most training examples in order to minimize the complexity term in (17). If one assumes that the base learner always achieves a weighted training error ε_t ≤ 1/2 − γ/2 with γ > 0, then AdaBoost generates a hypothesis with margin larger than γ/2 [167,27].…”
Section: Boosting and Large Margins (mentioning)
confidence: 99%
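The margin in question is the normalized voting margin of the combined hypothesis. The sketch below computes it for a fixed ensemble; it is an illustration under that reading, not code from the cited works. The excerpt's claim is that, under the stated weak-learning assumption, this quantity exceeds γ/2 on the training examples.

```python
import numpy as np

def ensemble_margins(alphas, predictions, y):
    """Normalized margins of a weighted voting ensemble (illustrative):
    margin(x_i) = y_i * sum_t alpha_t * h_t(x_i) / sum_t |alpha_t|, in [-1, 1].

    alphas:      (T,)   hypothesis weights
    predictions: (T, n) base-hypothesis outputs in {-1, +1}
    y:           (n,)   labels in {-1, +1}
    """
    alphas = np.asarray(alphas, dtype=float)
    f = alphas @ np.asarray(predictions, dtype=float)  # weighted vote per example
    return np.asarray(y, dtype=float) * f / np.abs(alphas).sum()
```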
“…First we describe the DOOM approach [125], which uses a non-convex, monotone upper bound on the training error motivated by the margin bounds. Then we discuss a linear program (LP) implementing a soft margin [17,153] and outline algorithms to iteratively solve the linear programs [154,48,146].…”
(mentioning)
confidence: 99%
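For context, the soft-margin LP referenced here is close in spirit to the robust LP discrimination problem of this paper. The sketch below is a hypothetical formulation, solved with SciPy's `linprog`, that minimizes the average constraint violation per class; it follows the robust-LP form rather than claiming to reproduce the exact LP of [17,153].

```python
import numpy as np
from scipy.optimize import linprog

def rlp_discriminate(A, B):
    """Hypothetical sketch of a robust-LP-style soft-margin classifier:
        minimize (1/m) sum(y) + (1/k) sum(z)
        s.t.  a_i . w - gamma + y_i >= 1   for rows a_i of A
             -b_j . w + gamma + z_j >= 1   for rows b_j of B
              y, z >= 0
    Variable vector: [w (d), gamma (1), y (m), z (k)]."""
    A, B = np.asarray(A, dtype=float), np.asarray(B, dtype=float)
    m, d = A.shape
    k = B.shape[0]

    # Objective: average slack per class; w and gamma carry zero cost.
    c = np.concatenate([np.zeros(d + 1), np.full(m, 1.0 / m), np.full(k, 1.0 / k)])

    # Inequalities rewritten as G x <= h for linprog.
    G = np.zeros((m + k, d + 1 + m + k))
    G[:m, :d] = -A
    G[:m, d] = 1.0
    G[:m, d + 1:d + 1 + m] = -np.eye(m)
    G[m:, :d] = B
    G[m:, d] = -1.0
    G[m:, d + 1 + m:] = -np.eye(k)
    h = -np.ones(m + k)

    bounds = [(None, None)] * (d + 1) + [(0.0, None)] * (m + k)  # w, gamma free; slacks >= 0
    res = linprog(c, A_ub=G, b_ub=h, bounds=bounds, method="highs")
    w, gamma = res.x[:d], res.x[d]
    return w, gamma  # classify a new point x via the sign of w @ x - gamma
```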