1971
DOI: 10.1016/0022-247x(71)90184-3

Some results on Tchebycheffian spline functions

Cited by 1,107 publications (748 citation statements)
References 7 publications
“…Additional constraints are necessary to define the "optimal" hyperplane, typically in the form of maximal margin constraints maximizing the closest distance between the training points and the hyperplane. Under these assumptions (see references for details), the representer theorem [11,12] states that solving for the optimal hyperplane leads to a convex quadratic optimization problem, and that the solution vector w is a linear combination of a subset of the training vectors, the support vectors: w = Σ_{i=1}^n α_i x_i for some α_i ∈ ℝ, i = 1, ..., n. Thus f can be rewritten as f(x) = ⟨w, x⟩ = Σ_{i=1}^n α_i ⟨x_i, x⟩. As a side note, it is even possible to write the coefficients α_i in the stronger form α_i = λ_i y_i with λ_i ≥ 0. If the problem is not exactly linearly separable, there is a standard convex generalization of this approach using slack variables that allows some of the classification constraints to be violated.…”
Section: Kernel Methods
mentioning
confidence: 99%
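
The quoted passage states the representer property of the maximum-margin hyperplane without showing it in action. Below is a minimal sketch (my own illustration, not the cited paper's method): a Pegasos-style subgradient solver for a linear soft-margin SVM on assumed toy data that tracks the coefficients α_i alongside w, so the claimed expansion w = Σ_i α_i x_i, with α_i carrying the sign of y_i, can be checked numerically. All names, the data, and the hyperparameters are assumptions for illustration.

```python
# Hypothetical sketch (not from the cited paper): a Pegasos-style subgradient
# solver for a linear soft-margin SVM that tracks, alongside w, the coefficients
# alpha_i of the expansion w = sum_i alpha_i * x_i promised by the representer
# theorem.  The toy data and all hyperparameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two Gaussian blobs with labels +/-1.
n = 100
X = np.vstack([rng.normal(+1.5, 1.0, size=(n // 2, 2)),
               rng.normal(-1.5, 1.0, size=(n // 2, 2))])
y = np.hstack([np.ones(n // 2), -np.ones(n // 2)])

lam = 0.1          # regularization strength
T = 5000           # number of subgradient steps

w = np.zeros(2)
alpha = np.zeros(n)   # coefficients of w in the span of the training points

for t in range(1, T + 1):
    i = rng.integers(n)
    eta = 1.0 / (lam * t)
    margin_violated = y[i] * (w @ X[i]) < 1.0
    # Shrink step (from the L2 penalty) keeps w inside span{x_1, ..., x_n}.
    w *= (1.0 - eta * lam)
    alpha *= (1.0 - eta * lam)
    if margin_violated:
        w += eta * y[i] * X[i]
        alpha[i] += eta * y[i]

# Representer property: w coincides with sum_i alpha_i x_i, and each alpha_i
# has the sign of y_i (alpha_i = lambda_i * y_i with lambda_i >= 0).
print(np.allclose(w, alpha @ X))          # expected: True
print(np.all(alpha * y >= -1e-12))        # expected: True
print("training accuracy:", np.mean(np.sign(X @ w) == y))
```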
“…However, when using Boosting procedures on noisy real-world data, it turns out that regularization (e.g. [103,186,143,43]) is mandatory if overfitting is to be avoided (cf. Section 6).…”
Section: Learning From Data and The PAC Property
mentioning
confidence: 99%
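
As a concrete illustration of the regularization point in this statement, here is a hedged toy sketch (not drawn from any of the cited works [103,186,143,43]): least-squares gradient boosting with decision stumps on noisy synthetic data, where a shrinkage factor and validation-based early stopping play the role of regularization. The data generator, the stump learner, and all hyperparameters are assumptions.

```python
# Hypothetical illustration (my own toy example, not from the cited work):
# least-squares gradient boosting with decision stumps on noisy 1-D data,
# where a shrinkage factor nu and validation-based early stopping act as the
# kind of regularization the quoted passage says is needed to avoid overfitting.
import numpy as np

rng = np.random.default_rng(1)

def fit_stump(x, r):
    """Best single-split stump (threshold + two constants) for residuals r."""
    best = None
    for thr in np.unique(x):
        left, right = r[x <= thr], r[x > thr]
        if len(left) == 0 or len(right) == 0:
            continue
        cl, cr = left.mean(), right.mean()
        err = ((left - cl) ** 2).sum() + ((right - cr) ** 2).sum()
        if best is None or err < best[0]:
            best = (err, thr, cl, cr)
    _, thr, cl, cr = best
    return lambda z: np.where(z <= thr, cl, cr)

# Noisy sine data, split into train / validation.
x = rng.uniform(0, 6, 200)
y = np.sin(x) + rng.normal(0, 0.4, size=x.shape)
xtr, ytr, xva, yva = x[:150], y[:150], x[150:], y[150:]

nu = 0.1                      # shrinkage (learning rate): the regularizer
pred_tr, pred_va = np.zeros_like(ytr), np.zeros_like(yva)
best_va, best_round = np.inf, 0

for m in range(300):
    stump = fit_stump(xtr, ytr - pred_tr)       # fit stump to current residuals
    pred_tr += nu * stump(xtr)
    pred_va += nu * stump(xva)
    va_err = np.mean((yva - pred_va) ** 2)
    if va_err < best_va:
        best_va, best_round = va_err, m + 1     # early-stopping bookkeeping

print(f"best validation MSE {best_va:.3f} reached after {best_round} rounds")
```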
“…, where we used the Representer Theorem [103,172] that shows that the maximum margin solution w can be written as a sum of the mapped examples, i.e. w = Σ_{n=1}^N β_n Φ(x_n).…”
Section: Support Vector Machines (P = 2)
mentioning
confidence: 99%
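
To make the expansion w = Σ_{n=1}^N β_n Φ(x_n) tangible, the following sketch (my construction, assuming a homogeneous degree-2 polynomial kernel, which has an explicit feature map) checks numerically that predictions computed from w in feature space agree with the purely kernel-based form f(x) = Σ_n β_n k(x_n, x). The coefficients β come from kernel ridge regression, chosen only for convenience; data and names are assumptions.

```python
# Hedged sketch (my construction, not the quoted paper's code): for the
# homogeneous degree-2 polynomial kernel k(a, b) = (a.b)^2 on R^2 there is an
# explicit feature map Phi, so we can verify numerically that a weight vector
# of the representer form w = sum_n beta_n Phi(x_n) yields the same prediction
# as the purely kernel-based expression f(x) = sum_n beta_n k(x_n, x).
import numpy as np

rng = np.random.default_rng(2)

def phi(a):
    """Explicit feature map of the degree-2 polynomial kernel on R^2."""
    return np.array([a[0] ** 2, a[1] ** 2, np.sqrt(2) * a[0] * a[1]])

def k(a, b):
    return (a @ b) ** 2

X = rng.normal(size=(20, 2))
y = np.sign(X[:, 0] * X[:, 1])            # a target that is quadratic in x

# Kernel ridge regression: beta = (K + lam*I)^{-1} y gives one concrete
# coefficient vector of the representer form.
K = np.array([[k(a, b) for b in X] for a in X])
lam = 1e-3
beta = np.linalg.solve(K + lam * np.eye(len(X)), y)

w = sum(b * phi(x) for b, x in zip(beta, X))   # w = sum_n beta_n Phi(x_n)

x_new = rng.normal(size=2)
f_explicit = w @ phi(x_new)                               # needs the feature map
f_kernel = sum(b * k(x, x_new) for b, x in zip(beta, X))  # kernel evaluations only
print(np.isclose(f_explicit, f_kernel))                   # expected: True
```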
“…Kernel logistic regression produces a nonlinear decision boundary, f(x), by forming a linear decision boundary in the space of the non-linearly transformed input vectors. By the representer theorem [19], the optimal f(x) has the form:…”
Section: Robust Kernel Logistic Regression
mentioning
confidence: 99%
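
The quote is truncated before the formula, but the standard representer form for kernel logistic regression is f(x) = Σ_i α_i K(x, x_i) (possibly plus an offset). The sketch below is a hypothetical illustration, not the cited paper's implementation: it fits such an expansion with an RBF kernel by gradient descent on the regularized negative log-likelihood, using assumed toy data and hyperparameters.

```python
# Hypothetical sketch (illustrative, not the paper's implementation): kernel
# logistic regression with an RBF kernel, where the decision function is
# parameterized directly in the representer form f(x) = sum_i alpha_i K(x, x_i)
# and alpha is fit by gradient descent on the regularized negative log-likelihood.
import numpy as np

rng = np.random.default_rng(3)

def rbf(A, B, gamma=1.0):
    """RBF (Gaussian) kernel matrix between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Toy data: two noisy concentric circles, labels +/-1 (not linearly separable).
n = 100
theta = rng.uniform(0, 2 * np.pi, n)
r = np.where(np.arange(n) < n // 2, 1.0, 2.5) + rng.normal(0, 0.15, n)
X = np.c_[r * np.cos(theta), r * np.sin(theta)]
y = np.where(np.arange(n) < n // 2, 1.0, -1.0)

K = rbf(X, X)
lam, lr = 1e-2, 0.25
alpha = np.zeros(n)

for _ in range(1000):
    f = K @ alpha                          # f(x_i) = sum_j alpha_j K(x_i, x_j)
    p = 1.0 / (1.0 + np.exp(y * f))        # sigmoid(-y_i * f_i)
    # Gradient of (1/n) * sum_i log(1 + exp(-y_i f_i)) + (lam/2) * alpha' K alpha
    grad = K @ (-y * p / n + lam * alpha)
    alpha -= lr * grad

def predict(X_new):
    return np.sign(rbf(X_new, X) @ alpha)  # nonlinear boundary via kernels only

print("training accuracy:", np.mean(predict(X) == y))
```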