2018
DOI: 10.1007/s10994-018-5762-9

Learning rates for kernel-based expectile regression

Abstract: Conditional expectiles are becoming an increasingly important tool in finance as well as in other areas of applications. We analyse a support vector machine type approach for estimating conditional expectiles and establish learning rates that are minimax optimal modulo a logarithmic factor if Gaussian RBF kernels are used and the desired expectile is smooth in a Besov sense. As a special case, our learning rates improve the best known rates for kernel-based least squares regression in this scenario. Key ingred…
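As a rough illustration of the SVM-type approach the abstract refers to, the following is a minimal sketch of fitting a conditional expectile in an RKHS with a Gaussian RBF kernel, using the asymmetric least squares (expectile) loss and an iteratively reweighted kernel ridge solver. The function names, the solver, and the hyperparameter values (tau, gamma, lam) are illustrative assumptions, not the estimator or parameter choices analysed in the paper.

```python
import numpy as np

def rbf_kernel(X, Z, gamma):
    """Gaussian RBF kernel matrix: K[i, j] = exp(-gamma * ||x_i - z_j||^2)."""
    d2 = (np.sum(X**2, axis=1)[:, None]
          + np.sum(Z**2, axis=1)[None, :]
          - 2.0 * X @ Z.T)
    return np.exp(-gamma * np.maximum(d2, 0.0))

def kernel_expectile_fit(X, y, tau=0.8, gamma=1.0, lam=1e-3, n_iter=100, tol=1e-8):
    """Fit f(x) = sum_j alpha_j k(x, x_j) minimizing the asymmetric least squares
    (expectile) loss plus lam * ||f||_H^2, via iteratively reweighted kernel ridge."""
    n = len(y)
    K = rbf_kernel(X, X, gamma)
    alpha = np.zeros(n)
    for _ in range(n_iter):
        resid = y - K @ alpha
        # expectile loss weights: tau on non-negative residuals, 1 - tau on negative ones
        w = np.where(resid >= 0, tau, 1.0 - tau)
        # weighted kernel ridge update: solve (diag(w) K + lam I) alpha = w * y
        alpha_new = np.linalg.solve(w[:, None] * K + lam * np.eye(n), w * y)
        if np.max(np.abs(alpha_new - alpha)) < tol:
            alpha = alpha_new
            break
        alpha = alpha_new
    return alpha, K

# toy usage: estimate the 0.8-expectile of y given x
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(200, 1))
y = np.sin(3.0 * X[:, 0]) + 0.3 * rng.standard_normal(200)
alpha, K = kernel_expectile_fit(X, y, tau=0.8)
y_hat = K @ alpha  # fitted 0.8-expectile at the training points
```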

Cited by 19 publications (13 citation statements) · References 45 publications
“…Let S ≥ 7|O|/3. With probability larger than 1 − exp(−S/504), the estimators t^{δ,α}_{λ,S} defined in (16) with…”
Section: Application To Elastic Net With Huber Loss Function (mentioning, confidence: 99%)
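The snippet above quotes only a probabilistic guarantee for estimators built from the Huber loss with an elastic net penalty. As a rough sketch of that type of estimator (not the cited construction, whose block parameter S and estimators t^{δ,α}_{λ,S} are defined in the cited paper), a plain proximal-gradient (ISTA) solver for Huber loss plus elastic net might look as follows; all names and parameter values here are illustrative assumptions.

```python
import numpy as np

def huber_grad(r, delta):
    """Derivative of the Huber loss with respect to the residual r (elementwise)."""
    return np.clip(r, -delta, delta)

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def huber_elastic_net(X, y, delta=1.345, lam1=0.1, lam2=0.1, n_iter=2000):
    """Minimize (1/n) sum_i huber_delta(y_i - x_i^T beta)
                + lam1 * ||beta||_1 + (lam2 / 2) * ||beta||_2^2
    by proximal gradient descent (ISTA) on the smooth part (Huber + ridge)."""
    n, d = X.shape
    # step size from a Lipschitz bound on the smooth part's gradient
    step = 1.0 / (np.linalg.norm(X, 2) ** 2 / n + lam2)
    beta = np.zeros(d)
    for _ in range(n_iter):
        r = y - X @ beta
        grad = -X.T @ huber_grad(r, delta) / n + lam2 * beta
        beta = soft_threshold(beta - step * grad, step * lam1)
    return beta

# toy usage with a few gross outliers in y
rng = np.random.default_rng(0)
X = rng.standard_normal((300, 10))
beta_true = np.zeros(10); beta_true[:3] = [2.0, -1.0, 0.5]
y = X @ beta_true + 0.1 * rng.standard_normal(300)
y[:10] += 20.0  # outliers
beta_hat = huber_elastic_net(X, y)
```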
“…It is clear that φ(·) = ‖·‖²_{H_K} verifies Assumption 3 with η = 2. We establish oracle inequalities for f^φ_λ and f^φ_{λ,S}, respectively defined in Equations (21) and (22), when the loss satisfies Assumption 2. In [36,33,47,43] for the quadratic loss function and [15,16] for the pinball loss (which is Lipschitz), the authors establish error bounds when the target Y is assumed to satisfy Y ∈ [−M, M] almost surely, which is a very strong assumption. Our analysis applies when the target Y is unbounded and may even be heavy-tailed, which is, as far as we know, a new result.…”
Section: Application To RKHS (mentioning, confidence: 99%)
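The statement above concerns RKHS-regularized estimators with penalty φ(·) = ‖·‖²_{H_K} under the quadratic and pinball losses. As a small, self-contained sketch of such an estimator with the pinball (quantile) loss, the following subgradient-descent solver may help fix ideas; the function names, the solver, and the hyperparameters are illustrative assumptions, not the constructions f^φ_λ or f^φ_{λ,S} from the cited work.

```python
import numpy as np

def rbf_kernel(X, Z, gamma):
    """Gaussian RBF kernel matrix: K[i, j] = exp(-gamma * ||x_i - z_j||^2)."""
    d2 = (np.sum(X**2, axis=1)[:, None]
          + np.sum(Z**2, axis=1)[None, :]
          - 2.0 * X @ Z.T)
    return np.exp(-gamma * np.maximum(d2, 0.0))

def kernel_pinball_fit(X, y, tau=0.5, gamma=1.0, lam=1e-3, lr=0.5, n_iter=3000):
    """Minimize (1/n) sum_i pinball_tau(y_i - f(x_i)) + lam * ||f||_H^2 over
    f = sum_j alpha_j k(., x_j), by subgradient descent on alpha."""
    n = len(y)
    K = rbf_kernel(X, X, gamma)
    alpha = np.zeros(n)
    for t in range(1, n_iter + 1):
        r = y - K @ alpha
        s = np.where(r > 0, tau, tau - 1.0)       # subgradient of the pinball loss in r
        g = -K @ s / n + 2.0 * lam * (K @ alpha)  # subgradient in alpha (||f||_H^2 = alpha' K alpha)
        alpha -= (lr / np.sqrt(t)) * g            # diminishing step size
    return alpha, K

# toy usage: estimate the conditional median (tau = 0.5)
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(150, 1))
y = np.cos(2.0 * X[:, 0]) + 0.2 * rng.standard_normal(150)
alpha, K = kernel_pinball_fit(X, y, tau=0.5)
y_hat = K @ alpha
```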
“…Kernel methods [SS03, SVGDB + 02, LHG + 20] have demonstrated success in statistical learning, such as classification [ZH02, SML + 19], regression [SHFS19, FS19], and clustering [DGK04, TY19, LZL20]. The key ingredient of kernel methods is a kernel function that is positive definite (PD) and can be associated with the inner product of two vectors in a reproducing kernel Hilbert space (RKHS).…”
Section: Introduction (mentioning, confidence: 99%)
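As a small numerical illustration of the positive-definiteness property mentioned here (assuming a Gaussian RBF kernel; the helper name and tolerance are illustrative), the Gram matrix built from any finite set of points should be positive semi-definite, i.e. have only non-negative eigenvalues up to round-off:

```python
import numpy as np

def rbf_gram(X, gamma=1.0):
    """Gram matrix K with K[i, j] = exp(-gamma * ||x_i - x_j||^2)."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * np.maximum(d2, 0.0))

rng = np.random.default_rng(1)
X = rng.standard_normal((50, 3))
K = rbf_gram(X)

# A positive definite kernel yields a positive semi-definite Gram matrix,
# so the smallest eigenvalue should be >= 0 up to numerical round-off.
eigvals = np.linalg.eigvalsh(K)
print(eigvals.min())  # expected: a non-negative number (or ~ -1e-15 from round-off)
assert eigvals.min() > -1e-8
```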
“…These include existence, uniqueness and universal consistency as well as specific learning rates, with the last typically requiring more conditions on P than the former properties, for which (almost) no such conditions are needed. In addition to the books mentioned at the beginning of this article, which all include extensive introductions to SVMs as well as many results on the aforementioned properties, some more specific results on learning rates can, for example, be found in Caponnetto and De Vito (2007), Smale and Zhou (2007), Xiang and Zhou (2009), Steinwart et al. (2009), Steinwart (2011, 2013), Farooq and Steinwart (2019). We refer to Christmann and Hable (2012), Christmann and Zhou (2016a) for results on SVMs for additive models and to Christmann and Zhou (2016b), Gensler and Christmann (2020) for results on kernel-based pairwise learning.…”
Section: Introduction (mentioning, confidence: 99%)