2007
DOI: 10.1214/07-ejs077

Optimal rates and adaptation in the single-index model using aggregation

Abstract: We want to recover the regression function in the single-index model. Using an aggregation algorithm with local polynomial estimators, we answer, in particular, the second part of Question 2 from Stone (1982) on the optimal convergence rate. The procedure constructed here has strong adaptation properties: it adapts both to the smoothness of the link function and to the unknown index. Moreover, the procedure locally adapts to the distribution of the design. We propose new upper bounds for the local polynomial …
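The setting described in the abstract can be sketched with a toy simulation (all names, dimensions, and values below are illustrative assumptions, not the paper's actual procedure): in the single-index model one observes Y = f(θᵀX) + noise, where both the index θ and the link function f are unknown, and once a candidate index is fixed the problem reduces to one-dimensional nonparametric regression, which a local polynomial estimator handles.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy single-index data: Y = f(<theta, X>) + noise, with theta and f
# unknown to the statistician (values here are purely illustrative).
d, n = 3, 500
theta = np.array([0.6, 0.0, 0.8])          # unknown unit-norm index
f = np.sin                                  # unknown smooth link function

X = rng.uniform(-1.0, 1.0, size=(n, d))     # random design
y = f(X @ theta) + 0.1 * rng.standard_normal(n)

def local_linear(t, y, t0, h):
    """Local linear (degree-1 local polynomial) estimate of f at t0."""
    sw = np.exp(-0.25 * ((t - t0) / h) ** 2)   # sqrt of a Gaussian kernel
    A = np.column_stack([np.ones_like(t), t - t0])
    # Weighted least squares: the fitted intercept estimates f(t0).
    coef, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
    return coef[0]

# For a fixed candidate index, estimation reduces to 1-d regression
# of y on the projected design points t = X @ theta.
t = X @ theta
print(local_linear(t, y, 0.5, h=0.2))       # close to f(0.5) = sin(0.5)
```

The local linear fit is used rather than a plain kernel average because, as in the paper's setting, degree-1 local polynomials remove the boundary and design-density bias of the Nadaraya-Watson estimator.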


Cited by 42 publications (33 citation statements) | References 24 publications
“…Indeed, convex aggregation methods typically depend on some tuning parameter (such as the temperature in the case of a Gibbs prior and a Bayesian aggregation procedure [4,10]). Of course, one is led to choose the tuning parameter that minimizes the empirical risk, and this choice turns out to be a rather efficient one, as shown by some empirical studies in [7]. Nevertheless, there is no known theoretical result on the choice of the tuning parameter.…”
Section: Question 12 Is Empirical Minimization Performed On Conv(f)
confidence: 99%
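The tuning-parameter issue this excerpt raises can be made concrete with a minimal Gibbs-weight sketch (the dictionary, data, and temperature grid below are illustrative assumptions, not the procedure of [4,10]): aggregate a finite dictionary of predictors with exponential weights and pick the temperature that minimizes the empirical risk of the aggregate, the data-driven rule the excerpt calls efficient but theoretically unsupported.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy regression data and a small dictionary of candidate predictors.
n = 200
x = rng.uniform(-1, 1, n)
y = np.sin(2 * x) + 0.2 * rng.standard_normal(n)

dictionary = [np.sin, np.cos, lambda t: t, lambda t: t ** 2,
              lambda t: np.sin(2 * t)]
preds = np.array([g(x) for g in dictionary])        # shape (M, n)
risks = np.mean((preds - y) ** 2, axis=1)           # empirical risks

def aggregate(temperature):
    """Gibbs weights w_j proportional to exp(-n * risk_j / T)."""
    w = np.exp(-n * (risks - risks.min()) / temperature)
    w /= w.sum()
    return w @ preds                                 # aggregated predictor

# Data-driven temperature choice: minimize the aggregate's empirical risk
# over a grid (efficient in practice, but without theoretical backing).
grid = [0.05, 0.1, 0.5, 1.0, 5.0, 25.0]
emp_risk = [np.mean((aggregate(T) - y) ** 2) for T in grid]
best_T = grid[int(np.argmin(emp_risk))]
print(best_T)
```

Small temperatures concentrate the weights on the empirical risk minimizer of the dictionary, while large temperatures spread mass across near-minimizers; the grid search trades off between the two regimes.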
“…He derives asymptotic results for projection estimators, in the specific case of projection pursuit constraints. More recently, Gaïffas and Lecué (2007) compute minimax rates and build an aggregation procedure with local polynomial estimates, which attains these rates. Lepski's methods were probably first used by Goldenshluger and Lepski (2009) in the multidimensional white noise model: the results can be applied to several structural models (including the single-index model as well as additive models, or multi-index models), under restricted smoothness assumptions on the function to recover.…”
Section: Concluding Remarks and Outlook
confidence: 99%
“…Precise risk bounds for the EWA in the model of regression with fixed design have been established in (Chernousova et al, 2013; Dai et al, 2014; Dalalyan and Salmon, 2012; Dalalyan and Tsybakov, 2007, 2012b; Golubev and Ostrovski, 2014; Leung and Barron, 2006). In the model of regression with random design, the counterpart of the EWA, often referred to as mirror averaging, has been thoroughly studied in (Audibert, 2009; Chesneau and Lecué, 2009; Dalalyan and Tsybakov, 2012a; Gaïffas and Lecué, 2007; Juditsky et al, 2008; Lecué and Mendelson, 2013; Yuditskiȋ et al, 2005). Note that when the temperature τ equals σ²/n, the EWA coincides with the Bayesian posterior mean in the regression model with Gaussian noise, provided that the prior is defined by π₀(β) ∝ exp(−λ Pen(β)/τ).…”
Section: Introduction
confidence: 99%
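The role of the temperature τ and the penalty-based prior quoted in this excerpt can be made explicit. Writing r̂ₙ(β) for the empirical risk, the EWA reweights the prior π₀ by the data, and with π₀(β) ∝ exp(−λ Pen(β)/τ) the two exponentials combine (a standard computation, sketched here for illustration):

```latex
\hat{\pi}(\beta) \;\propto\; \exp\!\big(-\hat{r}_n(\beta)/\tau\big)\,\pi_0(\beta)
\;\propto\; \exp\!\Big(-\tfrac{1}{\tau}\big[\hat{r}_n(\beta) + \lambda\,\mathrm{Pen}(\beta)\big]\Big),
```

so the aggregation weights form a Gibbs distribution over the penalized empirical risk: as τ → 0 the weights concentrate on the penalized least-squares minimizer, while larger τ spreads mass over near-minimizers and produces the averaging effect that the risk bounds cited above exploit.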