2009
DOI: 10.1214/09-aos692

High-dimensional additive modeling

Abstract: We propose a new sparsity-smoothness penalty for high-dimensional generalized additive models. The combination of sparsity and smoothness is crucial for mathematical theory as well as performance on finite-sample data. We present a computationally efficient algorithm, with provable numerical convergence properties, for optimizing the penalized likelihood. Furthermore, we provide oracle results which yield asymptotic optimality of our estimator for high-dimensional but sparse additive models. Finally, an adapt…
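The abstract describes a penalty that combines a sparsity-inducing term with a smoothness term for each additive component. As a minimal numpy sketch, assuming each component f_j is represented by basis coefficients beta_j with an empirical Gram matrix N_j and a roughness matrix Omega_j (these matrix names and the discrete representation are illustrative assumptions, not the paper's code), a penalty of the sparsity-smoothness form can be computed as:

```python
import numpy as np

def sparsity_smoothness_penalty(betas, gram_mats, omega_mats, lam1, lam2):
    """Illustrative penalty of the form
        lam1 * sum_j sqrt(beta_j' N_j beta_j + lam2 * beta_j' Omega_j beta_j),
    where N_j is the empirical Gram matrix of the j-th basis (giving the
    squared empirical norm of f_j) and Omega_j penalizes roughness, e.g. the
    integrated squared second derivative. The square root over each group
    acts like a group lasso, shrinking whole components exactly to zero."""
    total = 0.0
    for b, N, Om in zip(betas, gram_mats, omega_mats):
        total += np.sqrt(b @ N @ b + lam2 * (b @ Om @ b))
    return lam1 * total
```

Because each summand is a norm of the whole coefficient group, components whose contribution does not pay for the penalty are set exactly to zero, which is what makes the estimator sparse.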


Cited by 353 publications (422 citation statements)
References 30 publications
“…However, the original Lasso method has mainly been designed for regression models with a linear prediction function. Combining penalized estimation with sparse nonlinear additive modeling has only recently been accomplished (Meier et al 2009). To date, there is no extension of the method developed by Meier et al (2009) to geoadditive proportional odds models.…”
Section: Discussion
confidence: 99%
“…Combining penalized estimation with sparse nonlinear additive modeling has only recently been accomplished (Meier et al 2009). To date, there is no extension of the method developed by Meier et al (2009) to geoadditive proportional odds models. On the other hand, gradient boosting and the Lasso are closely related, as both algorithms can be embedded into the LARS framework (Efron et al 2004).…”
Section: Discussion
confidence: 99%
“…When p is fixed and small, the ordinary least squares estimators can be obtained for (2.3). When p is moderately high, penalized regression methods with grouped penalties have been well studied for (2.3), such as Lin and Zhang (2006), Meier, van de Geer and Bühlmann (2009) and Huang, Horowitz and Wei (2010). When p is much higher than the sample size, Fan, Feng and Song (2011) proposed a nonparametric independence screening (NIS) to reduce the dimensionality efficiently.…”
Section: Model Setup
confidence: 99%
“…In the literature, penalized regression methods have been well studied for nonparametric additive models. See Lin and Zhang (2006), Meier, van de Geer and Bühlmann (2009) and Huang, Horowitz and Wei (2010). For sparse ultrahigh-dimensional additive models, Fan, Feng and Song (2011) designed a nonparametric independence screening (NIS) which fits marginal nonparametric regressions of the response against each predictor individually and ranks their importance according to the magnitude of the estimated nonparametric components.…”
Section: Introduction
confidence: 99%
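The excerpt above describes how nonparametric independence screening (NIS) works: fit a marginal nonparametric regression of the response on each predictor separately, then rank predictors by the size of the fitted component. A minimal sketch of that ranking step, using a low-degree polynomial basis as a simple stand-in for the spline bases used in practice (the function name and basis choice are assumptions for illustration):

```python
import numpy as np

def nis_rank(X, y, degree=3):
    """Rank predictors by the empirical norm of a marginal nonparametric
    fit of y on each column of X. A polynomial basis stands in for the
    B-spline basis typically used; the ranking logic is the same."""
    n, p = X.shape
    scores = np.empty(p)
    for j in range(p):
        B = np.vander(X[:, j], degree + 1)            # basis incl. intercept
        coef, *_ = np.linalg.lstsq(B, y, rcond=None)  # marginal fit
        fhat = B @ coef
        # empirical norm of the centered fitted component
        scores[j] = np.sqrt(np.mean((fhat - fhat.mean()) ** 2))
    order = np.argsort(-scores)                       # largest score first
    return order, scores
```

Predictors whose marginal fits have negligible norm are screened out, reducing the dimensionality before a penalized additive model is fitted on the survivors.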
“…Lin and Zhang (2006) proposed the component selection and smoothing operator (COSSO) method for model selection in smoothing spline ANOVA with a fixed number of covariates. Recently, Ravikumar et al (2009), Meier et al (2009) and Huang et al (2010) considered variable selection in high dimensional nonparametric additive models where the number of additive components is larger than the sample size.…”
Section: Introduction
confidence: 99%