2009
DOI: 10.1111/j.1467-9868.2009.00718.x

Sparse Additive Models

Abstract: We present a new class of methods for high dimensional non-parametric regression and classification called sparse additive models. Our methods combine ideas from sparse linear modelling and additive non-parametric regression. We derive an algorithm for fitting the models that is practical and effective even when the number of covariates is larger than the sample size. Sparse additive models are essentially a functional version of the grouped lasso of Yuan and Lin. They are also closely related to the COSSO model…
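The fitting algorithm mentioned in the abstract is a sparse backfitting procedure: each additive component is re-estimated by smoothing the partial residual and then soft-thresholding the component's empirical norm, which sets irrelevant covariates exactly to zero. The following is a minimal sketch under stated assumptions, not the authors' implementation: the Nadaraya-Watson smoother, the bandwidth, and the helper names (nw_smooth, spam_backfit) are illustrative choices, and the paper itself allows arbitrary non-parametric smoothers.

```python
import numpy as np

def nw_smooth(x, r, bandwidth=0.2):
    """Nadaraya-Watson kernel smoother: estimate E[r | x] at each sample point.
    (Illustrative assumption; any nonparametric smoother could be plugged in.)"""
    w = np.exp(-0.5 * ((x[:, None] - x[None, :]) / bandwidth) ** 2)
    return (w @ r) / w.sum(axis=1)

def spam_backfit(X, y, lam, n_iter=50, bandwidth=0.2):
    """Sketch of sparse backfitting for y ~ sum_j f_j(X[:, j]).

    Each pass smooths the partial residual for coordinate j, then
    soft-thresholds the component's empirical norm, so that weak
    components shrink exactly to zero.
    """
    n, p = X.shape
    f = np.zeros((n, p))              # fitted component values at the sample points
    y_centered = y - y.mean()
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual: remove every fitted component except the j-th.
            r_j = y_centered - f.sum(axis=1) + f[:, j]
            p_j = nw_smooth(X[:, j], r_j, bandwidth)
            s_j = np.sqrt(np.mean(p_j ** 2))       # empirical norm of candidate fit
            shrink = max(0.0, 1.0 - lam / s_j) if s_j > 0 else 0.0
            f[:, j] = shrink * p_j
            f[:, j] -= f[:, j].mean()              # keep each component centered
    return f

# Toy usage: only the first two of ten covariates are relevant.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 10))
y = np.sin(np.pi * X[:, 0]) + X[:, 1] ** 2 + 0.1 * rng.standard_normal(200)
f_hat = spam_backfit(X, y, lam=0.05)
print(np.sqrt((f_hat ** 2).mean(axis=0)))  # near-zero norms mark deselected covariates
```

The soft-thresholding step is what decouples smoothing from sparsity: the smoother controls the shape of each component, while the single parameter lam controls how many components survive.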

Cited by 489 publications (541 citation statements). References 21 publications.
“…Lin and Zhang (2006) proposed the component selection and smoothing operator (COSSO) method for model selection in smoothing spline ANOVA with a fixed number of covariates. Recently, Ravikumar et al. (2009), Meier et al. (2009) and Huang et al. (2010) considered variable selection in high dimensional nonparametric additive models where the number of additive components is larger than the sample size.…”
Section: Introduction (mentioning; confidence: 99%)
“…However, real data are not always Gaussian, and such an assumption can be limiting, especially when analyzing multiple data sources simultaneously, since non-Gaussianity in a single data source makes the combined data non-Gaussian. Much previous work has dropped the Gaussianity assumption in classic problems such as sparse regression (Ravikumar et al., 2007), estimation of GGMs (Liu et al., 2009), and sparse CCA (Balakrishnan et al., 2012), proposing nonparametric solutions. We will also drop the assumption that the data are drawn from the same Gaussian distribution in the next section.…”
Section: Joint Estimation of the GGM (mentioning; confidence: 99%)
“…Instead, we seek a model that accommodates uncountably many predictors from the outset. Sparsity in the additive components, as considered for example in Ravikumar et al. (2009) for additive modeling in the large p case, is not particularly meaningful in the context of a functional predictor. A more natural approach to overcome the inherent curse of dimensionality is to replace sparsity by continuity when predictors are smooth infinite-dimensional random trajectories, as considered here.…”
Section: Introduction (mentioning; confidence: 99%)