2021
DOI: 10.48550/arxiv.2110.09502
Preprint

Minimum $\ell_{1}$-norm interpolators: Precise asymptotics and multiple descent

Abstract: An evolving line of machine learning work observes empirical evidence suggesting that interpolating estimators, the ones that achieve zero training error, may not necessarily be harmful. This paper pursues a theoretical understanding of an important type of interpolator: the minimum $\ell_1$-norm interpolator, which is motivated by the observation that several learning algorithms favor low $\ell_1$-norm solutions in the over-parameterized regime. Concretely, we consider the noisy sparse regression model under Gaussian design, …
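As a hedged sketch (not the paper's code), the minimum $\ell_1$-norm interpolator from the abstract, $\hat\beta = \arg\min \|b\|_1$ subject to $Xb = y$ with $p > n$, can be computed by splitting $b$ into positive and negative parts and solving a linear program; all variable names below are illustrative:

```python
# Minimal sketch: minimum l1-norm interpolation (basis pursuit) via an LP,
# in the over-parameterized regime p > n, under Gaussian design.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, p = 20, 50                                # n samples, p features (p > n)
X = rng.standard_normal((n, p))              # Gaussian design matrix
beta_true = np.zeros(p)
beta_true[:3] = 1.0                          # sparse ground truth
y = X @ beta_true                            # noiseless case for simplicity

# Write b = b_plus - b_minus with b_plus, b_minus >= 0, so that
# ||b||_1 = sum(b_plus) + sum(b_minus) and the problem becomes an LP:
#   minimize 1^T [b_plus; b_minus]  s.t.  [X, -X] [b_plus; b_minus] = y.
c = np.ones(2 * p)
A_eq = np.hstack([X, -X])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))
beta_hat = res.x[:p] - res.x[p:]

assert np.allclose(X @ beta_hat, y, atol=1e-6)   # zero training error
```

Because `beta_true` is itself feasible, the LP optimum satisfies $\|\hat\beta\|_1 \le \|\beta_{\text{true}}\|_1$; the paper's analysis concerns the risk of this estimator in the noisy setting.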

Cited by 3 publications (13 citation statements)
References 71 publications
“…Figure 1 illustrates the above result for the minimum $\ell_2$-norm least squares estimator (Hastie et al., 2019) and the minimum $\ell_1$-norm least squares estimator (Li and Wei, 2021). The light-blue lines show the asymptotic risk profiles of the two procedures, which are non-monotonic as they diverge to infinity around the interpolation threshold of 1, at which the sample size and the number of features are equal.…”
Section: Introduction
confidence: 88%
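The divergence of the risk near the interpolation threshold $p/n = 1$ mentioned in the quote above is easy to see in simulation. The following hedged illustration (not from the paper) estimates the risk of the minimum $\ell_2$-norm least squares estimator under a null signal, where the excess risk comes purely from fitting noise:

```python
# Hedged illustration: non-monotone risk of the minimum l2-norm least
# squares estimator around the interpolation threshold p/n = 1.
import numpy as np

def min_norm_risk(n, p, n_trials=5, seed=0):
    """Monte Carlo estimate of E||beta_hat - beta||^2 under a null signal
    (beta = 0), unit Gaussian noise, and Gaussian design."""
    rng = np.random.default_rng(seed)
    risks = []
    for _ in range(n_trials):
        X = rng.standard_normal((n, p))
        y = rng.standard_normal(n)         # pure noise: beta = 0
        beta_hat = np.linalg.pinv(X) @ y   # min l2-norm least squares
        risks.append(np.sum(beta_hat ** 2))
    return float(np.mean(risks))

n = 200
risk_under = min_norm_risk(n, p=100)   # p/n = 0.50, under-parameterized
risk_near  = min_norm_risk(n, p=190)   # p/n = 0.95, near the threshold
risk_over  = min_norm_risk(n, p=400)   # p/n = 2.00, over-parameterized

# The risk spikes near p/n = 1 and falls again on either side.
assert risk_near > risk_under and risk_near > risk_over
```

For $p < n$ the estimated risk tracks $\sigma^2 p/(n-p-1)$, which blows up as $p \to n$; past the threshold the minimum-norm solution regularizes implicitly and the risk drops again, giving the non-monotone profile described above.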
“…The MN1LS estimator connects naturally to the basis pursuit estimator in the compressed sensing literature (e.g., Candes and Tao (2006); Donoho (2006)), and its risk in the proportional regime has recently been analyzed in Mitra (2019) and Li and Wei (2021). The MN1LS predictor is now defined as…”
Section: Illustrative Prediction Procedures
confidence: 99%
“…We analyze the cross-fitted AIPW estimator in the "proportional asymptotic regime", where the number of observations n and the number of features p both diverge, with the ratio p/n converging to some constant κ > 0. This regime has attracted considerable recent attention in high-dimensional statistics [12, 13, 15-18, 23, 25, 29, 30, 38-40, 42, 44, 46, 55, 59, 61, 65, 82, 84, 94, 102, 104, 107, 109, 115, 117, 118], statistical machine learning and the analysis of algorithms [31, 36, 60, 67, 68, 71, 72, 74, 79], econometrics [4, 5, 14, 26-28, 51], etc., and shares roots with probability theory and statistical physics [78, 120]. Asymptotic approximations derived under this regime demonstrate commendable performance even under moderate sample sizes (cf.…
confidence: 99%