2009
DOI: 10.1214/08-aos653

Near-ideal model selection by ℓ1 minimization

Abstract: We consider the fundamental problem of estimating the mean of a vector y = Xβ + z, where X is an n × p design matrix in which one can have far more variables than observations, and z is a stochastic error term, the so-called "p > n" setup. When β is sparse, or, more generally, when there is a sparse subset of covariates providing a close approximation to the unknown mean vector, we ask whether or not it is possible to accurately estimate Xβ using a computationally tractable algorithm. We show that, in a surprisi…
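As a rough illustration of the setup the abstract describes, the sketch below generates a sparse "p > n" regression problem and estimates Xβ with scikit-learn's Lasso as a stand-in for the paper's ℓ1 minimization; all dimensions, the sparsity level, the noise scale, and the penalty constant are illustrative assumptions rather than values from the paper.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p, s = 100, 400, 5                          # n observations, p >> n variables, s-sparse beta
X = rng.standard_normal((n, p)) / np.sqrt(n)   # random design with roughly unit-norm columns
beta = np.zeros(p)
beta[rng.choice(p, s, replace=False)] = 5.0 * rng.standard_normal(s)
sigma = 0.5
y = X @ beta + sigma * rng.standard_normal(n)  # y = X beta + z

# Penalty on the order of sigma * sqrt(2 log p), the scaling used in this
# line of work; the division by n matches sklearn's 1/(2n) loss scaling.
lam = sigma * np.sqrt(2.0 * np.log(p)) / n
fit = Lasso(alpha=lam, fit_intercept=False).fit(X, y)

mse = np.mean((X @ fit.coef_ - X @ beta) ** 2)
print(f"selected {np.count_nonzero(fit.coef_)} variables; MSE of X beta-hat: {mse:.4f}")
```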

Cited by 412 publications (634 citation statements). References 29 publications.
“…solve a convex relaxation of the form (1.6) for some τ > 0. This ℓ1-constrained optimization is relatively easy to solve using convex optimization techniques.…”
Section: Inverse Problems (mentioning; confidence: 99%)
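This constrained relaxation can be prototyped in a few lines. Since the citing paper's equation (1.6) is not reproduced here, the sketch below assumes the common form min ‖y − Ax‖₂ subject to ‖x‖₁ ≤ τ and solves it with cvxpy; the function name and all dimensions are made up for illustration.

```python
import numpy as np
import cvxpy as cp

def l1_constrained_ls(A: np.ndarray, y: np.ndarray, tau: float) -> np.ndarray:
    """Solve min ||y - A x||_2 subject to ||x||_1 <= tau (an assumed form of (1.6))."""
    x = cp.Variable(A.shape[1])
    cp.Problem(cp.Minimize(cp.norm2(y - A @ x)), [cp.norm1(x) <= tau]).solve()
    return x.value

# tiny usage example with made-up dimensions
rng = np.random.default_rng(0)
A, y = rng.standard_normal((30, 100)), rng.standard_normal(30)
x_hat = l1_constrained_ls(A, y, tau=2.0)
print(f"||x||_1 = {np.abs(x_hat).sum():.3f} (constraint tau = 2.0)")
```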
“…For example, if the columns of A are scaled to have unit norm, recent work [6] suggests that the optimization in (1.6) will succeed (with high probability) only if the magnitudes of the non-zero components exceed a fixed constant times √(log n). In this chapter we will see that this fundamental limitation can again be overcome by sequentially designing the rows of A so that they tend to focus on the relevant components as information is gathered.…”
Section: Inverse Problems (mentioning; confidence: 99%)
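A small simulation can make the √(log n) magnitude threshold concrete. The sketch below, under assumed problem sizes and an assumed lasso penalty (none of which come from [6]), checks how often the lasso recovers the exact support as the nonzero magnitudes c·σ·√(log n) grow with c.

```python
import numpy as np
from sklearn.linear_model import Lasso

def recovery_rate(c, n=200, p=400, s=5, sigma=1.0, trials=20):
    """Fraction of trials in which the lasso recovers the exact support
    when nonzero magnitudes equal c * sigma * sqrt(log n)."""
    hits = 0
    for t in range(trials):
        rng = np.random.default_rng(t)
        A = rng.standard_normal((n, p))
        A /= np.linalg.norm(A, axis=0)               # unit-norm columns, as in the quote
        x = np.zeros(p)
        support = rng.choice(p, s, replace=False)
        x[support] = c * sigma * np.sqrt(np.log(n))  # magnitudes at the tested scale
        y = A @ x + sigma * rng.standard_normal(n)
        # the usual sigma * sqrt(2 log p) penalty, divided by n to match
        # sklearn's 1/(2n) loss scaling
        lam = sigma * np.sqrt(2.0 * np.log(p)) / n
        coef = Lasso(alpha=lam, fit_intercept=False).fit(A, y).coef_
        hits += set(np.flatnonzero(coef)) == set(support)
    return hits / trials

for c in (1.0, 4.0, 16.0):
    print(f"c = {c:5.1f}: exact support recovery in {recovery_rate(c):.0%} of trials")
```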
“…Assume that we have solved (2) using (6) to get the estimate x̂0 with our current set of m measurements y. Now suppose that we get one new measurement given as…”
Section: A. Update of Least-Squares (mentioning; confidence: 99%)
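The quoted passage describes folding a single new measurement into an existing least-squares solution. Since the citing paper's equations (2) and (6) are not shown here, the sketch below uses the standard recursive least-squares rank-one update (Sherman–Morrison) as a plausible stand-in; all names and sizes are illustrative.

```python
import numpy as np

def ls_rank_one_update(P, x_hat, a_new, y_new):
    """Fold one new measurement y_new = <a_new, x> + noise into a
    least-squares estimate, given P = (A^T A)^{-1} and x_hat = P A^T y."""
    Pa = P @ a_new
    gain = Pa / (1.0 + a_new @ Pa)                  # Sherman-Morrison gain vector
    x_hat = x_hat + gain * (y_new - a_new @ x_hat)  # correct by the innovation
    P = P - np.outer(gain, Pa)                      # updated inverse Gram matrix
    return P, x_hat

# sanity check against a batch re-solve with the new row appended
rng = np.random.default_rng(1)
A = rng.standard_normal((20, 5))
y = rng.standard_normal(20)
P = np.linalg.inv(A.T @ A)
x_hat = P @ (A.T @ y)
a_new, y_new = rng.standard_normal(5), 0.3
P, x_hat = ls_rank_one_update(P, x_hat, a_new, y_new)
batch = np.linalg.lstsq(np.vstack([A, a_new]), np.append(y, y_new), rcond=None)[0]
assert np.allclose(x_hat, batch)
```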
“…Thus, no unbiased estimator can achieve an MSE below (9). A different strategy for appraising estimators is to compare practical techniques with the "oracle" estimator, which is the LS solution among vectors x whose support is Λ0 [2].…”
Section: Performance Guarantees (mentioning; confidence: 99%)
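The "oracle" estimator quoted above has a direct implementation: given the true support Λ0, solve least squares restricted to those columns. The sketch below follows that definition; the helper name and the toy problem sizes are made up.

```python
import numpy as np

def oracle_ls(A: np.ndarray, y: np.ndarray, support: np.ndarray) -> np.ndarray:
    """LS solution among vectors whose support is `support` (the oracle's Lambda_0)."""
    x_hat = np.zeros(A.shape[1])
    sol, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    x_hat[support] = sol
    return x_hat

# tiny usage example: compare the oracle estimate to the truth
rng = np.random.default_rng(0)
n, p, s = 50, 200, 4
A = rng.standard_normal((n, p))
support = rng.choice(p, s, replace=False)
x_true = np.zeros(p)
x_true[support] = 3.0
y = A @ x_true + 0.1 * rng.standard_normal(n)
x_oracle = oracle_ls(A, y, support)
print(f"oracle MSE: {np.mean((x_oracle - x_true) ** 2):.2e}")
```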