2020
DOI: 10.1109/tsp.2020.2964216

Quickly Finding the Best Linear Model in High Dimensions via Projected Gradient Descent

Abstract: We study the problem of finding the best linear model that minimizes the least-squares loss on a given dataset. While this problem is trivial in the low-dimensional regime, it becomes more interesting in high dimensions, where the population minimizer is assumed to lie on a manifold such as the set of sparse vectors. We propose a projected gradient descent (PGD) algorithm to estimate the population minimizer in the finite-sample regime. We establish a linear convergence rate and data-dependent estimation error bounds for PGD. Our…
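As a rough illustration of the setting described in the abstract (not the authors' exact algorithm, step-size rule, or guarantees), the sparse case reduces to a gradient step on the empirical least-squares loss followed by projection onto s-sparse vectors, i.e., hard thresholding. All function names, sizes, and parameter choices below are illustrative assumptions.

```python
import numpy as np

def hard_threshold(z, s):
    # keep the s largest-magnitude entries of z, zero out the rest
    out = np.zeros_like(z)
    keep = np.argsort(np.abs(z))[-s:]
    out[keep] = z[keep]
    return out

def pgd_sparse_least_squares(X, y, s, n_iters=300):
    # gradient step on the empirical least-squares loss, then project onto s-sparse vectors
    n, d = X.shape
    step = n / np.linalg.norm(X, 2) ** 2   # 1 / smoothness constant of the loss
    beta = np.zeros(d)
    for _ in range(n_iters):
        grad = X.T @ (X @ beta - y) / n
        beta = hard_threshold(beta - step * grad, s)
    return beta

# toy high-dimensional example with a sparse population minimizer
rng = np.random.default_rng(0)
n, d, s = 200, 1000, 10
X = rng.standard_normal((n, d))
beta_star = np.zeros(d); beta_star[:s] = rng.standard_normal(s)
y = X @ beta_star + 0.1 * rng.standard_normal(n)
print(np.linalg.norm(pgd_sparse_least_squares(X, y, s) - beta_star))
```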

Cited by 6 publications (9 citation statements)
References 36 publications
“…To the best of our knowledge, Proposition 1.4 is a new result, but it bears resemblance with a recent finding of Sattar and Oymak [46, Thm. 3.4], who consider a similar model setup with sub-exponential input vectors.…”
Section: A Simple Error Bound For Sub-exponential Input Vectors (supporting)
confidence: 84%
“…A more astonishing feature of the generalized Lasso (LS_K) is its ability to deal with non-linear relations between x and y. In fact, inspired by a classical result of Brillinger [5], a recent work of Plan and Vershynin [42] shows that for Gaussian input vectors, (LS_K) yields a consistent estimator for single-index models, i.e., y = f(⟨x, β_0⟩) with an unknown, non-linear distortion function f : R → R. This finding has triggered a lot of related and follow-up research, e.g., see [14, 16, 19, 38, 46, 52–54]. We note that these works form only a small fraction of a whole research area on non-linear observation models, lying at the interface of statistics, learning theory, signal processing, and compressed sensing.…”
Section: Introduction (mentioning)
confidence: 93%
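The single-index model mentioned in the statement above can be illustrated numerically. This is not the cited works' estimator verbatim: it replaces the constrained generalized Lasso (LS_K) with plain ℓ_1-regularized least squares solved by ISTA, and the data sizes, regularization level, and the choice f = tanh are all illustrative assumptions. The only point of the sketch is that a linear least-squares-type fit can recover the direction of β_0 from non-linear responses.

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_ista(X, y, lam, n_iters=500):
    # plain ISTA for 0.5/n * ||y - X b||^2 + lam * ||b||_1
    n, d = X.shape
    L = np.linalg.norm(X, 2) ** 2 / n        # smoothness constant of the quadratic part
    b = np.zeros(d)
    for _ in range(n_iters):
        grad = X.T @ (X @ b - y) / n
        b = soft_threshold(b - grad / L, lam / L)
    return b

# single-index data: y = f(<x, beta0>) with an unknown non-linear f (here tanh)
rng = np.random.default_rng(1)
n, d, s = 500, 2000, 5
X = rng.standard_normal((n, d))
beta0 = np.zeros(d); beta0[:s] = 1.0 / np.sqrt(s)
y = np.tanh(3.0 * X @ beta0)

beta_hat = lasso_ista(X, y, lam=0.05)
cos = beta_hat @ beta0 / (np.linalg.norm(beta_hat) * np.linalg.norm(beta0) + 1e-12)
print(cos)   # typically close to 1: the direction of beta0 is recovered up to scaling
```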
“…Here γ_k > 0 is the step size at iteration k. Bahmani and Raj (2013) analyzed the rate of convergence of x_k to s as a function of p, showing that, as p increases from 0 to 1, the sufficient conditions for exact signal recovery become more stringent while robustness to noise and the convergence rate worsen. Oymak and co-authors (Oymak et al., 2017; Sattar and Oymak, 2020) extended the analysis to more general (non-convex) constraint sets, including ℓ_p-balls. Their experiments compared the cases p = 0, 0.5, and 1, and suggest that while p = 0 outperforms p = 1, it is dominated by p = 0.5.…”
Section: Compressed Sensing (mentioning)
confidence: 99%
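In the update quoted above, the choice of p only changes the projection operator applied after each gradient step. The p = 0 projection (hard thresholding) appears in the sketch after the abstract; the sketch below shows the p = 1 counterpart, the Euclidean projection onto an ℓ_1-ball, via the standard sort-and-threshold procedure. The radius and test vector are arbitrary illustrative values, and the intermediate p = 0.5 case has no comparably simple closed-form projection, so it is omitted.

```python
import numpy as np

def project_l1_ball(v, radius):
    # Euclidean projection onto the l1-ball of the given radius (p = 1 constraint set)
    if np.abs(v).sum() <= radius:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]             # magnitudes in decreasing order
    css = np.cumsum(u)
    j = np.arange(1, v.size + 1)
    rho = np.nonzero(u * j > css - radius)[0][-1]
    theta = (css[rho] - radius) / (rho + 1)   # shrinkage level
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

v = np.array([3.0, -1.0, 0.2, 0.0, -2.5])
w = project_l1_ball(v, 1.0)
print(w, np.abs(w).sum())   # the result has l1 norm exactly 1
```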
“…Specifically, first-order methods are among the most popular and well-established iterative optimization techniques due to their low per-iteration complexity and efficiency in complex scenarios. One of the most prominent first-order optimization techniques suitable for our problem in (14) is the projected gradient descent algorithm (PGD) [54]–[59]. In particular, assuming z_0 is the initial point for the algorithm, the l-th iteration of PGD to approximate the solution to (14) can be defined as follows:…”
Section: B. Architecture of the Decoding Module (mentioning)
confidence: 99%
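The quoted statement stops before the update equation, and the objective in (14) is not reproduced on this page. The following is therefore only a generic sketch of the PGD iteration it refers to, z_{l+1} = P(z_l − γ_l ∇f(z_l)), applied to a made-up toy objective and projection; nothing here reflects the cited decoding problem itself.

```python
import numpy as np

def projected_gradient_descent(grad, project, z0, step=0.1, n_iters=500):
    # generic PGD: z_{l+1} = project(z_l - step * grad(z_l))
    z = np.asarray(z0, dtype=float)
    for _ in range(n_iters):
        z = project(z - step * grad(z))
    return z

# toy usage: minimize ||z - c||^2 over the unit Euclidean ball
c = np.array([2.0, -1.0, 0.5])
grad = lambda z: 2.0 * (z - c)
project = lambda z: z / max(1.0, np.linalg.norm(z))
z_hat = projected_gradient_descent(grad, project, np.zeros(3))
print(z_hat)   # close to c / ||c||, the projection of c onto the unit ball
```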