2022
DOI: 10.48550/arxiv.2208.02753
Preprint
Spectral Universality of Regularized Linear Regression with Nearly Deterministic Sensing Matrices

Abstract: It has been observed that the performances of many high-dimensional estimation problems are universal with respect to underlying sensing (or design) matrices. Specifically, matrices with markedly different constructions seem to achieve identical performance if they share the same spectral distribution and have "generic" singular vectors. We prove this universality phenomenon for the case of convex regularized least squares (RLS) estimators under a linear regression model with additive Gaussian noise. Our main …
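The abstract's claim can be illustrated numerically (this sketch is not code from the paper; the dimension, ridge penalty `lam`, and noise level `sigma` are assumed values). It compares the ridge (RLS) estimator's error under a Gaussian design against a "nearly deterministic" design with the exact same singular values, whose singular vectors are randomly sign-flipped columns of an orthonormal DCT-II basis:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500          # problem dimension (square design for simplicity)
lam = 0.1        # ridge penalty (illustrative choice)
sigma = 0.5      # noise standard deviation

# A "generic" Gaussian design, normalized so singular values are O(1).
A_gauss = rng.normal(size=(n, n)) / np.sqrt(n)
s = np.linalg.svd(A_gauss, compute_uv=False)

# A "nearly deterministic" design sharing the SAME singular values:
# singular vectors are columns of an orthonormal DCT-II matrix, which is
# deterministic up to n random sign flips.
j, k = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k + 1) * j / (2 * n))
C[0, :] = 1.0 / np.sqrt(n)                  # orthonormal DCT-II first row
U = C * rng.choice([-1.0, 1.0], size=n)     # flip column signs
V = C * rng.choice([-1.0, 1.0], size=n)
A_struct = U @ np.diag(s) @ V.T

def ridge_mse(A, trials=5):
    """Average squared estimation error of the ridge (RLS) estimator."""
    errs = []
    for _ in range(trials):
        x0 = rng.normal(size=n)
        y = A @ x0 + sigma * rng.normal(size=n)
        x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)
        errs.append(np.mean((x_hat - x0) ** 2))
    return float(np.mean(errs))

mse_g = ridge_mse(A_gauss)
mse_s = ridge_mse(A_struct)
# Spectral universality predicts the two errors should be close.
print(mse_g, mse_s)
```

Ridge regression stands in here for the general convex RLS estimator of the paper; the point is only that matching the spectral distribution while keeping "generic" singular vectors already makes the two errors nearly indistinguishable at moderate dimension.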

Cited by 1 publication (2 citation statements). References 60 publications (195 reference statements).
“…This intuition comes from a very recent line of work concerning linear regression and phase retrieval with structured matrices of covariates. Indeed, the authors of [37,38,39] show that in this context, the class of rotationally invariant matrices leads to the same performance as a much broader class of almost deterministic matrices (with the same spectral density), also when AMP or its linearized version are used as inference algorithm. This is a different setting from the one we consider, since in our setup the structured matrix is the noise, but it nevertheless suggests that our predictions should remain true more generically.…”
Section: Comments On the Potential Universality Of Our Results
confidence: 99%
“…The most tricky parameter is M, which we introduced to decouple the four-body interactions in the Hamiltonian. Notice first that (recall definitions (39) and (41))…”
Section: Replica Saddle Point Equations
confidence: 99%