A general approach for estimating an unknown signal x0 ∈ R^n from noisy linear measurements y = Ax0 + z ∈ R^m is to solve a so-called regularized M-estimator: x̂ := arg min_x L(y − Ax) + λf(x). Here, L is a convex loss function, f is a convex (typically nonsmooth) regularizer, and λ > 0 is a regularization parameter. We analyze the squared-error performance ‖x̂ − x0‖_2^2 of such estimators in the high-dimensional proportional regime where m, n → ∞ and m/n → δ. We let the design matrix A have i.i.d. Gaussian entries, and we impose minimal and rather mild regularity conditions on the loss function, on the regularizer, and on the distributions of the noise and of the unknown signal. Under this generic setting, we show that the squared error converges in probability to a nontrivial limit that is computed by solving four nonlinear equations in four scalar unknowns. We identify a new summary parameter, termed the expected Moreau envelope, which determines how the choice of the loss function and of the regularizer affects the error performance. The result opens the way to answering optimality questions regarding the choice of the loss function, the regularizer, the penalty parameter, etc.

I. INTRODUCTION

Noisy Linear Measurements. We consider the standard problem of recovering an unknown signal x0 ∈ R^n from a vector y ∈ R^m of m noisy linear observations given by y = Ax0 + z ∈ R^m. Here, A ∈ R^{m×n} is the (known) measurement matrix and z ∈ R^m is the noise vector; the latter is generated from some distribution density in R^m, say p_z. High-dimensional regime. We focus on the high-dimensional regime where both the dimension n of the ambient space and the number of measurements m are large [1], [2]. This is different than the more
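As a concrete instance of the regularized M-estimator above, the sketch below solves the special case L(u) = (1/2)‖u‖_2^2 and f = ‖·‖_1 (the LASSO) via proximal gradient descent (ISTA). This is an illustrative implementation only, not the paper's analysis; the problem sizes, the value of λ, and the choice of ISTA as solver are assumptions made for the example.

```python
import numpy as np

def ista(A, y, lam, n_iter=500):
    """Proximal-gradient (ISTA) solver for
    min_x 0.5*||y - A x||_2^2 + lam*||x||_1,
    an instance of the regularized M-estimator with squared loss and l1 regularizer."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2        # 1/L with L the largest eigenvalue of A^T A
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)                  # gradient of the smooth loss term
        v = x - step * grad
        x = np.sign(v) * np.maximum(np.abs(v) - step * lam, 0.0)  # soft-thresholding prox
    return x

rng = np.random.default_rng(0)
m, n = 80, 200                                    # illustrative proportional-regime sizes
x0 = np.zeros(n)
x0[:10] = 1.0                                     # sparse ground-truth signal
A = rng.standard_normal((m, n))                   # i.i.d. Gaussian design, as in the paper
y = A @ x0 + 0.1 * rng.standard_normal(m)
xhat = ista(A, y, lam=2.0)
err = np.sum((xhat - x0) ** 2)                    # the squared error analyzed in the paper
```

In the paper's asymptotic regime this squared error concentrates around a deterministic limit; here it is simply computed empirically for one random instance.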
We consider the problem of estimating an unknown but structured signal x0 from its noisy linear observations y = Ax0 + z ∈ R^m. To the structure of x0 is associated a structure-inducing convex function f(·). We assume that the entries of A are i.i.d. standard normal N(0, 1) and z ∼ N(0, σ² I_m). As a measure of performance of an estimate x* of x0 we consider the "Normalized Square Error" (NSE) ‖x* − x0‖_2^2/σ². For sufficiently small σ, we characterize the exact performance of two different versions of the well-known LASSO algorithm. The first estimator is obtained by solving arg min_x ‖y − Ax‖_2 + λf(x). As a function of λ, we identify three distinct regions of operation; of these, we argue that R_ON is the most interesting one. When λ ∈ R_ON, we show that the NSE is governed by the expected squared distance of an i.i.d. standard normal vector to the dilated subdifferential λ·∂f(x0). Second, we consider the more popular estimator arg min_x (1/2)‖y − Ax‖_2^2 + στf(x). We propose a formula for the NSE of this estimator by establishing a suitable mapping between it and the previous estimator over the region R_ON. As a useful side result, we find explicit formulae for the optimal estimation performance and the optimal penalty parameters λ* and τ*.
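For f = ‖·‖_1, the squared distance of a standard normal vector g to the dilated subdifferential λ·∂‖·‖_1(x0) decomposes coordinate-wise: (g_i − λ·sign(x0_i))² on the support of x0, and max(|g_i| − λ, 0)² off the support. A minimal Monte Carlo sketch of this quantity (the signal, λ, and sample count are illustrative assumptions):

```python
import numpy as np

def expected_dist_sq_l1(x0, lam, n_mc=20000, seed=0):
    """Monte-Carlo estimate of D = E[dist^2(g, lam * subdiff ||.||_1 at x0)], g ~ N(0, I_n).
    For the l1 norm the squared distance splits coordinate-wise:
      on the support of x0:  (g_i - lam*sign(x0_i))^2
      off the support:       max(|g_i| - lam, 0)^2"""
    rng = np.random.default_rng(seed)
    supp = x0 != 0
    g = rng.standard_normal((n_mc, x0.size))
    d_on = (g[:, supp] - lam * np.sign(x0[supp])) ** 2
    d_off = np.maximum(np.abs(g[:, ~supp]) - lam, 0.0) ** 2
    return (d_on.sum(axis=1) + d_off.sum(axis=1)).mean()

x0 = np.zeros(100)
x0[:5] = 1.0                       # sparse signal with support size 5
D = expected_dist_sq_l1(x0, lam=2.0)
```

For this example, each support coordinate contributes E[(g − 2)²] = 5 and each off-support coordinate a small tail term, so D lands near 26.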
The maximum-likelihood (ML) decoder for symbol detection in large multiple-input multiple-output wireless communication systems is typically computationally prohibitive. In this paper, we study a popular and practical alternative, namely the box-relaxation optimization (BRO) decoder, which is a natural convex relaxation of the ML. For i.i.d. real Gaussian channels with additive Gaussian noise, we obtain exact asymptotic expressions for the symbol error rate (SER) of the BRO. The formulas are particularly simple, they yield useful insights, and they allow accurate comparisons to the matched-filter bound (MFB) and to the zero-forcing decoder. For BPSK signals the SER performance of the BRO is within 3 dB of the MFB for square systems, and it approaches the MFB as the number of receive antennas grows large compared to the number of transmit antennas. Our analysis further characterizes the empirical density function of the solution of the BRO, and shows that error events for any fixed number of symbols are asymptotically independent. The fundamental tool behind the analysis is the convex Gaussian min-max theorem.
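A minimal sketch of the BRO decoder for BPSK: minimize ‖y − Hx‖_2^2 over the box [−1, 1]^n and quantize the solution to symbols. Projected gradient descent is used here as the box-constrained least-squares solver purely for illustration (the paper does not prescribe a solver), and the system sizes and noise level are assumptions for the example.

```python
import numpy as np

def bro_decode(H, y, n_iter=300):
    """Box-relaxation decoder: minimize ||y - H x||_2^2 over the box [-1, 1]^n
    by projected gradient descent, then take hard BPSK decisions."""
    step = 1.0 / np.linalg.norm(H, 2) ** 2
    x = np.zeros(H.shape[1])
    for _ in range(n_iter):
        x = x - step * (H.T @ (H @ x - y))   # gradient step on the least-squares objective
        x = np.clip(x, -1.0, 1.0)            # projection onto the box [-1, 1]^n
    return np.sign(x)                        # quantize to BPSK symbols

rng = np.random.default_rng(1)
n_tx, n_rx = 16, 32                          # more receive than transmit antennas
s = rng.choice([-1.0, 1.0], size=n_tx)       # transmitted BPSK symbols
H = rng.standard_normal((n_rx, n_tx)) / np.sqrt(n_rx)   # i.i.d. Gaussian channel
y = H @ s + 0.05 * rng.standard_normal(n_rx)            # noisy received vector
s_hat = bro_decode(H, y)
ser = np.mean(s_hat != s)                    # empirical symbol error rate
```

At this low noise level and with twice as many receive as transmit antennas, the decoder is expected to recover nearly all symbols, consistent with the BRO approaching the MFB in that regime.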
We consider the Gaussian multiple-access channel with two critical departures from the classical asymptotics: a) the number of users is proportional to the blocklength, and b) each user sends a fixed number of data bits. We provide improved bounds on the tradeoff between the user density and the energy-per-bit. Interestingly, in this information-theoretic problem we rely on Gordon's lemma from Gaussian process theory. From the engineering standpoint, we discover a surprising new effect: good coded-access schemes can achieve perfect multi-user interference cancellation at low user density. In addition, by a similar method we analyze the limits of false discovery in the binary sparse regression problem, in the asymptotic regime where the number of measurements goes to infinity at fixed ratios with the problem dimension, sparsity, and noise level. Our rigorous bound matches the formal replica-method prediction for some range of parameters to within numerical precision.
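The false-discovery setup in binary sparse regression can be illustrated with a toy Monte Carlo. The support estimator below is simple correlation screening, chosen only for illustration; it is an assumption for this sketch and not the scheme or analysis of the paper, and all problem sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, k = 400, 200, 5                            # dimension, measurements, sparsity
x0 = np.zeros(n)
x0[rng.choice(n, size=k, replace=False)] = 1.0   # binary sparse signal in {0, 1}^n
A = rng.standard_normal((m, n)) / np.sqrt(m)     # columns with unit norm in expectation
y = A @ x0 + 0.1 * rng.standard_normal(m)        # noisy linear measurements

# Crude support estimate: keep the k coordinates most correlated with y,
# then count how many selected coordinates lie outside the true support.
scores = np.abs(A.T @ y)
est_supp = np.argsort(scores)[-k:]
false_discoveries = int(np.sum(x0[est_supp] == 0))
fdp = false_discoveries / k                      # empirical false-discovery proportion
```

The paper's bounds concern far sharper asymptotic limits of this proportion; the sketch only makes the quantity being bounded concrete.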