We prove two basic conjectures on the distribution of the smallest singular value of random n × n matrices with independent entries. Under minimal moment assumptions, we show that the smallest singular value is of order n^{-1/2}, which is optimal for Gaussian matrices. Moreover, we give an optimal estimate on the tail probability. This comes as a consequence of a new and essentially sharp estimate in the Littlewood–Offord problem: for i.i.d. random variables X_k and real numbers a_k, determine the probability p that the sum Σ_k a_k X_k lies near some number v. For arbitrary coefficients a_k of the same order of magnitude, we show that they essentially lie in an arithmetic progression of length 1/p. Published by Elsevier Inc.
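As an illustrative numerical sketch of the scaling stated above (not a computation from the paper, and with arbitrary sample sizes), one can check that √n times the smallest singular value of an i.i.d. Gaussian n × n matrix stays of constant order:

```python
import numpy as np

# Illustrative check (sizes and seed are arbitrary choices): for an n x n
# matrix with i.i.d. standard Gaussian entries, the smallest singular value
# s_min is of order n^{-1/2}, so sqrt(n) * s_min should remain O(1) as n grows.
rng = np.random.default_rng(0)

scaled_values = []
for n in (100, 400):
    A = rng.standard_normal((n, n))
    s_min = np.linalg.svd(A, compute_uv=False)[-1]  # singular values are sorted descending
    scaled = np.sqrt(n) * s_min
    scaled_values.append(scaled)
    print(f"n={n}: sqrt(n) * s_min = {scaled:.3f}")
```

The scaled quantity fluctuates but does not drift to 0 or infinity as n increases, consistent with the n^{-1/2} order of magnitude.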
This paper improves upon the best-known guarantees for exact reconstruction of a sparse signal f from a small universal sample of Fourier measurements. The reconstruction method that has recently gained momentum in sparse approximation theory is to relax this highly nonconvex problem to a convex problem and then solve it as a linear program. We show that there exists a set of frequencies Ω such that one can exactly reconstruct every r-sparse signal f of length n from its frequencies in Ω, using the convex relaxation, and Ω has size k(r, n) = O(r log(n) · log²(r) · log(r log n)) = O(r log⁴ n). A random set Ω satisfies this with high probability. This estimate is optimal within the log log n and log³ r factors. We also give a relatively short argument for a similar problem with k(r, n) ≤ r[12 + 8 log(n/r)] Gaussian measurements. We use methods of geometric functional analysis and probability theory in Banach spaces, which makes our arguments quite short.
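The convex relaxation mentioned above (ℓ1 minimization, a.k.a. basis pursuit) can be sketched as a linear program. The following toy example uses Gaussian measurements with arbitrary small sizes, not the constants or frequency sets from the paper:

```python
import numpy as np
from scipy.optimize import linprog

# Sketch of the convex relaxation: recover an r-sparse x from k < n linear
# measurements b = A x by solving  min ||x||_1  subject to  A x = b,
# written as an LP in split variables x = u - v with u, v >= 0.
# Sizes below are illustrative only.
rng = np.random.default_rng(1)
n, k, r = 64, 32, 3

x_true = np.zeros(n)
support = rng.choice(n, size=r, replace=False)
x_true[support] = rng.standard_normal(r)

A = rng.standard_normal((k, n)) / np.sqrt(k)  # Gaussian measurement matrix
b = A @ x_true

c = np.ones(2 * n)            # objective: sum(u) + sum(v) = ||x||_1
A_eq = np.hstack([A, -A])     # encodes A(u - v) = b
res = linprog(c, A_eq=A_eq, b_eq=b, bounds=(0, None), method="highs")
x_hat = res.x[:n] - res.x[n:]

err = np.max(np.abs(x_hat - x_true))
print(f"recovery error: {err:.2e}")
```

With these sizes the sparse signal is recovered exactly (up to solver tolerance), illustrating why the relaxation reduces reconstruction to a tractable linear program.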
We prove an optimal estimate of the smallest singular value of a random sub-Gaussian matrix, valid for all dimensions. For an N × n matrix A with independent and identically distributed sub-Gaussian entries, the smallest singular value of A is at least of the order √N − √(n − 1) with high probability. A sharp estimate on the probability is also obtained.
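A quick numerical sanity check of the rectangular bound (an illustration with arbitrary dimensions, not an argument from the paper):

```python
import numpy as np

# Illustrative check: for a tall N x n matrix with i.i.d. standard Gaussian
# entries, the smallest singular value is of order sqrt(N) - sqrt(n - 1).
# Dimensions and seed are arbitrary choices for the demo.
rng = np.random.default_rng(2)
N, n = 800, 200

A = rng.standard_normal((N, n))
s_min = np.linalg.svd(A, compute_uv=False)[-1]
predicted = np.sqrt(N) - np.sqrt(n - 1)
ratio = s_min / predicted
print(f"s_min = {s_min:.2f}, sqrt(N) - sqrt(n-1) = {predicted:.2f}, ratio = {ratio:.2f}")
```

At these dimensions the observed smallest singular value lands close to the predicted order √N − √(n − 1).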
In this expository note, we give a modern proof of the Hanson–Wright inequality for quadratic forms in sub-gaussian random variables. We deduce a useful concentration inequality for sub-gaussian random vectors. Two examples are given to illustrate these results: concentration of distances between random vectors and subspaces, and a bound on the norms of products of random and deterministic matrices.
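The first example above (distance between a random vector and a subspace) can be illustrated numerically. This is a sketch of the phenomenon, not the note's proof; the dimensions are arbitrary:

```python
import numpy as np

# Illustration of a Hanson-Wright-type consequence: the distance from a
# standard Gaussian vector x in R^n to a fixed d-dimensional subspace E
# concentrates around sqrt(n - d). Dimensions and seed are arbitrary.
rng = np.random.default_rng(3)
n, d = 1000, 200

# Orthonormal basis of a random d-dimensional subspace E.
Q, _ = np.linalg.qr(rng.standard_normal((n, d)))

x = rng.standard_normal(n)
dist = np.linalg.norm(x - Q @ (Q.T @ x))  # dist(x, E) = ||(I - P_E) x||
gap = abs(dist - np.sqrt(n - d))
print(f"dist = {dist:.2f}, sqrt(n - d) = {np.sqrt(n - d):.2f}")
```

Here dist² = xᵀ(I − P_E)x is a quadratic form in sub-gaussian variables with E dist² = n − d, which is exactly the setting where the inequality applies.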
Abstract. Classical random matrix theory is mostly focused on asymptotic spectral properties of random matrices as their dimensions grow to infinity. At the same time, many recent applications, from convex geometry to functional analysis to information theory, operate with random matrices in fixed dimensions. This survey addresses the non-asymptotic theory of extreme singular values of random matrices with independent entries. We focus on recently developed geometric methods for estimating the hard edge of random matrices (the smallest singular value).
Random matrices are widely used in sparse recovery problems, and the relevant properties of matrices with i.i.d. entries are well understood. The current paper discusses the recently introduced Restricted Eigenvalue (RE) condition, which is among the most general assumptions on the matrix guaranteeing recovery. We prove a reduction principle showing that the RE condition can be guaranteed by checking the restricted isometry on a certain family of low-dimensional subspaces. This principle allows us to establish the RE condition for several broad classes of random matrices with dependent entries, including random matrices with subgaussian rows and non-trivial covariance structure, as well as matrices with independent rows and uniformly bounded entries. Consider the linear model Y = Xβ + ε. Here X is an n × p design matrix, Y is a vector of noisy observations, and ε is the noise term. Even in the noiseless case, recovering β (or its support) from (X, Y) seems impossible when n ≪ p, given that we have more variables than observations. A line of recent research shows that when β is sparse, that is, when it has a relatively small number of nonzero coefficients, it is possible to recover β from an underdetermined system of equations.
Keywords: ℓ1 minimization, sparsity, Restricted Eigenvalue conditions, subgaussian random matrices, design matrices with uniformly bounded entries.
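At toy scale, a restricted-isometry-type quantity of the kind the reduction principle controls can be checked by brute force over sparse supports. This is only an illustration with made-up sizes, not the paper's method, and it checks sparse singular values rather than the full RE cone condition:

```python
import numpy as np
from itertools import combinations

# Toy brute-force check (not the paper's reduction principle): for a small
# Gaussian design matrix X, compute the smallest sparse singular value
#     min over |S| = s of  sigma_min(X_S) / sqrt(n),
# a restricted-isometry-type quantity. Positivity means every s-sparse
# vector is distinguishable from the measurements X beta.
rng = np.random.default_rng(4)
n, p, s = 40, 12, 3

X = rng.standard_normal((n, p))

min_sparse_sv = min(
    np.linalg.svd(X[:, list(S)], compute_uv=False)[-1] / np.sqrt(n)
    for S in combinations(range(p), s)
)
print(f"min sparse singular value (s={s}): {min_sparse_sv:.3f}")
```

The brute force over all C(p, s) supports is only feasible at this scale; the point of the reduction principle is precisely to certify such conditions for large random designs without enumeration.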