Abstract-This paper considers the model problem of reconstructing an object from incomplete frequency samples. Consider a discrete-time signal f ∈ C^N and a randomly chosen set of frequencies Ω. Is it possible to reconstruct f from the partial knowledge of its Fourier coefficients on the set Ω? A typical result of this paper is as follows. Suppose that f is a superposition of |T| spikes obeying |T| ≤ C_M · (log N)^(-1) · |Ω| for some constant C_M > 0. We do not know the locations of the spikes nor their amplitudes. Then with probability at least 1 - O(N^(-M)), f can be reconstructed exactly as the solution to the ℓ1 minimization problem min_g Σ_t |g(t)| subject to ĝ(ω) = f̂(ω) for all ω ∈ Ω. In short, exact recovery may be obtained by solving a convex optimization problem. We give numerical values for C_M which depend on the desired probability of success. Our result may be interpreted as a novel kind of nonlinear sampling theorem. In effect, it says that any signal made out of |T| spikes may be recovered by convex programming from almost every set of frequencies of size O(|T| · log N). Moreover, this is nearly optimal in the sense that any method succeeding with probability 1 - O(N^(-M)) would in general require a number of frequency samples at least proportional to |T| · log N. The methodology extends to a variety of other situations and higher dimensions. For example, we show how one can reconstruct a piecewise constant (one- or two-dimensional) object from incomplete frequency samples, provided that the number of jumps (discontinuities) obeys the condition above, by minimizing other convex functionals such as the total variation of f.
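As a concrete illustration of the recovery procedure described above, the sketch below solves the ℓ1 minimization as a linear program. It is only a toy sketch under stated assumptions: a real-valued partial DCT matrix stands in for the paper's complex partial Fourier samples (so the problem remains a plain LP), and the sizes N, S, M and the random seed are arbitrary illustrative choices, not values from the paper.

```python
import numpy as np
from scipy.fft import dct
from scipy.optimize import linprog

rng = np.random.default_rng(0)
N, S, M = 64, 3, 40  # signal length, number of spikes, number of samples (illustrative)

# A signal made of a few spikes at unknown locations with unknown amplitudes.
x0 = np.zeros(N)
spikes = rng.choice(N, size=S, replace=False)
x0[spikes] = rng.standard_normal(S)

# Partial DCT measurements: a real-valued stand-in for partial Fourier samples.
D = dct(np.eye(N), axis=0, norm="ortho")     # orthogonal DCT matrix
rows = rng.choice(N, size=M, replace=False)  # randomly chosen "frequencies"
A, b = D[rows], D[rows] @ x0

# Basis pursuit: min ||x||_1 subject to Ax = b,
# via the standard LP split x = u - v with u >= 0, v >= 0.
c = np.ones(2 * N)
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=b, bounds=(0, None))
x_hat = res.x[:N] - res.x[N:]
print(np.max(np.abs(x_hat - x0)))  # tiny residual when recovery is exact
```

Each of the 2N nonnegative variables costs 1 in the objective, so minimizing cᵀ[u; v] is exactly minimizing the ℓ1 norm of u − v when the LP puts no mass on both halves at once.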
Abstract-This paper considers a natural error correcting problem with real valued input/output. We wish to recover an input vector f ∈ R^n from corrupted measurements y = Af + e. Here, A is an m by n (coding) matrix and e is an arbitrary and unknown vector of errors. Is it possible to recover f exactly from the data y? We prove that under suitable conditions on the coding matrix A, the input f is the unique solution to the ℓ1-minimization problem min_{g ∈ R^n} ‖y - Ag‖_{ℓ1}, provided that the support of the vector of errors is not too large: ‖e‖_{ℓ0} ≤ ρ · m for some ρ > 0. This work is related to the problem of finding sparse solutions to vastly underdetermined systems of linear equations. There are also significant connections with the problem of recovering signals from highly incomplete measurements. In fact, the results introduced in this paper improve on our earlier work. Finally, underlying the success of ℓ1 is a crucial property we call the uniform uncertainty principle that we shall describe in detail. Index Terms-Basis pursuit, decoding of (random) linear codes, duality in optimization, Gaussian random matrices, ℓ1 minimization, linear codes, linear programming, principal angles, restricted orthonormality, singular values of random matrices, sparse solutions to underdetermined systems.
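The decoding step min_g ‖y - Ag‖_{ℓ1} can be rewritten as a linear program with slack variables t bounding the residual componentwise. The sketch below is a minimal illustration under assumptions of its own: a Gaussian coding matrix and an arbitrary sparse corruption pattern, with dimensions and corruption fraction chosen for the demo rather than taken from the paper's thresholds.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n, m, k = 20, 80, 8  # input size, measurements, corruptions (illustrative)

A = rng.standard_normal((m, n))       # Gaussian coding matrix (example choice)
f = rng.standard_normal(n)            # input vector
e = np.zeros(m)
bad = rng.choice(m, size=k, replace=False)
e[bad] = 10 * rng.standard_normal(k)  # arbitrary, possibly large corruptions
y = A @ f + e

# Decode: min_g ||y - A g||_1, as the LP  min 1^T t  s.t.  -t <= y - A g <= t,
# over the stacked variable z = [g; t].
c = np.concatenate([np.zeros(n), np.ones(m)])
A_ub = np.block([[A, -np.eye(m)], [-A, -np.eye(m)]])
b_ub = np.concatenate([y, -y])
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * n + [(0, None)] * m)
g_hat = res.x[:n]
print(np.max(np.abs(g_hat - f)))  # near zero when decoding is exact
```

Note that g is unconstrained in sign while the slacks t are nonnegative, matching the two-sided residual bound.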
Suppose we wish to recover a vector x0 ∈ R^m (e.g., a digital signal or image) from incomplete and contaminated observations y = Ax0 + e; A is an n × m matrix with far fewer rows than columns (n ≪ m) and e is an error term. Is it possible to recover x0 accurately based on the data y? To recover x0, we consider the solution x⋆ to the ℓ1-regularization problem min ‖x‖_{ℓ1} subject to ‖Ax - y‖_{ℓ2} ≤ ε, where ε is the size of the error term e. We show that if A obeys a uniform uncertainty principle (with unit-normed columns) and if the vector x0 is sufficiently sparse, then the solution is within the noise level: ‖x⋆ - x0‖_{ℓ2} ≤ C · ε. As a first example, suppose that A is a Gaussian random matrix; then stable recovery occurs for almost all such A's provided that the number of nonzeros of x0 is of about the same order as the number of observations. As a second instance, suppose one observes few Fourier samples of x0; then stable recovery occurs for almost any set of n coefficients provided that the number of nonzeros is of the order of n/(log m)^6. In the case where the error term vanishes, the recovery is of course exact, and this work actually provides novel insights into the exact recovery phenomenon discussed in earlier papers. The methodology also explains why one can very nearly recover approximately sparse signals.
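A small numerical sketch of stable recovery under noise. The constrained problem min ‖x‖_{ℓ1} subject to ‖Ax - y‖_{ℓ2} ≤ ε is a second-order cone program rather than an LP, so the sketch instead solves the closely related Lagrangian (LASSO) form by iterative soft-thresholding (ISTA), a standard computational proxy. The matrix, sparsity, noise level, and regularization weight lam are all hypothetical values chosen for illustration, not quantities from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, S = 60, 200, 5   # rows, columns, sparsity: n << m (illustrative)
sigma = 0.01           # noise level (illustrative)

A = rng.standard_normal((n, m))
A /= np.linalg.norm(A, axis=0)  # unit-normed columns, as assumed in the text
x0 = np.zeros(m)
x0[rng.choice(m, S, replace=False)] = rng.choice([-1.0, 1.0], S)
y = A @ x0 + sigma * rng.standard_normal(n)

# ISTA for  min_x 0.5*||Ax - y||_2^2 + lam*||x||_1  (Lagrangian proxy).
lam = 0.05                     # hypothetical regularization weight
L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the gradient
x = np.zeros(m)
for _ in range(2000):
    z = x - (A.T @ (A @ x - y)) / L          # gradient step on the data fit
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold

print(np.linalg.norm(x - x0))  # error stays at roughly the noise/shrinkage level
```

The residual error here mixes the measurement noise with the soft-thresholding bias from lam; both shrink as sigma and lam shrink, mirroring the ‖x⋆ - x0‖ ≤ C·ε behavior in the text.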
Abstract-Suppose we are given a vector f in a class F_N, e.g., a class of digital signals or digital images. How many linear measurements do we need to make about f to be able to recover f to within precision ε in the Euclidean (ℓ2) metric? This paper shows that if the objects of interest are sparse in a fixed basis or compressible, then it is possible to reconstruct f to within very high accuracy from a small number of random measurements by solving a simple linear program. More precisely, suppose that the nth largest entry of the vector |f| (or of its coefficients in a fixed basis) obeys |f|_(n) ≤ R · n^(-1/p), where R > 0 and p > 0. Suppose that we take measurements y_k = ⟨f, X_k⟩, k = 1, …, K, where the X_k are N-dimensional Gaussian vectors with independent standard normal entries. Then for each f obeying the decay estimate above for some 0 < p < 1 and with overwhelming probability, our reconstruction f♯, defined as the solution to the constraints y_k = ⟨f♯, X_k⟩ with minimal ℓ1 norm, obeys ‖f - f♯‖_{ℓ2} ≤ C_p · R · (K/log N)^(-r), r = 1/p - 1/2. There is a sense in which this result is optimal; it is generally impossible to obtain a higher accuracy from any set of K measurements whatsoever. The methodology extends to various other random measurement ensembles; for example, we show that similar results hold if one observes a few randomly sampled Fourier coefficients of f. In fact, the results are quite general and require only two hypotheses on the measurement ensemble which are detailed. Index Terms-Concentration of measure, convex optimization, duality in optimization, linear programming, random matrices, random projections, signal recovery, singular values of random matrices, sparsity, trigonometric expansions, uncertainty principle.
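To illustrate the statement for compressible (rather than exactly sparse) signals, the sketch below draws a signal whose sorted magnitudes decay like n^(-1/p), takes K Gaussian measurements y_k = ⟨f, X_k⟩, and reconstructs the minimal-ℓ1 vector consistent with them via an LP. The dimensions, the exponent p, and the seed are illustrative assumptions; the comparison with the best 10-term approximation error is only a sanity check, not the paper's precise bound.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(3)
N, K = 64, 40  # ambient dimension, number of Gaussian measurements (illustrative)
p = 0.7        # compressibility exponent, 0 < p < 1 (illustrative)

# A compressible signal: sorted magnitudes decay like n^(-1/p), random signs/order.
mags = np.arange(1, N + 1) ** (-1.0 / p)
f = rng.permutation(mags * rng.choice([-1.0, 1.0], N))

X = rng.standard_normal((K, N))  # Gaussian measurement vectors X_k as rows
y = X @ f                        # y_k = <f, X_k>

# Minimal-l1 reconstruction consistent with the measurements,
# via the usual LP split f = u - v with u, v >= 0.
c = np.ones(2 * N)
res = linprog(c, A_eq=np.hstack([X, -X]), b_eq=y, bounds=(0, None))
f_sharp = res.x[:N] - res.x[N:]

err = np.linalg.norm(f_sharp - f)
best10 = np.linalg.norm(np.sort(np.abs(f))[:-10])  # best 10-term approx. error
print(err, best10)
```

The reconstruction error should be on the same rough scale as the tail of the signal's decaying coefficients, which is the qualitative content of the (K/log N)^(-r) bound.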
This reprint differs from the original in pagination and typographic detail. E. Candès and T. Tao. …would achieve with an oracle which would supply perfect information about which coordinates are nonzero, and which were above the noise level. In multivariate regression and from a model selection viewpoint, our result says that it is possible nearly to select the best subset of variables by solving a very simple convex program which, in fact, can easily be recast as a convenient linear program (LP).