We give a simple technique for verifying the Restricted Isometry Property (as introduced by Candès and Tao) for the random matrices that underlie Compressed Sensing. Our approach has two main ingredients: (i) concentration inequalities for random inner products that have recently provided algorithmically simple proofs of the Johnson-Lindenstrauss lemma; and (ii) covering numbers for finite-dimensional balls in Euclidean space. This leads to an elementary proof of the Restricted Isometry Property and brings out connections between Compressed Sensing and the Johnson-Lindenstrauss lemma. As a result, we obtain simple and direct proofs of Kashin's theorems on widths of finite balls in Euclidean space (and their improvements due to Gluskin) and proofs of the existence of optimal Compressed Sensing measurement matrices. In the process, we also prove that these measurements have a certain universality with respect to the sparsity-inducing basis.
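The concentration ingredient above is easy to see numerically. The following sketch (ours, not the paper's; the dimensions and the Gaussian ensemble with i.i.d. N(0, 1/n) entries are illustrative assumptions) checks the Johnson-Lindenstrauss-style inequality at the heart of the argument: for a fixed unit vector x, ‖Ax‖² concentrates around ‖x‖² = 1, and the RIP then follows by a union bound over a covering of the set of sparse vectors.

```python
# Monte Carlo check of the concentration inequality: for A with i.i.d.
# N(0, 1/n) entries and a fixed unit vector x, ||Ax||_2^2 has mean 1 and
# deviates from 1 by more than eps only with probability exponentially
# small in n. The RIP proof unions this bound over a covering of the
# sparse vectors.
import numpy as np

rng = np.random.default_rng(0)
n, N, trials = 128, 512, 1000    # rows, ambient dimension, repetitions

x = rng.standard_normal(N)
x /= np.linalg.norm(x)           # fixed unit vector

norms = np.empty(trials)
for t in range(trials):
    A = rng.standard_normal((n, N)) / np.sqrt(n)
    norms[t] = np.linalg.norm(A @ x) ** 2

eps = 0.25
print("mean of ||Ax||^2 :", norms.mean())
print("P(|deviation| > eps) ~", np.mean(np.abs(norms - 1.0) > eps))
```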
Humans are visual animals, and the imaging sensors that extend our reach, cameras, have improved dramatically in recent times thanks to the introduction of CCD and CMOS digital technology. Consumer digital cameras in the megapixel range are now ubiquitous thanks to the happy coincidence that the semiconductor material of choice for large-scale electronics integration (silicon) also happens to readily convert photons at visual wavelengths into electrons. In contrast, imaging at wavelengths where silicon is blind is considerably more complicated, bulky, and expensive. Thus, for comparable resolution, a US$500 digital camera for the visible becomes a US$50,000 camera for the infrared. In this article, we present a new approach to building simpler, smaller, and cheaper digital cameras that can operate efficiently across a much broader spectral range than conventional silicon-based cameras. Our approach fuses a new camera architecture based on a digital micromirror device with the mathematical theory and algorithms of compressive sampling.
We consider the problem of reconstructing an unknown function f on a domain X from samples of f at n randomly chosen points with respect to a given measure ρ_X. Given a sequence of linear spaces (V_m)_{m>0} with dim(V_m) = m ≤ n, we study the least squares approximations from the spaces V_m. It is well known that such approximations can be inaccurate when m is too close to n, even when the samples are noiseless. Our main result provides a criterion on m that describes the needed amount of regularization to ensure that the least squares method is stable and that its accuracy, measured in L²(X, ρ_X), is comparable to the best approximation error of f by elements from V_m. We illustrate this criterion for various approximation schemes, such as trigonometric polynomials, with ρ_X being the uniform measure, and algebraic polynomials, with ρ_X being either the uniform or Chebyshev measure. For such examples we also prove similar stability results using deterministic samples that are equispaced with respect to these measures. We let e_m(f) = ‖f − P_m f‖, with P_m the orthogonal projection onto V_m, denote the best approximation error.
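The instability when m approaches n is easy to reproduce. A minimal sketch (our own; the trigonometric space V_m, the target function, and the sample size n = 200 are illustrative assumptions) fits least squares approximations to random samples drawn from the uniform measure on [0, 2π) and shows both the conditioning of the design matrix and the L² error deteriorating as m nears n:

```python
# Least squares on n random samples from spaces V_m of trigonometric
# polynomials; stability degrades as m approaches n.
import numpy as np

rng = np.random.default_rng(1)

def design(x, m):
    """Real trigonometric basis 1, cos(kx), sin(kx), truncated to m columns."""
    cols = [np.ones_like(x)]
    k = 1
    while len(cols) < m:
        cols.append(np.cos(k * x))
        if len(cols) < m:
            cols.append(np.sin(k * x))
        k += 1
    return np.column_stack(cols)

def f(x):
    return np.exp(np.sin(x))                 # smooth test function

n = 200
x = rng.uniform(0.0, 2.0 * np.pi, n)         # random samples w.r.t. rho_X

for m in (10, 50, 150, 195):                 # m close to n becomes unstable
    V = design(x, m)
    coef, *_ = np.linalg.lstsq(V, f(x), rcond=None)
    xg = np.linspace(0.0, 2.0 * np.pi, 4000) # dense grid to estimate L2 error
    err = np.sqrt(np.mean((design(xg, m) @ coef - f(xg)) ** 2))
    print(f"m = {m:3d}  cond(V) = {np.linalg.cond(V):9.2e}  L2 error = {err:.2e}")
```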
In this paper we develop a theory of matrix completion for the extreme case of noisy 1-bit observations. Instead of observing a subset of the real-valued entries of a matrix M, we obtain a small number of binary (1-bit) measurements generated according to a probability distribution determined by the real-valued entries of M. The central question we ask is whether or not it is possible to obtain an accurate estimate of M from this data. In general this would seem impossible, but we show that the maximum likelihood estimate under a suitable constraint returns an accurate estimate of M when ‖M‖_∞ ≤ α and rank(M) ≤ r. If the log-likelihood is a concave function (e.g., under the logistic or probit observation models), then we can obtain this maximum likelihood estimate by optimizing a convex program. In addition, we show that if instead of recovering M we simply wish to obtain an estimate of the distribution generating the 1-bit measurements, then we can eliminate the requirement that ‖M‖_∞ ≤ α. For both cases, we provide lower bounds showing that these estimates are near-optimal. We conclude with a suite of experiments that both verify the implications of our theorems and illustrate some of the practical applications of 1-bit matrix completion. In particular, we compare our program to standard matrix completion methods on movie rating data in which users submit ratings from 1 to 5. In order to use our program, we quantize this data to a single bit, but we allow the standard matrix completion program to have access to the original ratings (from 1 to 5). Surprisingly, the approach based on binary data performs significantly better.
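A minimal sketch of the observation model under the logistic link, with a deliberately simplified estimator: the paper's estimator maximizes the concave log-likelihood over a convex constraint set, whereas the loop below (purely for illustration; the dimensions, sampling rate, step size, and the hard rank-r truncation are our assumptions, and the truncation is only a heuristic stand-in for the convex program) runs projected gradient ascent:

```python
# 1-bit observations: P(Y_ij = +1) = sigma(M_ij) with logistic sigma,
# observed on a random index set Omega. Heuristic recovery by gradient
# ascent on the logistic log-likelihood with an ad hoc rank-r projection.
import numpy as np

rng = np.random.default_rng(2)

def sigma(z):
    return 1.0 / (1.0 + np.exp(-z))          # logistic link

d, r, alpha, p_obs = 50, 2, 3.0, 0.5          # size, rank, bound, sampling rate
M = rng.standard_normal((d, r)) @ rng.standard_normal((r, d)) / np.sqrt(r)
M = np.clip(M, -alpha, alpha)                 # enforce ||M||_inf <= alpha

mask = rng.random((d, d)) < p_obs             # observed index set Omega
Y = np.where(rng.random((d, d)) < sigma(M), 1.0, -1.0)   # 1-bit measurements

X = np.zeros((d, d))
for _ in range(200):
    grad = mask * Y * sigma(-Y * X)           # gradient of log-likelihood
    X = np.clip(X + 1.0 * grad, -alpha, alpha)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    X = (U[:, :r] * s[:r]) @ Vt[:r]           # heuristic rank-r projection

print("relative error:", np.linalg.norm(X - M) / np.linalg.norm(M))
```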
Orthogonal Matching Pursuit (OMP) is the canonical greedy algorithm for sparse approximation. In this paper we demonstrate that the restricted isometry property (RIP) can be used for a very straightforward analysis of OMP. Our main conclusion is that the RIP of order K + 1 (with isometry constant δ < 1/(3√K)) is sufficient for OMP to exactly recover any K-sparse signal. Our analysis relies on simple and intuitive observations about OMP and matrices that satisfy the RIP. For restricted classes of K-sparse signals (those that are highly compressible), a relaxed bound on the isometry constant is also established. A deeper understanding of OMP may benefit the analysis of greedy algorithms in general. To demonstrate this, we also briefly revisit the analysis of the Regularized OMP (ROMP) algorithm.
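For reference, a minimal implementation of OMP itself, the algorithm the analysis concerns: greedily select the column most correlated with the residual, then re-solve least squares on the selected support. The dimensions and the Gaussian measurement matrix (which satisfies the RIP with high probability) are illustrative assumptions.

```python
# Orthogonal Matching Pursuit and a small exact-recovery experiment.
import numpy as np

def omp(A, y, k):
    """Recover a k-sparse x with y = A x via Orthogonal Matching Pursuit."""
    residual = y.copy()
    support = []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))   # best-matching column
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef          # orthogonalize residual
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(3)
n, N, k = 80, 256, 5
A = rng.standard_normal((n, N)) / np.sqrt(n)   # RIP holds w.h.p.
x_true = np.zeros(N)
x_true[rng.choice(N, k, replace=False)] = rng.standard_normal(k)

x_hat = omp(A, A @ x_true, k)
print("exact recovery:", np.allclose(x_hat, x_true, atol=1e-8))
```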