We give a simple technique for verifying the Restricted Isometry Property (as introduced by Candès and Tao) for random matrices that underlies Compressed Sensing. Our approach has two main ingredients: (i) concentration inequalities for random inner products that have recently provided algorithmically simple proofs of the Johnson-Lindenstrauss lemma; and (ii) covering numbers for finite-dimensional balls in Euclidean space. This leads to an elementary proof of the Restricted Isometry Property and brings out connections between Compressed Sensing and the Johnson-Lindenstrauss lemma. As a result, we obtain simple and direct proofs of Kashin's theorems on widths of finite balls in Euclidean space (and their improvements due to Gluskin) and proofs of the existence of optimal Compressed Sensing measurement matrices. In the process, we also prove that these measurements have a certain universality with respect to the sparsity-inducing basis. Communicated by Emmanuel J. Candès.
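A minimal numerical sketch of ingredient (i): a Gaussian random matrix scaled by 1/√m nearly preserves the Euclidean norm of any fixed vector, which is the Johnson-Lindenstrauss-type concentration the abstract refers to. The dimensions and seed below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N, m = 1000, 200                     # ambient dimension, number of measurements (illustrative)

# Gaussian matrix scaled so that E ||Phi x||^2 = ||x||^2 for every fixed x.
Phi = rng.normal(size=(m, N)) / np.sqrt(m)

x = rng.normal(size=N)
ratio = np.linalg.norm(Phi @ x) / np.linalg.norm(x)
# Concentration: the ratio is close to 1 with overwhelming probability,
# with deviations of order 1/sqrt(m).
```

Combining this single-vector concentration with a covering argument over the unit spheres of all sparse supports (ingredient (ii)) is what yields the Restricted Isometry Property.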
Under certain conditions (known as the Restricted Isometry Property, or RIP) on the m × N matrix Φ (where m < N), vectors x ∈ R^N that are sparse (i.e. have most of their entries equal to zero) can be recovered exactly from y := Φx even though Φ^(−1)(y) is typically an (N − m)-dimensional hyperplane; in addition x is then equal to the element in Φ^(−1)(y) of minimal ℓ1-norm. This minimal element can be identified via linear programming algorithms. We study an alternative method of determining x, as the limit of an Iteratively Re-weighted Least Squares (IRLS) algorithm. The main step of this IRLS finds, for a given weight vector w, the element in Φ^(−1)(y) with smallest ℓ2(w)-norm. If x^(n) is the solution at iteration step n, then the new weight w^(n) is defined by w_i^(n) = [(x_i^(n))^2 + ε_n^2]^(−1/2), i = 1, …, N, for a decreasing sequence of adaptively defined ε_n; this updated weight is then used to obtain x^(n+1) and the process is repeated. We prove that when Φ satisfies the RIP conditions, the sequence x^(n) converges for all y, regardless of whether Φ^(−1)(y) contains a sparse vector. If there is a sparse vector in Φ^(−1)(y), then the limit is this sparse vector, and when x^(n) is sufficiently close to the limit, the remaining steps of the algorithm converge exponentially fast (linear convergence in the terminology of numerical optimization). The same algorithm with the "heavier" weight w_i^(n) = [(x_i^(n))^2 + ε_n^2]^(τ/2 − 1), i = 1, …, N, where 0 < τ < 1, can recover sparse solutions as well; more importantly, we show that its local convergence is superlinear and approaches a quadratic rate as τ approaches zero.
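A sketch of the IRLS iteration described in this abstract. Each step solves the weighted least-squares problem min Σ_i w_i x_i² subject to Φx = y, which has the closed form x = DΦᵀ(ΦDΦᵀ)^(−1)y with D = diag(1/w_i). The problem sizes, seed, and the small floor on ε (added for numerical safety) are our illustrative assumptions.

```python
import numpy as np

def irls(Phi, y, k, iters=60):
    """IRLS sketch: weighted least-squares steps with an adaptively
    decreasing smoothing parameter eps_n, driving the iterate toward
    the sparse element of the affine solution set {x : Phi x = y}."""
    x = Phi.T @ np.linalg.solve(Phi @ Phi.T, y)   # unweighted least-squares start
    eps = 1.0
    for _ in range(iters):
        r = np.sort(np.abs(x))[::-1]              # magnitudes in decreasing order
        # Adaptive eps update based on the (k+1)-st largest magnitude,
        # floored at 1e-9 to keep the linear system well conditioned.
        eps = max(min(eps, r[k] / Phi.shape[1]), 1e-9)
        # D = diag(1/w_i) = diag(sqrt(x_i^2 + eps^2))
        # for the weights w_i = (x_i^2 + eps^2)^(-1/2).
        D = np.diag(np.sqrt(x**2 + eps**2))
        x = D @ Phi.T @ np.linalg.solve(Phi @ D @ Phi.T, y)
    return x

# Toy instance: recover a 5-sparse vector from 40 Gaussian measurements
# in dimension 100 (a regime where the RIP comfortably holds).
rng = np.random.default_rng(1)
m, N, k = 40, 100, 5
Phi = rng.normal(size=(m, N)) / np.sqrt(m)
x_true = np.zeros(N)
x_true[rng.choice(N, k, replace=False)] = rng.normal(size=k)
x_rec = irls(Phi, Phi @ x_true, k)
```

In this easy regime the iterates settle onto the sparse vector; the exponential local convergence the abstract describes is visible as a rapid drop in the error once the support is identified.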
Compressed sensing is a new concept in signal processing where one seeks to minimize the number of measurements to be taken from signals while still retaining the information necessary to approximate them well. The ideas have their origins in certain abstract results from functional analysis and approximation theory by Kashin [23] but were recently brought into the forefront by the work of Candès, Romberg and Tao [7,5,6] and Donoho [9], who constructed concrete algorithms and showed their promise in application. There remain several fundamental questions on both the theoretical and practical side of compressed sensing. This paper is primarily concerned with one of these theoretical issues, revolving around just how well compressed sensing can approximate a given signal from a given budget of fixed linear measurements, as compared to adaptive linear measurements. More precisely, we consider discrete signals x ∈ R^N, allocate n < N linear measurements of x, and we describe the range of k for which these measurements encode enough information to recover x in the sense of ℓ_p to the accuracy of best k-term approximation. We also consider the problem of having such accuracy only with high probability.
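The benchmark in this abstract is the best k-term approximation error σ_k(x) in ℓ_p, which is attained simply by keeping the k largest-magnitude entries of x. A minimal sketch (the function name and example values are ours):

```python
import numpy as np

def best_k_term_error(x, k, p=1.0):
    """sigma_k(x) in ell_p: the error of the best approximation of x by
    vectors with at most k nonzero entries. The optimal approximant keeps
    the k largest-magnitude entries, so the error is the ell_p norm of
    the discarded tail."""
    tail = np.sort(np.abs(x))[:-k] if k > 0 else np.abs(x)
    return float((tail ** p).sum() ** (1.0 / p))

x = np.array([5.0, -0.1, 3.0, 0.2, -4.0, 0.05])
err = best_k_term_error(x, k=3, p=1.0)   # discards 0.2, 0.1, 0.05
```

A compressed sensing scheme is then judged by whether its (non-adaptive) measurements allow recovery of every x with error comparable to this σ_k(x), which no method, adaptive or not, can beat.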
This is a survey of nonlinear approximation, especially that part of the subject which is important in numerical computation. Nonlinear approximation means that the approximants do not come from linear spaces but rather from nonlinear manifolds. The central question to be studied is what, if any, are the advantages of nonlinear approximation over the simpler, more established, linear methods. This question is answered by studying the rate of approximation, which is the decrease in error versus the number of parameters in the approximant. The number of parameters usually correlates well with computational effort. It is shown that in many settings the rate of nonlinear approximation can be characterized by certain smoothness conditions which are significantly weaker than required in the linear theory. Emphasis in the survey is placed on approximation by piecewise polynomials and wavelets as well as their numerical implementation. Results on highly nonlinear methods such as optimal basis selection and greedy algorithms (adaptive pursuit) are also given. Applications to image processing, statistical estimation, regularity for PDEs, and adaptive algorithms are discussed.
This paper is concerned with the construction and analysis of wavelet-based adaptive algorithms for the numerical solution of elliptic equations. These algorithms approximate the solution u of the equation by a linear combination of N wavelets. Therefore, a benchmark for their performance is provided by the rate of best approximation to u by an arbitrary linear combination of N wavelets (so-called N-term approximation), which would be obtained by keeping the N largest wavelet coefficients of the real solution (which of course is unknown). The main result of the paper is the construction of an adaptive scheme which produces an approximation to u with error O(N^(−s)) in the energy norm, whenever such a rate is possible by N-term approximation. The range of s > 0 for which this holds is only limited by the approximation properties of the wavelets together with their ability to compress the elliptic operator. Moreover, it is shown that the number of arithmetic operations needed to compute the approximate solution stays proportional to N. The adaptive algorithm applies to a wide class of elliptic problems and wavelet bases. The analysis in this paper puts forward new techniques for treating elliptic problems as well as the linear systems of equations that arise from the wavelet discretization.
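A toy numerical check (ours, not from the paper) of the benchmark rate: a coefficient sequence decaying like n^(−(s+1/2)) yields an N-term error of order N^(−s) in an ℓ2-type (energy) norm, so the scaled error N^s · err stays roughly constant as N grows.

```python
import numpy as np

s = 1.0                                # assumed decay/smoothness parameter (illustrative)
n = np.arange(1, 100_001)
c = n ** (-(s + 0.5))                  # model coefficient sequence, already sorted by size

scaled = []
for N in (10, 100, 1000):
    # Error after keeping the N largest coefficients: ell_2 norm of the tail.
    err = np.sqrt((c[N:] ** 2).sum())
    scaled.append(err * N ** s)        # roughly constant iff err = O(N^{-s})
```

This is exactly the rate an adaptive scheme must match to be optimal in the sense of the paper, while also keeping the operation count proportional to N.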