Abstract. Inspired by significant real-life applications, in particular sparse phase retrieval and sparse pulsation frequency detection in asteroseismology, we investigate a general framework for compressed sensing in which the measurements are quasi-linear. We formulate natural generalizations of the well-known Restricted Isometry Property (RIP) to nonlinear measurements, which allow us to prove both the unique identifiability of sparse signals and the convergence of recovery algorithms that compute them efficiently. We show that for certain randomized quasi-linear measurements, including Lipschitz perturbations of classical RIP matrices and phase retrieval from random projections, the proposed restricted isometry properties hold with high probability. We analyze a generalized Orthogonal Least Squares (OLS) under the assumption that the magnitudes of the signal entries to be recovered decay fast. Greed is good again, as we show that this algorithm performs efficiently in phase retrieval and asteroseismology. For situations where the decay assumption on the signal does not necessarily hold, we propose two alternative algorithms, which are natural generalizations of the well-known iterative hard- and soft-thresholding. While these algorithms are rarely successful for the mentioned applications, we show strong recovery guarantees for quasi-linear measurements which are Lipschitz perturbations of RIP matrices.

Key words. compressed sensing, restricted isometry property, greedy algorithm, quasi-linear, iterative thresholding

AMS subject classifications. 94A20, 47J25, 15B52

1. Introduction. Compressed sensing addresses the problem of recovering nearly sparse signals from vastly incomplete measurements [11,12,14,15,21]. By exploiting prior assumptions on the signal, the number of measurements can be well below the Shannon sampling rate, and effective reconstruction algorithms are available. The standard compressed sensing approach deals with linear measurements.
The success of signal recovery algorithms often relies on the so-called Restricted Isometry Property (RIP) [12,15,27,35,38,39], which is a near-identity spectral property of small submatrices of the measurement Gramian. The RIP condition is satisfied with high probability, and with a nearly optimal number of measurements, for a large class of random measurements [3,4,14,35,38], which explains the popularity of all sorts of random sensing approaches. The most effective recovery algorithms are based either on a greedy approach or on variational models, such as ℓ1-norm minimization, leading to suitable iterative thresholded gradient descent methods. In the literature of mathematical signal processing, greedy algorithms for sparse recovery originate from the so-called Matching Pursuit [33], although several predecessors were well known in other communities. Among astronomers and asteroseismologists, for instance, Orthogonal Least Squares (OLS) [31] was already in use in the 1960s for the detection of significant frequencies in star light spectra (the so-called prewhitening) […]
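For concreteness, a minimal sketch of the classical (linear-measurement) OLS greedy selection might read as follows; it is an illustration of the selection rule only, not the generalized quasi-linear algorithm analyzed in the paper, and the exhaustive least squares scan over candidate columns is the textbook form rather than an optimized implementation:

```python
import numpy as np

def orthogonal_least_squares(A, y, s):
    """Greedy OLS: at each step, add the column whose inclusion in the
    active set minimizes the least squares residual ||y - A_S x_S||_2.
    (This differs from Orthogonal Matching Pursuit, which instead picks
    the column most correlated with the current residual.)"""
    n = A.shape[1]
    support = []
    for _ in range(s):
        best_j, best_res = None, np.inf
        for j in range(n):
            if j in support:
                continue
            cols = support + [j]
            coef, *_ = np.linalg.lstsq(A[:, cols], y, rcond=None)
            res = np.linalg.norm(y - A[:, cols] @ coef)
            if res < best_res:
                best_j, best_res = j, res
        support.append(best_j)
    # final least squares fit on the selected support
    x = np.zeros(n)
    x[support], *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    return x
```

The fast-decay assumption on the signal magnitudes mentioned above is what makes each greedy pick reliable: the dominant remaining entry stands out against the rest at every step.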
We propose a new iteratively reweighted least squares (IRLS) algorithm for the recovery of a matrix X ∈ C^{d1×d2} of rank r ≪ min(d1, d2) from incomplete linear observations, solving a sequence of low complexity linear problems. The easily implementable algorithm, which we call harmonic mean iteratively reweighted least squares (HM-IRLS), optimizes a non-convex Schatten-p quasi-norm penalization to promote low-rankness and carries three major strengths, in particular for the matrix completion setting. First, we observe a remarkable global convergence behavior of the algorithm's iterates to the low-rank matrix for relevant, interesting cases, for which any other state-of-the-art optimization approach fails the recovery. Secondly, HM-IRLS exhibits an empirical recovery probability close to 1 even for a number of measurements very close to the theoretical lower bound r(d1 + d2 − r), i.e., already for significantly fewer linear observations than any other tractable approach in the literature. Thirdly, HM-IRLS exhibits a locally superlinear rate of convergence (of order 2 − p) if the linear observations fulfill a suitable null space property. While for the first two properties we have so far only strong empirical evidence, we prove the third property as our main theoretical result.
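The reweighting at the heart of such Schatten-p IRLS schemes rests on an algebraic identity: with the one-sided weight W = (X X^T + εI)^{p/2−1}, the quadratic tr(W X X^T) equals Σ_i σ_i^2 (σ_i^2 + ε)^{p/2−1}, which tends to the Schatten-p quasi-norm ||X||_{S_p}^p = Σ_i σ_i^p as ε → 0. The following Python snippet (a numerical check of this identity only — not the HM-IRLS algorithm, whose harmonic-mean weight combines left and right weights) verifies it on a random instance:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((6, 8))
p, eps = 0.5, 1e-10

# Schatten-p quasi-norm to the p-th power: sum of sigma_i^p
sigma = np.linalg.svd(X, compute_uv=False)
schatten_p = np.sum(sigma ** p)

# one-sided IRLS weight W = (X X^T + eps*I)^(p/2 - 1),
# formed via the eigendecomposition of the Gramian G = X X^T
G = X @ X.T
evals, V = np.linalg.eigh(G)
W = V @ np.diag((evals + eps) ** (p / 2 - 1)) @ V.T

# weighted quadratic surrogate of the Schatten-p penalty
quad = np.trace(W @ G)
print(schatten_p, quad)  # nearly equal for small eps
```

Minimizing the quadratic surrogate subject to the data constraint, then refreshing W from the new iterate, yields one IRLS step; the smoothing parameter ε is decreased along the iterations.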
In this paper we address the numerical solution of minimal-norm residuals of nonlinear equations in finite dimensions. We take particular inspiration from the problem of finding a sparse vector solution of phase retrieval problems by means of greedy algorithms based on iterative residual minimizations in the ℓp-norm, for 1 ≤ p ≤ 2. Due to the mild smoothness of the problem, especially for p → 1, we develop and analyze a generalized version of Iteratively Reweighted Least Squares (IRLS). This simple and efficient algorithm solves optimization problems involving non-quadratic, possibly non-convex and non-smooth cost functions by transforming them into a sequence of standard least squares problems, which can in turn be tackled by efficient numerical optimization methods. While its analysis has by now been developed in many different contexts (e.g., for sparse vector and low-rank matrix optimization, and for the solution of PDEs involving p-Laplacians) when the model equation is linear, no results have so far been provided for nonlinear ones. Here we address precisely the convergence and the rate of error decay of IRLS for such nonlinear problems. The convergence analysis is based on a reformulation of the algorithm as an alternating minimization of an energy functional, whose main variables are the competitors for solutions of the intermediate reweighted least squares problems and their weights. Under a specific coercivity condition, often verified in practice, and assumptions of local convexity, we are able to show convergence of IRLS to minimizers of the nonlinear residual problem. For the case where local convexity is lacking, we propose an appropriate convexification by quadratic perturbations. Eventually we are able to show convergence of this modified procedure to at least a very good approximation of stationary points of the original problem.
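To fix ideas, the following Python sketch shows the classical IRLS iteration for an ℓp residual with a linear model — the well-understood special case that the paper extends to nonlinear equations. The smoothing parameter eps and the iteration count are illustrative choices; each step solves a weighted least squares problem whose weights are refreshed from the current residual:

```python
import numpy as np

def irls_lp_residual(A, b, p=1.0, n_iter=100, eps=1e-8):
    """Minimize ||A x - b||_p, 1 <= p <= 2, by IRLS: each iteration
    solves a weighted least squares problem with weights
    w_i = (r_i^2 + eps)^(p/2 - 1) built from the current residual r."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]  # p = 2 initialization
    for _ in range(n_iter):
        r = A @ x - b
        w = (r ** 2 + eps) ** (p / 2 - 1)
        Aw = A * w[:, None]                      # rows of A scaled by weights
        x = np.linalg.solve(A.T @ Aw, Aw.T @ b)  # normal equations A^T W A x = A^T W b
    return x
```

For p close to 1, the large weights assigned to small residuals sparsify the residual, which is exactly why this fit is robust to impulsive (outlier) noise; in the nonlinear setting studied in the paper, the inner weighted problem is no longer a plain linear least squares solve, and the convergence analysis becomes the main difficulty.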
In order to illustrate the theoretical results, we conclude the paper with several numerical experiments. We compare IRLS with standard MATLAB optimization functions on a simple and easily presentable example, and we numerically validate our theoretical results in the more involved framework of phase retrieval problems, which are our main motivation. Finally, we examine the recovery capability of the algorithm on data corrupted by impulsive noise, where sparsification of the residual is desired.