We address the inverse problem that arises in compressed sensing of a low-rank matrix. Our approach is to pose the inverse problem as an approximation problem with a specified target rank of the solution. A simple search over the target rank then provides the minimum rank solution satisfying a prescribed data approximation bound. We propose an atomic decomposition that provides an analogy between parsimonious representations of a sparse vector and a low-rank matrix. Efficient greedy algorithms to solve the inverse problem for the vector case are extended to the matrix case through this atomic decomposition. In particular, we propose an efficient and guaranteed algorithm named ADMiRA that extends CoSaMP, its analogue for the vector case. The performance guarantee is given in terms of the rank-restricted isometry property and bounds both the number of iterations and the error in the approximate solution for the general case where the solution is approximately low-rank and the measurements are noisy. With a sparse measurement operator such as the one arising in the matrix completion problem, the computation in ADMiRA is linear in the number of measurements. The numerical experiments for the matrix completion problem show that, although the measurement operator in this case does not satisfy the rank-restricted isometry property, ADMiRA is a competitive algorithm for matrix completion.
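The atomic decomposition here treats rank-one matrices as the analogue of the nonzero entries of a sparse vector, so the counterpart of hard thresholding is truncating the SVD. The following is a minimal sketch of that rank-r projection, the core step in greedy methods of this family; it is our own illustration, not the paper's implementation of ADMiRA.

```python
import numpy as np

def rank_r_project(X, r):
    """Best rank-r approximation of X in Frobenius norm (Eckart-Young),
    the low-rank analogue of keeping the r largest-magnitude entries."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

# Illustrative use: denoise a nearly rank-2 matrix by projecting onto rank 2.
rng = np.random.default_rng(0)
L = rng.standard_normal((20, 2)) @ rng.standard_normal((2, 15))  # exact rank 2
approx = rank_r_project(L + 0.01 * rng.standard_normal(L.shape), 2)
```

In a full greedy iteration this projection would be interleaved with gradient-style updates on the measurement residual, mirroring CoSaMP's support-merge and prune steps.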
We propose robust and efficient algorithms for the joint sparse recovery problem in compressed sensing, which simultaneously recover the supports of jointly sparse signals from their multiple measurement vectors obtained through a common sensing matrix. In a favorable situation, the unknown matrix, which consists of the jointly sparse signals, has linearly independent nonzero rows. In this case, the MUSIC (MUltiple SIgnal Classification) algorithm, originally proposed by Schmidt for the direction-of-arrival problem in sensor array processing and later proposed and analyzed for joint sparse recovery by Feng and Bresler, provides a guarantee with the minimum number of measurements. We focus instead on the unfavorable but practically significant case of rank defect or ill-conditioning. This situation arises with a limited number of measurement vectors, or with highly correlated signal components. In this case MUSIC fails, and in practice none of the existing methods can consistently approach the fundamental limit. We propose subspace-augmented MUSIC (SA-MUSIC), which improves on MUSIC so that the support is reliably recovered under such unfavorable conditions. Combined with the subspace-based greedy algorithms also proposed and analyzed in this paper, SA-MUSIC provides a computationally efficient algorithm with a performance guarantee. The performance guarantees are given in terms of a version of the restricted isometry property. In particular, we also present a non-asymptotic perturbation analysis of the signal subspace estimation that has been missing in previous studies of MUSIC. Index Terms: Compressed sensing, joint sparsity, multiple measurement vectors (MMV), subspace estimation, restricted isometry property (RIP), sensor array processing, spectrum-blind sampling.
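In the favorable full-row-rank case described above, the MUSIC step amounts to testing which columns of the sensing matrix lie in the signal subspace estimated from the measurements. A self-contained noiseless sketch, in our own notation (Y = AX with X jointly k-sparse); the paper's SA-MUSIC augments this subspace before testing, which this toy version omits:

```python
import numpy as np

def music_support(Y, A, k):
    """Noiseless MUSIC-style support estimate: return the k atoms of A
    that best fit the k-dimensional signal subspace spanned by Y."""
    U = np.linalg.svd(Y, full_matrices=False)[0][:, :k]   # signal subspace
    cols = A / np.linalg.norm(A, axis=0)                  # unit-norm atoms
    scores = np.linalg.norm(U.T @ cols, axis=0)           # subspace fit per atom
    return set(np.argsort(scores)[-k:].tolist())

# Illustrative instance: 3 nonzero rows, 5 measurement vectors.
rng = np.random.default_rng(1)
A = rng.standard_normal((12, 30))
X = np.zeros((30, 5))
X[[2, 7, 19]] = rng.standard_normal((3, 5))
support = music_support(A @ X, A, 3)
```

Atoms on the true support lie exactly in the estimated subspace (score 1 after normalization), while generic off-support atoms score strictly lower.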
Compressed sensing of simultaneously sparse and low-rank matrices enables recovery of sparse signals from a few linear measurements of their bilinear form. One important question is how many measurements are needed for stable reconstruction in the presence of measurement noise. Unlike conventional compressed sensing of sparse vectors, where convex relaxation via the ℓ1-norm achieves near-optimal performance, for compressed sensing of sparse low-rank matrices it has been shown recently [2] that convex programs using the nuclear norm and the mixed norm are highly suboptimal even in the noise-free scenario. We propose an alternating minimization algorithm called sparse power factorization (SPF) for compressed sensing of sparse rank-one matrices. For a class of signals whose sparse representation coefficients are fast-decaying, SPF achieves stable recovery of the rank-one matrix formed by their outer product and requires a number of measurements within a logarithmic factor of the information-theoretic fundamental limit. For the recovery of general sparse low-rank matrices, we propose subspace-concatenated SPF (SCSPF), which has near-optimal performance guarantees analogous to those of SPF in the rank-one case. Numerical results show that SPF and SCSPF empirically outperform convex programs using the best known combinations of the mixed norm and the nuclear norm.
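The alternating structure of SPF can be sketched for the rank-one model y_i = uᵀA_iv with sparse u and v: alternate least squares over each factor, hard-thresholding after each solve. This is a simplified illustration under a generic Gaussian sensing model; the initialization and thresholding schedule here are placeholders, not the paper's refined versions.

```python
import numpy as np

def hard_threshold(x, s):
    # Keep the s largest-magnitude entries of x, zero out the rest.
    z = np.zeros_like(x)
    keep = np.argsort(np.abs(x))[-s:]
    z[keep] = x[keep]
    return z

def spf_sketch(As, y, s1, s2, iters=30):
    """Alternating least squares with hard thresholding for the model
    y_i = u^T A_i v, with s1-sparse u and s2-sparse v (rough sketch)."""
    m = len(y)
    # Spectral-style initialization: (1/m) * sum_i y_i A_i concentrates
    # around u v^T for Gaussian sensing matrices.
    M = np.einsum('m,mjk->jk', y, As) / m
    v = np.linalg.svd(M)[2][0]
    for _ in range(iters):
        Bu = As @ v                                   # row i is (A_i v)^T
        u = hard_threshold(np.linalg.lstsq(Bu, y, rcond=None)[0], s1)
        Bv = np.einsum('j,mjk->mk', u, As)            # row i is u^T A_i
        v = hard_threshold(np.linalg.lstsq(Bv, y, rcond=None)[0], s2)
    return u, v

# Illustrative instance with a sparse rank-one ground truth.
rng = np.random.default_rng(2)
m, n = 200, 12
As = rng.standard_normal((m, n, n))
u0 = np.zeros(n); u0[[0, 4, 9]] = [1.0, -2.0, 1.5]
v0 = np.zeros(n); v0[[1, 5]] = [2.0, 1.0]
y = np.einsum('mjk,j,k->m', As, u0, v0)
u, v = spf_sketch(As, y, 3, 2)
```

Each least-squares subproblem is convex given the other factor; the hard-thresholding step is what enforces the sparsity prior that the convex nuclear/mixed-norm relaxations handle suboptimally.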
While the recent theory of compressed sensing provides an opportunity to overcome the Nyquist limit in recovering sparse signals, a solution approach usually takes the form of an inverse problem for an unknown signal, which depends crucially on the specific signal representation. In this paper, we propose a drastically different two-step Fourier compressive sampling framework in a continuous domain that can be implemented via measurement-domain interpolation, after which signal reconstruction can be done using classical analytic reconstruction methods. The main idea originates from the fundamental duality between sparsity in the primary space and the low-rankness of a structured matrix in the spectral domain, showing that a low-rank interpolator in the spectral domain can enjoy all of the benefits of sparse recovery with performance guarantees. Most notably, the proposed low-rank interpolation approach can be regarded as a generalization of recent spectral compressed sensing to the recovery of large classes of finite rate of innovation (FRI) signals at a near-optimal sampling rate. Moreover, for the case of cardinal representation, we show that the proposed low-rank interpolation scheme benefits from inherent regularization and an optimal incoherence parameter. Using a powerful dual certificate and the golfing scheme, we show that the new framework achieves a near-optimal sampling rate for a general class of FRI signal recovery, while the sampling rate can be further reduced for a class of cardinal splines. Numerical results using various types of FRI signals confirm that the proposed low-rank interpolation approach offers significantly better phase transitions than conventional compressive sampling approaches. Index Terms: Compressed sensing, signals of finite rate of innovation, spectral compressed sensing, low-rank matrix completion, dual certificates, golfing scheme.
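The sparsity/low-rank duality behind this framework can be seen in miniature: a signal that is a sum of r distinct complex exponentials (an FRI-type signal with r innovations) yields a Hankel-structured matrix of rank exactly r. A small illustration with made-up frequencies (our example, not the paper's construction):

```python
import numpy as np

def hankel_matrix(x, p):
    # Stack length-p sliding windows of x as rows of a Hankel-structured matrix.
    return np.array([x[i:i + p] for i in range(len(x) - p + 1)])

# A sum of r distinct complex exponentials gives a Hankel matrix of rank r:
# the structured low-rankness that a spectral-domain interpolator exploits.
t = np.arange(32)
freqs = [0.10, 0.23, 0.31]          # illustrative, well-separated frequencies
x = sum(np.exp(2j * np.pi * f * t) for f in freqs)
H = hankel_matrix(x, 8)
```

Filling in missing spectral samples so that this matrix stays low-rank is what turns sparse recovery into a structured matrix completion problem.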
Subsampled blind deconvolution is the recovery of two unknown signals from samples of their convolution. To overcome the ill-posedness of this problem, solutions based on priors tailored to the specific application have been developed. In particular, sparsity models have provided promising priors. However, in spite of the empirical success of these methods in many applications, existing analyses are rather limited in two main ways: by the disparity between the theoretical assumptions on the signal and/or measurement model and practical setups; or by the failure to provide a performance guarantee for parameter values within the optimal regime defined by the information-theoretic limits. In particular, it has been shown that a naive sparsity model is not a strong enough prior for identifiability in the blind deconvolution problem. Instead, in addition to sparsity, we adopt a conic constraint, which enforces spectral flatness of the signals. Under this prior, we provide an iterative algorithm that achieves guaranteed performance in blind deconvolution at near-optimal sample complexity. Numerical results show that the empirical performance of the iterative algorithm agrees with the performance guarantee.
Blind deconvolution (BD), the resolution of a signal and a filter given their convolution, arises in many applications. Without further constraints, BD is ill-posed. In practice, subspace or sparsity constraints have been imposed to reduce the search space, and have shown some empirical success. However, existing theoretical analysis of uniqueness in BD is rather limited. In an effort to address this still mysterious question, we derive sufficient conditions under which two vectors can be uniquely identified from their circular convolution, subject to subspace or sparsity constraints. These sufficient conditions provide the first algebraic sample complexities for BD. We first derive a sufficient condition that applies to almost all bases or frames. For blind deconvolution of vectors in C^n, with two subspace constraints of dimensions m_1 and m_2, the required sample complexity is n ≥ m_1 m_2. Then we impose a sub-band structure on one basis and derive a sufficient condition that involves a relaxed sample complexity n ≥ m_1 + m_2 − 1, which we show to be optimal. We present extensions of these results to BD with sparsity constraints or mixed constraints, with the sparsity level replacing the subspace dimension. The cost of the unknown support in this case is an extra factor of 2 in the sample complexity.

Introduction

Blind deconvolution (BD) is the bilinear inverse problem of recovering the signal and the filter simultaneously given their convolution or circular convolution. It arises in many applications, including blind image deblurring [2], blind channel equalization [3], speech dereverberation [4], and seismic data analysis [5]. Without further constraints, BD is an ill-posed problem and does not yield a unique solution. A variety of constraints have been introduced to exploit the properties of natural signals and reduce the search space.
Examples of such constraints include positivity (the signals are non-negative), subspace constraints (the signals reside in a lower-dimensional subspace), and sparsity (the signals are sparse over some dictionary). In this paper, we focus on subspace or sparsity constraints, which can be imposed on both the signal and the filter. Consider the example of blind image deblurring: a natural image can be considered sparse over a wavelet dictionary or the discrete cosine transform (DCT) dictionary, while the support of the point spread function (PSF) is usually significantly smaller than the image itself, so the filter resides in a lower-dimensional subspace. These priors serve as constraints or regularizers [6][7][8][9][10]. With a reduced search space, BD can be better posed. However, despite the success in practice, theoretical results on uniqueness in BD with a subspace or sparsity constraint are limited. Early works on identifiability in blind deconvolution studied multichannel blind deconvolution with finite impulse response (FIR) models [11, 12], in which sparsity was not considered. For single-channel blind deconvolution, sparsity was imposed as a ...
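The algebraic sample-complexity thresholds stated above (n ≥ m_1 m_2 for generic bases, n ≥ m_1 + m_2 − 1 with a sub-band structure, and an extra factor of 2 for unknown sparse supports) can be collected in a small helper. The handling of the factor of 2 is our reading of the abstract, not a formula quoted from the paper.

```python
def bd_sample_complexity(d1, d2, subband=False, sparse=False):
    """Minimum ambient dimension n for identifiability under the stated
    conditions. d1, d2 are the subspace dimensions (or, in the sparse
    case, the sparsity levels replacing them)."""
    n = d1 + d2 - 1 if subband else d1 * d2     # sub-band bound is optimal
    return 2 * n if sparse else n               # unknown supports cost 2x

# Example: two 4- and 5-dimensional subspace constraints.
generic = bd_sample_complexity(4, 5)            # generic bases: 4 * 5
relaxed = bd_sample_complexity(4, 5, subband=True)  # sub-band: 4 + 5 - 1
```

The gap between the product and the sum bounds is what makes the sub-band structure significant: the latter is linear rather than quadratic in the constraint dimensions.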