In this paper, we develop a new framework for sensing and recovering structured signals. In contrast to compressive sensing (CS) systems that employ linear measurements, sparse representations, and computationally complex convex/greedy algorithms, we introduce a deep learning framework that supports both linear and mildly nonlinear measurements, that learns a structured representation from training data, and that efficiently computes a signal estimate. In particular, we apply a stacked denoising autoencoder (SDA) as an unsupervised feature learner. The SDA enables us to capture statistical dependencies between the different elements of certain signals and thereby improve signal recovery performance compared to the CS approach.
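A minimal numerical sketch of the denoising-autoencoder idea above, assuming a one-hidden-layer network trained by plain gradient descent on synthetic low-rank signals; all sizes, the corruption level, and the learning rate are illustrative choices, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training set: 200 structured signals of length 16 whose entries
# are statistically dependent (they live near a 4-dim subspace).
n, p, h = 200, 16, 8
X = rng.standard_normal((n, 4)) @ rng.standard_normal((4, p))

# One denoising-autoencoder layer: corrupt, encode, decode.
W1 = 0.1 * rng.standard_normal((p, h)); b1 = np.zeros(h)
W2 = 0.1 * rng.standard_normal((h, p)); b2 = np.zeros(p)

lr, losses = 0.01, []
for epoch in range(500):
    Xc = X + 0.1 * rng.standard_normal(X.shape)  # denoising corruption
    H = np.tanh(Xc @ W1 + b1)                    # nonlinear encoding
    Xhat = H @ W2 + b2                           # linear reconstruction
    E = Xhat - X                                 # reconstruct the CLEAN signal
    losses.append(np.mean(E ** 2))
    # Backpropagation for the squared reconstruction error
    gW2 = H.T @ E / n; gb2 = E.mean(0)
    dH = (E @ W2.T) * (1 - H ** 2)               # tanh derivative
    gW1 = Xc.T @ dH / n; gb1 = dH.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

print(losses[0], losses[-1])  # reconstruction error should drop
```

Because the decoder must map a corrupted input back to the clean signal, the hidden layer is pushed to encode the dependencies among signal entries rather than memorize the input.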
The promise of compressive sensing (CS) has been offset by two significant challenges. First, real-world data is not exactly sparse in a fixed basis. Second, current high-performance recovery algorithms are slow to converge, which limits CS to either non-real-time applications or scenarios where massive back-end computing is available. In this paper, we attack both of these challenges head-on by developing a new signal recovery framework we call DeepInverse that learns the inverse transformation from measurement vectors to signals using a deep convolutional network. When trained on a set of representative images, the network learns both a representation for the signals (addressing challenge one) and an inverse map approximating a greedy or convex recovery algorithm (addressing challenge two). Our experiments indicate that the DeepInverse network closely approximates the solution produced by state-of-the-art CS recovery algorithms yet is hundreds of times faster in run time. The tradeoff for the ultrafast run time is a computationally intensive, off-line training procedure typical of deep networks. However, the training needs to be completed only once, which makes the approach attractive for a host of sparse recovery problems.
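The core idea above is to learn the measurement-to-signal inverse map from training data. As a minimal stand-in for the paper's deep convolutional network, the sketch below learns a linear inverse map by ridge regression on synthetic structured signals; the operator `Phi`, the subspace signal model, the regularizer, and all sizes are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(3)

# Fixed random measurement operator, as in CS (sizes illustrative).
p, m, n = 64, 16, 500
Phi = rng.standard_normal((m, p)) / np.sqrt(m)

# Structured training signals lying on a low-dimensional subspace.
A = rng.standard_normal((6, p))
Xtrain = rng.standard_normal((n, 6)) @ A
Ytrain = Xtrain @ Phi.T                      # measurements y = Phi x

# Learn the inverse map y -> x from data. DeepInverse uses a deep
# convolutional network; a ridge-regression linear map stands in here
# as the simplest possible learned inverse.
lam = 1e-3
M = np.linalg.solve(Ytrain.T @ Ytrain + lam * np.eye(m),
                    Ytrain.T @ Xtrain)       # shape (m, p)

# Recover an unseen signal drawn from the same signal class.
x = rng.standard_normal(6) @ A
xhat = (Phi @ x) @ M
print(np.mean((xhat - x) ** 2) / np.mean(x ** 2))  # relative error
```

Once `M` is fit (the analogue of the off-line training phase), recovery is a single matrix-vector product, which mirrors the run-time advantage the abstract describes: all of the optimization cost is paid once, up front.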
We consider the problem of recovering a vector β₀ ∈ ℝ^p from n random and noisy linear observations y = Xβ₀ + w, where X is the measurement matrix and w is noise. The LASSO estimate is given by the solution to the optimization problem

β̂_λ = argmin_β (1/2)‖y − Xβ‖₂² + λ‖β‖₁.

Among the iterative algorithms that have been proposed for solving this optimization problem, approximate message passing (AMP) has attracted attention for its fast convergence. The iterations of AMP are given by

β^{t+1} = η(β^t + Xᵀz^t; τ^t),  z^t = y − Xβ^t + (|I^t|/n) z^{t−1},

where β^t, z^t, and I^t denote estimates of β₀, y − Xβ₀, and the active set of β₀ at iteration t, respectively, and η(x_i; τ) = (|x_i| − τ)₊ sign(x_i) denotes the soft thresholding function with threshold parameter τ. Despite significant progress in the theoretical analysis of the LASSO and AMP estimates, little is known about their behavior as a function of the regularization parameter λ or the threshold parameters τ^t. For instance, the following basic questions have not yet been studied in the literature: (i) How does the size of the active set ‖β̂_λ‖₀/p behave as a function of λ? (ii) How does the mean square error ‖β̂_λ − β₀‖₂²/p behave as a function of λ? (iii) How does ‖β^t − β₀‖₂²/p behave as a function of τ^1, …, τ^{t−1}? Answering these questions will help in addressing practical challenges regarding the optimal tuning of λ or τ^1, τ^2, …. This paper answers these questions in the asymptotic setting (i.e., p → ∞ and n → ∞ while the ratio n/p converges to a fixed number in (0, 1)) and shows how these results can be employed to derive simple and theoretically optimal approaches for tuning the parameters τ^1, …, τ^t for AMP or λ for LASSO. It also explores the connection between the optimal tuning of the parameters of AMP and the optimal tuning of LASSO.

MSC 2010 subject classifications: 62G05, 62J05.
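The AMP iteration with soft thresholding described above can be sketched as follows. The problem sizes, noise level, and the heuristic threshold rule τ^t = 2‖z^t‖/√n are illustrative assumptions, not the (asymptotically optimal) tuning the paper derives:

```python
import numpy as np

rng = np.random.default_rng(1)

# A k-sparse ground truth and normalized Gaussian design (illustrative).
n, p, k = 250, 500, 25
X = rng.standard_normal((n, p)) / np.sqrt(n)   # approx. unit-norm columns
beta0 = np.zeros(p)
beta0[rng.choice(p, k, replace=False)] = rng.standard_normal(k)
y = X @ beta0 + 0.01 * rng.standard_normal(n)

def eta(x, tau):
    """Soft thresholding: (|x_i| - tau)_+ sign(x_i)."""
    return np.maximum(np.abs(x) - tau, 0.0) * np.sign(x)

# AMP: beta^{t+1} = eta(beta^t + X^T z^t; tau^t),
#      z^t = y - X beta^t + (|I^t|/n) z^{t-1}   (Onsager correction).
beta, z = np.zeros(p), y.copy()
for t in range(30):
    tau = 2.0 * np.linalg.norm(z) / np.sqrt(n)  # heuristic threshold
    beta_new = eta(beta + X.T @ z, tau)
    onsager = z * np.count_nonzero(beta_new) / n
    z = y - X @ beta_new + onsager
    beta = beta_new

print(np.mean((beta - beta0) ** 2))             # per-coordinate MSE
```

The Onsager term (|I^t|/n) z^{t−1} is what distinguishes AMP from plain iterative soft thresholding; dropping it slows convergence markedly, which is why the threshold sequence τ^1, τ^2, … discussed above is the natural tuning knob.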
In this paper we develop a novel computational sensing framework for sensing and recovering structured signals. When trained on a set of representative signals, our framework learns to take undersampled measurements and recover signals from them using a deep convolutional neural network. In other words, it learns a transformation from the original signals to a near-optimal number of undersampled measurements and the inverse transformation from measurements to signals. This is in contrast to traditional compressive sensing (CS) systems that use random linear measurements and convex optimization or iterative algorithms for signal recovery. We compare our new framework with ℓ1-minimization from the phase transition point of view and demonstrate that it outperforms ℓ1-minimization in the regions of the phase transition plot where ℓ1-minimization cannot recover the exact solution. In addition, we experimentally demonstrate how learning the measurements enhances overall recovery performance, speeds up the training of the recovery framework, and leaves fewer parameters to learn.
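A toy version of the joint-learning idea above, where both the measurement matrix and the recovery map are learned from training signals. Here both maps are taken to be linear for simplicity (the paper uses a deep convolutional network), and all sizes and the training setup are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Structured training signals near a low-dimensional subspace (illustrative).
n, p, m = 300, 32, 8
X = rng.standard_normal((n, 4)) @ rng.standard_normal((4, p))

# Jointly learn a measurement matrix Phi (p -> m measurements) and a
# recovery map D (m -> p) by gradient descent on reconstruction error.
Phi = 0.1 * rng.standard_normal((p, m))
D = 0.1 * rng.standard_normal((m, p))

lr, losses = 0.01, []
for step in range(1000):
    Y = X @ Phi                  # learned undersampled measurements
    Xhat = Y @ D                 # learned inverse map (linear here)
    E = Xhat - X
    losses.append(np.mean(E ** 2))
    gD = Y.T @ E / n             # gradient w.r.t. the recovery map
    gPhi = X.T @ (E @ D.T) / n   # gradient w.r.t. the measurements
    Phi -= lr * gPhi
    D -= lr * gD

print(losses[0], losses[-1])
```

In contrast to a fixed random Phi, the learned measurements align with the directions along which the training signals actually vary, which is the intuition behind the improved recovery and faster training reported in the abstract.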