“…Thus, we have a well-defined γ_t^t = γ_t and γ_t^τ = 0, τ < t for CG-VAMP. Lastly, the linear update f_L with F_t(Z_t) in WS-CG-VAMP also fits into the model (72), as shown in Theorem 3 in [20]. Thus, the result (75) holds for MF-OAMP, VAMP, CG-VAMP and WS-CG-VAMP.…”
Section: Appendix B (supporting)
confidence: 54%
“…The first group of algorithms, which we refer to as OAMP-based algorithms, constructs f_D and f_L such that their output errors are asymptotically orthogonal to their input errors [10]. This group of algorithms includes Matched Filter OAMP (MF-OAMP) [10], [27], VAMP [9], [24], CG-VAMP [25], [19], [20], WS-CG-VAMP [19] and others. Here the structure of the function…”
Section: A. OAMP-based Algorithms (mentioning)
confidence: 99%
“…We consider the large-scale compressed sensing scenario M < N with a subsampling factor δ = M/N = O(1). While there are many first-order iterative methods for recovering x from the set of measurements (1), including [11], [26], [3], [7] and many others, in this work we focus on the family of Scalable Message Passing (SMP) algorithms that includes Approximate Message Passing (AMP) [8], Orthogonal AMP (OAMP) [10], Vector AMP (VAMP) [16], Conjugate Gradient VAMP (CG-VAMP) [19], [25], [20], Warm-Started CG-VAMP (WS-CG-VAMP) [19], Convolutional AMP (CAMP) [23] and others. When the measurement operator A comes from a certain family of random matrices, which may be different for each example of SMP, these algorithms demonstrate high per-iteration improvement and stable, predictable dynamics.…”
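The measurement model in the quoted setup can be illustrated with a minimal sketch; the i.i.d. Gaussian operator, sparsity level and noise scale below are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 1000, 400                 # signal and measurement dimensions, M < N
delta = M / N                    # subsampling factor delta = M/N = O(1)

# A random measurement operator (one possible choice of random matrix family)
A = rng.standard_normal((M, N)) / np.sqrt(M)

# A sparse ground-truth signal and noisy measurements y = A x + w
x = rng.standard_normal(N) * (rng.random(N) < 0.1)
w = 0.01 * rng.standard_normal(M)
y = A @ x + w
```

Recovering x from y with M < N is the underdetermined linear inverse problem that the SMP algorithms listed above address.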
Many modern imaging applications can be modeled as compressed sensing linear inverse problems. When the measurement operator involved in the inverse problem is sufficiently random, denoising Scalable Message Passing (SMP) algorithms have the potential to demonstrate high efficiency in recovering compressed data. One of the key components enabling SMP to achieve fast convergence, stability and predictable dynamics is the Onsager correction, which must be updated at each iteration of the algorithm. This correction involves the denoiser's divergence, which is traditionally estimated via the Black-Box Monte Carlo (BB-MC) method [14]. While the BB-MC method demonstrates satisfactory estimation accuracy, it requires executing the denoiser additional times at each iteration and might lead to a substantial increase in the computational cost of the SMP algorithms. In this work we develop two Large System Limit models of the Onsager correction for denoisers operating within SMP algorithms and use these models to propose two practical classes of divergence estimators that require no additional executions of the denoiser and demonstrate similar or superior correction compared to the BB-MC method.
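The BB-MC divergence estimation mentioned above can be sketched as follows. This is a minimal illustration of the standard Monte Carlo probe, not the authors' implementation; the soft-thresholding denoiser and parameter values are hypothetical stand-ins chosen because its exact divergence is known in closed form:

```python
import numpy as np

def bb_mc_divergence(denoiser, r, eps=1e-3, rng=None):
    """Black-Box Monte Carlo estimate of the normalized divergence
    (1/N) tr(df/dr) via a single random probe eta:
        div ~= eta^T (f(r + eps*eta) - f(r)) / (N * eps).
    Note the extra denoiser execution this requires per estimate."""
    if rng is None:
        rng = np.random.default_rng()
    N = r.size
    eta = rng.standard_normal(N)
    return eta @ (denoiser(r + eps * eta) - denoiser(r)) / (N * eps)

# Soft-thresholding denoiser: its exact normalized divergence is the
# fraction of coordinates whose magnitude exceeds the threshold.
soft = lambda r, tau=1.0: np.sign(r) * np.maximum(np.abs(r) - tau, 0.0)

rng = np.random.default_rng(0)
r = rng.standard_normal(10000)
est = bb_mc_divergence(soft, r, rng=rng)
exact = np.mean(np.abs(r) > 1.0)
```

The extra call to `denoiser` inside the estimator is precisely the per-iteration overhead that the divergence estimators proposed in this work avoid.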
“…Moreover, if the approximation produced by CG is sufficiently poor, then v_{A→B}^{t,i} might even increase with respect to v_{A→B}^{t-1}. Based on numerical experiments in the current and the previous work [32], we have observed that the number of inner-loop iterations sufficient to achieve a certain reduction of v_{A→B}^{t,i} per outer-loop iteration changes significantly with t, which motivates an adaptive choice of i[t] at each t.…”
Section: Adaptive CG in CG-VAMP (mentioning)
confidence: 89%
“…Practical estimation of v_{A→B}^{t,i} in CG-VAMP. Next, we propose an asymptotically consistent estimator for the variance v_{A→B}^{t,i} that can be naturally implemented within CG and has negligible computational and memory costs. For this, we expand the result (32) and use the identity (18) to obtain v_{A→B}^{t,i} a.s.…”
Section: Stable Implementation of CG-VAMP (mentioning)
The recently proposed Vector Approximate Message Passing (VAMP) algorithm demonstrates great reconstruction potential in solving compressed sensing related linear inverse problems. VAMP provides high per-iteration improvement, can utilize powerful denoisers like BM3D, has rigorously defined dynamics and is able to recover signals measured by highly undersampled and ill-conditioned linear operators. Yet its applicability is limited to relatively small problem sizes due to the necessity of computing the expensive LMMSE estimator at each iteration. In this work we consider the problem of upscaling VAMP by utilizing Conjugate Gradient (CG) to approximate the intractable LMMSE estimator. We propose a rigorous method for correcting and tuning CG within CG-VAMP to achieve a stable and efficient reconstruction. To further improve the performance of CG-VAMP, we design a warm-starting scheme for CG and develop theoretical models for the Onsager correction and the State Evolution of Warm-Started CG-VAMP (WS-CG-VAMP). Additionally, we develop robust and accurate methods for implementing the WS-CG-VAMP algorithm. Numerical experiments on large-scale image reconstruction problems demonstrate that WS-CG-VAMP requires far fewer CG iterations than CG-VAMP to achieve the same or superior level of reconstruction.
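The core idea of replacing the exact LMMSE solve with a few CG iterations can be sketched as follows. This is a minimal illustration under assumed notation: the system matrix W = σ_w² I + γ⁻¹ A Aᵀ and all parameter values are hypothetical stand-ins for the paper's LMMSE system, not taken from it:

```python
import numpy as np

def cg_solve(matvec, b, num_iters=10):
    """Plain Conjugate Gradient for W x = b, W symmetric positive
    definite, accessed only through the matrix-vector product `matvec`.
    A few CG iterations approximate the exact LMMSE solve at a
    fraction of the cost of a direct O(M^3) inversion."""
    x = np.zeros_like(b)
    r = b - matvec(x)          # initial residual
    p = r.copy()               # initial search direction
    rs = r @ r
    for _ in range(num_iters):
        Wp = matvec(p)
        alpha = rs / (p @ Wp)  # exact line search along p
        x += alpha * p
        r -= alpha * Wp
        rs_new = r @ r
        p = r + (rs_new / rs) * p  # W-conjugate direction update
        rs = rs_new
    return x

# Hypothetical LMMSE-style system: W = sigma_w^2 I + (1/gamma) A A^T
rng = np.random.default_rng(0)
M, N = 200, 500
A = rng.standard_normal((M, N)) / np.sqrt(N)
sigma_w2, gamma = 0.1, 1.0
W = lambda v: sigma_w2 * v + (A @ (A.T @ v)) / gamma
b = rng.standard_normal(M)
x_cg = cg_solve(W, b, num_iters=20)
```

Because CG only requires products with A and Aᵀ, its per-iteration cost scales with the cost of applying the measurement operator, which is what makes the CG-VAMP family attractive at large problem sizes; the warm-starting scheme in this work further reduces the number of inner iterations needed per outer iteration.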