This paper introduces a novel algorithm to approximate the matrix with minimum nuclear norm among all matrices obeying a set of convex constraints. This problem may be understood as the convex relaxation of a rank minimization problem and arises in many important applications, such as the task of recovering a large matrix from a small subset of its entries (the famous Netflix problem). Off-the-shelf algorithms such as interior point methods are not directly amenable to large problems of this kind, with over a million unknown entries. This paper develops a simple, first-order, easy-to-implement algorithm that is extremely efficient at addressing problems in which the optimal solution has low rank. The algorithm is iterative, produces a sequence of matrices {X^k, Y^k}, and at each step mainly performs a soft-thresholding operation on the singular values of the matrix Y^k. There are two remarkable features making this attractive for low-rank matrix completion problems. The first is that the soft-thresholding operation is applied to a sparse matrix; the second is that the rank of the iterates {X^k} is empirically nondecreasing. Both these facts allow the algorithm to make use of very minimal storage space and keep the computational cost of each iteration low. On the theoretical side, we provide a convergence analysis showing that the sequence of iterates converges. On the practical side, we provide numerical examples in which 1,000 × 1,000 matrices are recovered in less than a minute on a modest desktop computer. We also demonstrate that our approach is amenable to very large scale problems by recovering matrices of rank about 10 with nearly a billion unknowns from just about 0.4% of their sampled entries. Our methods are connected with the recent literature on linearized Bregman iterations for ℓ1 minimization, and we develop a framework in which one can understand these algorithms in terms of well-known Lagrange multiplier algorithms.
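The two-step iteration described above (soft-threshold the singular values of Y^k, then take a step on the sampled residual) can be sketched in a few lines of numpy. The function name and the parameter values below are illustrative only; a dense SVD is used here, whereas an efficient implementation would exploit the sparsity of Y^k and a truncated SVD.

```python
import numpy as np

def svt_complete(M_obs, mask, tau, delta, iters=200):
    """Singular value thresholding (SVT) sketch for matrix completion.

    M_obs : observed matrix, zero outside the sample set
    mask  : boolean array, True where entries are observed
    tau   : threshold applied to the singular values
    delta : step size for the dual update
    """
    Y = np.zeros_like(M_obs)
    X = np.zeros_like(M_obs)
    for _ in range(iters):
        # Shrinkage step: soft-threshold the singular values of Y.
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        s = np.maximum(s - tau, 0.0)
        X = (U * s) @ Vt
        # Dual update, supported only on the observed entries.
        Y = Y + delta * mask * (M_obs - X)
    return X
```

A common heuristic (again, illustrative) is tau on the order of 5n for an n × n matrix and a step size delta below 2, under which the iterates converge.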
Split Bregman methods introduced in [47] have been demonstrated to be efficient tools for solving total variation (TV) norm minimization problems, which arise from partial differential equation based image restoration such as image denoising and magnetic resonance imaging (MRI) reconstruction from sparse samples. In this paper, we prove the convergence of the split Bregman iterations, where the number of inner iterations is fixed to be one. Furthermore, we show that these split Bregman iterations can be used to solve minimization problems arising from the analysis based approach for image restoration in the literature. We apply these split Bregman iterations to the analysis based image restoration approach whose analysis operator is derived from the tight framelets constructed in [59]. This gives a set of new frame based image restoration algorithms that cover several topics in image restoration, such as image denoising, deblurring, inpainting, and cartoon–texture image decomposition. Several numerical simulation results are provided.
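A minimal numpy sketch of a split Bregman iteration with one inner iteration per outer step (the setting whose convergence is analyzed above) is given below for 1-D TV denoising, min_u (μ/2)‖u − f‖² + ‖Du‖₁ with D the forward-difference operator. The function name and the parameter values μ, λ are illustrative choices, not those of the paper.

```python
import numpy as np

def tv_denoise_1d(f, mu=10.0, lam=5.0, iters=100):
    """Split Bregman sketch for min_u (mu/2)||u - f||^2 + ||D u||_1,
    running exactly one inner iteration per outer step."""
    n = len(f)
    # Forward-difference matrix D ((n-1) x n).
    D = np.zeros((n - 1, n))
    for i in range(n - 1):
        D[i, i], D[i, i + 1] = -1.0, 1.0
    A = mu * np.eye(n) + lam * D.T @ D   # normal-equation matrix, fixed
    shrink = lambda x, g: np.sign(x) * np.maximum(np.abs(x) - g, 0.0)
    u = f.copy()
    d = np.zeros(n - 1)
    b = np.zeros(n - 1)
    for _ in range(iters):
        u = np.linalg.solve(A, mu * f + lam * D.T @ (d - b))  # quadratic subproblem
        d = shrink(D @ u + b, 1.0 / lam)                      # shrinkage subproblem
        b = b + D @ u - d                                     # Bregman update
    return u
```

Replacing D by a framelet analysis operator turns this same three-step loop into the analysis based frame algorithms discussed in the abstract.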
Variational techniques (e.g., the total variation based method) are well established and effective for image restoration, as well as many other applications, while the wavelet frame based approach is relatively new and came from a different school. This paper is designed to establish a connection between these two major approaches for image restoration. The main result of this paper shows that when spline wavelet frames are used, a special model of a wavelet frame method, called the analysis based approach, can be viewed as a discrete approximation at a given resolution to variational methods. A convergence analysis as image resolution increases is given in terms of objective functionals and their approximate minimizers. This analysis goes beyond establishing the connections between the two approaches, since it leads to new understandings of both. First, it provides geometric interpretations of the wavelet frame based approach and its solutions. Second, for any given variational model, wavelet frame based approaches provide various flexible discretizations which immediately lead to fast numerical algorithms for both the wavelet frame based approaches and the corresponding variational model. Furthermore, the built-in multiresolution structure of wavelet frames can be utilized to adaptively choose proper differential operators in different regions of a given image according to the order of the singularity of the underlying solutions. This is important when multiple orders of differential operators are used in various models that generalize the total variation based method. These observations will enable us to design new methods according to the problems at hand and hence lead to wider applications of both the variational and wavelet frame based approaches. Links of wavelet frame based approaches to some more general variational methods developed recently will also be discussed.
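A minimal numerical illustration of this connection uses the piecewise-linear B-spline framelet filters (a standard spline framelet construction; the specific filters are not spelled out in the abstract): the two high-pass filters are, up to scaling, first- and second-order difference operators, i.e., discrete differential operators, while the three filters together satisfy the perfect-reconstruction identity of a tight frame.

```python
import numpy as np

# Piecewise-linear B-spline framelet filters: low-pass h0, high-passes h1, h2.
h0 = np.array([1.0, 2.0, 1.0]) / 4.0
h1 = np.sqrt(2.0) / 4.0 * np.array([1.0, 0.0, -1.0])  # ∝ central first difference
h2 = np.array([-1.0, 2.0, -1.0]) / 4.0                # ∝ second-order difference

# Tight-frame (perfect reconstruction) identity for the undecimated transform:
# the autocorrelations of the filters sum to the unit impulse.
pr = sum(np.convolve(h, h[::-1]) for h in (h0, h1, h2))
print(np.allclose(pr, [0, 0, 1, 0, 0]))  # True
```

So the framelet analysis coefficients of an image are, up to scaling, samples of its first and second finite differences, which is the discrete counterpart of the differential operators appearing in variational models.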
Image inpainting is a fundamental problem in image processing and has many applications. Motivated by the recent tight frame based methods for image restoration in either the image or the transform domain, we propose an iterative tight frame algorithm for image inpainting. We consider the convergence of this framelet-based algorithm by interpreting it as an iteration for minimizing a special functional. The convergence is proved within the framework of convex analysis and optimization theory. We also discuss the relationship of our method with other wavelet-based methods. Numerical experiments are given to illustrate the performance of the proposed algorithm.
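The iteration can be sketched as: keep the observed samples, and fill in the missing ones by soft-thresholding the frame coefficients of the current estimate. The sketch below is 1-D and, for self-containedness, uses an orthonormal DCT basis as a stand-in tight frame (any W with WᵀW = I works); the paper's algorithm uses tight framelets, and all names and parameter values here are illustrative.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix; an orthonormal basis is a (trivial) tight frame."""
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (m + 0.5) * k / n)
    C[0] /= np.sqrt(2.0)
    return C

def inpaint(f, known, W, lam=0.05, iters=400):
    """Iterative tight-frame inpainting sketch:
        u <- f on the known samples, W^T T_lam(W u) on the missing ones,
    where T_lam soft-thresholds the frame coefficients."""
    shrink = lambda x, g: np.sign(x) * np.maximum(np.abs(x) - g, 0.0)
    u = np.where(known, f, 0.0)
    for _ in range(iters):
        c = shrink(W @ u, lam)        # threshold frame coefficients
        u = np.where(known, f, W.T @ c)  # reimpose the observed samples
    return u
```

Interpreting this loop as minimizing a functional combining a data-fit term on the known samples with a weighted ℓ1 penalty on the frame coefficients is what drives the convergence analysis described above.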
Finding a solution of a linear equation Au = f with various minimization properties arises in many applications. One such application is compressed sensing, where an efficient and robust-to-noise algorithm to find a minimal ℓ1 norm solution is needed. This means that the algorithm should be tailored for large scale and completely dense matrices A, where Au and Aᵀu can be computed by fast transforms and the solution we seek is sparse. Recently, a simple and fast algorithm based on linearized Bregman iteration was proposed in [28,32] for this purpose. This paper analyzes the convergence of linearized Bregman iterations and the minimization properties of their limit. Based on our analysis, we also derive a new algorithm that is proven to be convergent with a rate. Furthermore, the new algorithm is simple and fast in approximating a minimal ℓ1 norm solution of Au = f, as shown by numerical simulations. Hence, it can be used as another choice of an efficient tool in compressed sensing.
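The basic linearized Bregman iteration alternates a gradient step on the residual with a soft-thresholding step, using only the products Au and Aᵀu, which is what makes it suitable for large dense A applied via fast transforms. The sketch below is a dense-matrix illustration; the function name and the values of μ, δ, and the iteration count are illustrative, not tuned choices from the paper.

```python
import numpy as np

def linearized_bregman(A, f, mu=5.0, delta=1.0, iters=3000):
    """Linearized Bregman sketch for approximating the minimal l1-norm
    solution of Au = f.  Convergence requires a small enough step delta
    (e.g., delta = 1 when A has orthonormal rows)."""
    n = A.shape[1]
    shrink = lambda x, g: np.sign(x) * np.maximum(np.abs(x) - g, 0.0)
    v = np.zeros(n)
    u = np.zeros(n)
    for _ in range(iters):
        v = v + A.T @ (f - A @ u)   # gradient step on the residual
        u = delta * shrink(v, mu)   # soft-thresholding step
    return u
```

Only matrix–vector products with A and Aᵀ appear in the loop, so in a compressed sensing setting they would be replaced by fast transforms.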