Many applications arising in a variety of fields can be modeled as the task of recovering the low-rank and sparse components of a given matrix. Recently, it has been discovered that this NP-hard task can be accomplished, both theoretically and numerically, by solving a convex relaxation in which the widely acknowledged nuclear norm and ℓ1 norm are used to induce low rank and sparsity, respectively. In the literature, it is conventionally assumed that all entries of the matrix to be recovered are exactly known (via observation). To capture even more applications, this paper studies the recovery task in more general settings: only a fraction of the entries of the matrix can be observed, and the observation is corrupted by both impulsive and Gaussian noise. The resulting model falls within the applicable scope of the classical augmented Lagrangian method. Moreover, the separable structure of the new model enables us to solve the involved subproblems more efficiently by splitting the augmented Lagrangian function. Hence, some implementable numerical algorithms are developed in the spirit of the well-known alternating direction method and the parallel splitting augmented Lagrangian method. Preliminary numerical experiments verify that these augmented-Lagrangian-based algorithms are easy to implement and surprisingly efficient for tackling the new recovery model.
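The two norms mentioned above have simple closed-form proximal maps, which is what makes augmented-Lagrangian splitting practical for this task: entrywise soft-thresholding for the ℓ1 norm and singular value thresholding for the nuclear norm. The NumPy sketch below is illustrative only (the function names and the crude one-pass decomposition are ours, not the paper's algorithm):

```python
import numpy as np

def soft_threshold(X, tau):
    # Entrywise soft-thresholding: proximal operator of tau * ||.||_1.
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):
    # Singular value thresholding: proximal operator of tau * ||.||_*,
    # obtained by soft-thresholding the singular values.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# Toy usage: peel a matrix into a low-rank part and a sparse residual by
# applying the two shrinkage steps once (a sketch, not a convergent scheme).
rng = np.random.default_rng(0)
M = np.outer(rng.standard_normal(20), rng.standard_normal(20))  # rank-1 part
M[3, 7] += 10.0                                                 # sparse spike
L = svt(M, tau=1.0)
S = soft_threshold(M - L, tau=0.5)
```

In the splitting methods discussed in the abstract, these two operators solve the L- and S-subproblems of each augmented-Lagrangian iteration in closed form.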
Abstract. The nuclear norm is widely used to induce low-rank solutions for many optimization problems with matrix variables. Recently, it has been shown that the augmented Lagrangian method (ALM) and the alternating direction method (ADM) are very efficient for many convex programming problems arising from various applications, provided that the resulting subproblems are sufficiently simple to have closed-form solutions. In this paper, we are interested in the application of the ALM and the ADM to some minimization problems involving the nuclear norm. When the resulting subproblems do not have closed-form solutions, we propose to linearize these subproblems so that closed-form solutions of the linearized subproblems can be easily derived. Global convergence of the linearized ALM and ADM is established under standard assumptions. Finally, we verify the effectiveness and efficiency of these new methods by some numerical experiments.
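The linearization idea can be illustrated on the nuclear-norm-regularized least-squares problem min_X ||X||_* + 0.5 ||A X − B||_F^2: for a general linear map A the X-subproblem has no closed form, but linearizing the quadratic at the current iterate yields a proximal step that singular value thresholding solves exactly. A minimal NumPy sketch (our own function names, showing one such step rather than the paper's full linearized ALM/ADM):

```python
import numpy as np

def svt(X, tau):
    # Singular value thresholding: closed-form proximal map of tau * ||.||_*.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def linearized_nuclear_step(X, A, B, step):
    # One linearized update for  min_X ||X||_* + 0.5 * ||A @ X - B||_F^2.
    # The quadratic term is replaced by its linearization at X plus a
    # proximal term; the resulting subproblem is solved exactly by SVT.
    grad = A.T @ (A @ X - B)      # gradient of the smooth quadratic part
    return svt(X - step * grad, step)
```

With A equal to the identity the step reduces to plain singular value thresholding of B, which matches the closed-form case the abstract alludes to.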
In this paper, we study alternating direction methods for solving constrained total-variation image restoration and reconstruction problems. Alternating direction methods can be viewed as implementable variants of the classical augmented Lagrangian method for optimization problems with separable structures and linear constraints. The proposed framework allows us to solve problems of image restoration, impulse noise removal, inpainting, and image cartoon+texture decomposition. As the constrained model is employed, we only need to input the noise level, and estimation of the regularization parameter is not required in these imaging problems. Experimental results for such imaging problems are presented to illustrate the effectiveness of the proposed method. We show that the alternating direction method is very efficient for solving image restoration and reconstruction problems. In these equations, n is the number of pixels, ∇ : R^n → R^n × R^n is a discrete version of the gradient, |||·|||_1 represents a norm on R^n × R^n, ||·||_N is a norm or a semi-norm on R^m, A : R^n → R^m is a linear transform, and α and τ are positive real numbers that measure the trade-off between the fit to x_0 and the amount of regularization. Content of the paper: A number of numerical methods have been proposed for solving instances of problem (2), see e.g.
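For concreteness, one standard discrete choice for ∇ is forward differences with replicate (Neumann) boundary conditions, and the isotropic TV value is then |||∇u|||_1, the sum of the pointwise gradient magnitudes. A small NumPy sketch under those assumptions (the function names are ours, and this is one common discretization among several):

```python
import numpy as np

def grad(u):
    # Forward-difference discrete gradient of an image u (h x w) with
    # replicate boundary conditions; returns the two components (ux, uy).
    ux = np.zeros_like(u)
    uy = np.zeros_like(u)
    ux[:, :-1] = u[:, 1:] - u[:, :-1]   # horizontal differences
    uy[:-1, :] = u[1:, :] - u[:-1, :]   # vertical differences
    return ux, uy

def tv(u):
    # Isotropic total variation |||grad u|||_1: sum of gradient magnitudes.
    ux, uy = grad(u)
    return np.sqrt(ux**2 + uy**2).sum()
```

A constant image has TV zero, while a horizontal ramp accumulates one unit of variation per interior horizontal difference, which is the behavior the TV regularizer exploits to preserve flat regions and sharp edges.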
Abstract. The total variation (TV) model is attractive for being able to preserve sharp attributes in images. However, the restored images from TV-based methods do not usually stay in a given dynamic range, and hence projection is required to bring them back into the dynamic range for visual presentation or for storage in digital media. This will affect the accuracy of the restoration, as the projected image will no longer be the minimizer of the given TV model. In this paper, we show that one can get much more accurate solutions by imposing box constraints on the TV models and solving the resulting constrained models. Our numerical results show that for some images where there are many pixels with values lying on the boundary of the dynamic range, the gain can be as great as 10.28 dB in peak signal-to-noise ratio. One traditional hindrance to using the constrained model is that it is difficult to solve. However, in this paper, we propose to use the alternating direction method of multipliers (ADMM) to solve the constrained models. This leads to a fast and convergent algorithm that is applicable for both Gaussian and impulse noise. Numerical results show that our ADMM algorithm is better than some state-of-the-art algorithms for unconstrained models, both in terms of accuracy and robustness with respect to the regularization parameter.
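The reason the box-constrained model remains tractable under ADMM is that, after splitting, the box constraint enters only through a projection onto the dynamic range (a clip), which is closed-form and applied inside every iteration rather than once after the fact. A toy sketch on the simplest constrained problem min_u 0.5 ||u − f||^2 s.t. u ∈ [lo, hi] (our own illustration, not the paper's TV model):

```python
import numpy as np

def admm_box_denoise(f, lo=0.0, hi=255.0, rho=1.0, iters=50):
    # ADMM on  min_u 0.5*||u - f||^2  s.t.  lo <= u <= hi,
    # via the splitting u = v with v constrained to the box.
    # Both subproblems are closed-form: a scalar linear solve for u
    # and a projection (clip) onto the dynamic range for v.
    u = f.copy()
    v = np.clip(f, lo, hi)
    y = np.zeros_like(f)                       # scaled dual variable
    for _ in range(iters):
        u = (f + rho * (v - y)) / (1.0 + rho)  # u-subproblem
        v = np.clip(u + y, lo, hi)             # v-subproblem: projection
        y = y + u - v                          # dual update
    return v
```

For this toy objective the constrained minimizer is simply the clip of f, so the iteration can be checked against that; in the TV setting the u-subproblem instead involves the TV term, but the box constraint is still handled by exactly this kind of projection step.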