Our work considers the minimization of the sum of a non-smooth convex function and a finite family of composite convex functions, each of which is the composition of a convex function with a bounded linear operator. This type of problem arises in many challenging applications in image restoration and image reconstruction. We develop a splitting primal-dual proximity algorithm to solve this problem. Further, we propose a preconditioned variant whose iterative parameters are obtained without needing to know any particular operator norm in advance. Theoretical convergence theorems are presented. We then apply the proposed methods to a total variation regularization model in which an L2 data-fidelity term is combined with an L1 data-fidelity term. The main advantage of this model is its ability to combine different loss functions. Numerical results for computed tomography (CT) image reconstruction demonstrate the ability of the proposed algorithm to reconstruct an image from few, sparsely distributed projection views while maintaining image quality.
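To make the structure of such a primal-dual proximity iteration concrete, here is a minimal numpy sketch for a toy 1-D instance of the mixed-fidelity TV model, min_x (α/2)‖x − b‖_2^2 + β‖x − b‖_1 + λ‖Dx‖_1, using a standard Chambolle–Pock-style update. The difference operator D, the step sizes, and the toy data are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

# Toy 1-D instance of  min_x (alpha/2)*||x - b||_2^2 + beta*||x - b||_1 + lam*||D x||_1,
# i.e. a proximable term plus two composite nonsmooth terms (operators D and I).
rng = np.random.default_rng(0)
n = 200
clean = np.repeat(rng.normal(size=5), n // 5)      # piecewise-constant ground truth
b = clean + 0.1 * rng.normal(size=n)               # noisy observation

alpha, beta, lam = 1.0, 0.5, 0.8
D = lambda x: np.diff(x)                           # forward differences (K_1)
Dt = lambda y: np.concatenate(([-y[0]], -np.diff(y), [y[-1]]))  # adjoint of D
L = np.sqrt(5.0)                                   # bound on ||[D; I]||: ||D||^2 <= 4
tau = sigma = 0.99 / L                             # so that tau*sigma*L^2 < 1

x, y1, y2 = b.copy(), np.zeros(n - 1), np.zeros(n)
for _ in range(500):
    x_old = x
    # primal step: prox of the quadratic data term (closed form)
    x = (x - tau * (Dt(y1) + y2) + tau * alpha * b) / (1.0 + tau * alpha)
    xb = 2.0 * x - x_old                           # extrapolation
    # dual steps: projections onto the l_inf balls of the conjugate functions
    y1 = np.clip(y1 + sigma * D(xb), -lam, lam)
    y2 = np.clip(y2 + sigma * (xb - b), -beta, beta)
```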
Recently, the ℓ_p-norm regularization minimization problem (P_λ^p) has attracted great attention in compressed sensing. However, the ℓ_p-norm ‖x‖_p^p in problem (P_λ^p) is nonconvex and non-Lipschitz for all p ∈ (0, 1), and few optimization theories and methods have been proposed to solve this problem. In fact, the problem is NP-hard for all p ∈ (0, 1) and λ > 0. In this paper, we study two modified ℓ_p regularization minimization problems that approximate the NP-hard problem (P_λ^p). Inspired by the good performance of the Half algorithm and the 2/3 algorithm on some sparse signal recovery problems, two iterative thresholding algorithms are proposed to solve the problems (P_{λ,1/2,ε}^p) and (P_{λ,2/3,ε}^p), respectively. Numerical results show that our algorithms effectively find the sparse signal in some sparse signal recovery problems for suitable p ∈ (0, 1).
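For background, the Half algorithm mentioned here is the iterative half thresholding of Xu et al. for the ℓ_{1/2} penalty; a minimal numpy sketch follows. The closed-form half-threshold operator and the λμ coupling are as given in that line of work, and the modified problems (P_{λ,1/2,ε}^p) of this paper may use a different (smoothed) operator, so treat this as background rather than the paper's exact method.

```python
import numpy as np

def half_threshold(t, lam):
    """Closed-form thresholding operator for the l_{1/2} penalty (Xu et al. 2012)."""
    out = np.zeros_like(t)
    big = np.abs(t) > (54.0 ** (1.0 / 3.0) / 4.0) * lam ** (2.0 / 3.0)
    phi = np.arccos((lam / 8.0) * (np.abs(t[big]) / 3.0) ** (-1.5))
    out[big] = (2.0 / 3.0) * t[big] * (1.0 + np.cos(2.0 * np.pi / 3.0 - 2.0 * phi / 3.0))
    return out

def iterative_half_thresholding(A, b, lam, mu, iters=1000):
    """x^{k+1} = H_{lam*mu}( x^k + mu * A^T (b - A x^k) ), with mu < 1/||A||^2."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = half_threshold(x + mu * A.T @ (b - A @ x), lam * mu)
    return x
```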
We introduce a preconditioning technique for the first-order primal-dual splitting method. The primal-dual splitting method offers a very general framework for solving a large class of optimization problems arising in image processing. The key idea of the preconditioning technique is to replace the constant iterative parameters with parameters that are updated self-adaptively during the iteration. We also give a simple way to choose the diagonal preconditioners while maintaining the convergence of the iterative algorithm. The efficiency of the proposed method is demonstrated on an image denoising problem. Numerical results show that the preconditioned iterative algorithm performs better than the original one.
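The abstract does not spell out the parameter choice, but the classical diagonal preconditioners of Pock and Chambolle (2011), on which this line of work builds, can be computed from row and column sums of |K| alone, with no operator norm. A short sketch is below; the parameter alpha and the small tolerance are illustrative assumptions.

```python
import numpy as np

def diagonal_preconditioners(K, alpha=1.0):
    """Diagonal step sizes in the style of Pock & Chambolle (2011):
    sigma_i = 1 / sum_j |K_ij|^alpha      (dual step, one per row),
    tau_j   = 1 / sum_i |K_ij|^(2-alpha)  (primal step, one per column).
    These satisfy the convergence condition without knowing ||K||."""
    absK = np.abs(K)
    sigma = 1.0 / np.maximum(np.sum(absK ** alpha, axis=1), 1e-12)
    tau = 1.0 / np.maximum(np.sum(absK ** (2.0 - alpha), axis=0), 1e-12)
    return tau, sigma

# Example: per-coordinate step sizes for a random operator.
K = np.random.default_rng(1).normal(size=(30, 50))
tau, sigma = diagonal_preconditioners(K)
```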
In this paper we consider the problem of minimizing the sum of a convex function and the composition of another convex function with a continuous linear operator, from the viewpoint of fixed point algorithms based on proximity operators. We design a primal-dual fixed point algorithm based on the proximity operator with dynamic stepsize (PDFP²O_DS, with stepsize sequence {a_n} ⊂ (0, 1)) and obtain a scheme with a closed-form solution for each iteration. Based on the modified Mann iteration and the firm nonexpansiveness of the proximity operator, we establish the convergence of the proposed PDFP²O_DS algorithm. Moreover, under some stronger assumptions, we prove the global linear convergence of the proposed algorithm. We also give the connection of the proposed algorithm with other existing first-order methods and with the fixed point algorithms FP²O (Micchelli et al 2011 Inverse Problems 27 045009) and PDFP²O (Chen et al 2013 Inverse Problems 29). Finally, we illustrate the efficiency of PDFP²O_DS through numerical examples on the CT image reconstruction problem. Generally speaking, PDFP²O_DS is comparable with other state-of-the-art methods in numerical performance, while it has advantages in parameter selection for real applications and converges faster than PDFP²O.
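As a rough illustration of the scheme's structure (not a verbatim transcription of the paper), a PDFP²O-type iteration for min_x f_1(Bx) + f_2(x), with f_2 smooth, alternates a gradient step on f_2, a dual update through I − prox, and a primal correction by B^T; the dynamic-stepsize variant then relaxes the update with a Mann-type sequence a_n ∈ (0, 1). The function names and the exact relaxation below are my assumptions.

```python
import numpy as np

def pdfp2o_ds(grad_f2, prox_f1, B, Bt, x0, v0, gamma, lam, a, iters=300):
    """Sketch of a PDFP2O-type iteration with Mann relaxation a(n) in (0, 1).
    Assumes gamma < 2*beta (beta from the Lipschitz gradient of f_2) and
    lam <= 1/||B||^2; prox_f1(w, s) is the proximity operator of s*f_1."""
    x, v = x0.copy(), v0.copy()
    for n in range(iters):
        xg = x - gamma * grad_f2(x)            # explicit gradient step on f_2
        w = B(xg) + v - lam * B(Bt(v))         # dual pre-update
        v_new = w - prox_f1(w, gamma / lam)    # (I - prox_{(gamma/lam) f_1})(w)
        x_new = xg - lam * Bt(v_new)           # primal correction
        an = a(n)                              # dynamic stepsize / relaxation
        x = (1.0 - an) * x + an * x_new
        v = (1.0 - an) * v + an * v_new
    return x
```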
Many practical problems in the real world can be formulated as non-negative ℓ_0-minimisation problems, which seek the sparsest non-negative solutions to underdetermined linear equations. They have been widely applied in signal and image processing, machine learning, pattern recognition and computer vision. Unfortunately, this non-negative ℓ_0-minimisation problem is NP-hard because of the discrete and discontinuous nature of the ℓ_0-norm. Inspired by the good performance of the fraction function in the authors' former work, in this paper the authors replace the ℓ_0-norm with the non-convex fraction function and study the minimisation problem of the fraction function in recovering a sparse non-negative signal from an underdetermined linear equation. They discuss the equivalence between the non-negative ℓ_0-minimisation problem and the non-negative fraction function minimisation problem, and the equivalence between the non-negative fraction function minimisation problem and the regularised non-negative fraction function minimisation problem. It is proved that the optimal solution to the non-negative ℓ_0-minimisation problem can be approximately obtained by solving the regularised non-negative fraction function minimisation problem if certain conditions are satisfied. Then, they propose a non-negative iterative thresholding algorithm to solve the regularised non-negative fraction function minimisation problem. Finally, numerical experiments on some sparse non-negative signal recovery problems are reported.
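Structurally, such a non-negative iterative thresholding scheme interleaves a gradient (Landweber) step, a thresholding step, and a projection onto the non-negative orthant. The sketch below shows that skeleton only; since the authors' closed-form fraction-function threshold is not reproduced here, a plain hard threshold (the prox of the ℓ_0 penalty) stands in for it.

```python
import numpy as np

def nonneg_iterative_thresholding(A, b, lam, mu, iters=500):
    """Skeleton: gradient step, threshold, then project onto x >= 0.
    Hard thresholding (prox of lam*mu*||.||_0, level sqrt(2*lam*mu)) is an
    illustrative stand-in for the paper's fraction-function threshold."""
    x = np.zeros(A.shape[1])
    level = np.sqrt(2.0 * lam * mu)
    for _ in range(iters):
        z = x + mu * A.T @ (b - A @ x)   # Landweber step, mu < 1/||A||^2
        z[np.abs(z) <= level] = 0.0      # thresholding step (stand-in)
        x = np.maximum(z, 0.0)           # non-negativity projection
    return x
```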