We present two modified versions of the primal-dual splitting algorithm relying on forward-backward splitting proposed in [21] for solving monotone inclusion problems. Under strong monotonicity assumptions on some of the operators involved, we obtain, for the sequences of iterates approaching the solution, orders of convergence of O(1/n) and O(ω^n), for ω ∈ (0, 1), respectively. The investigated primal-dual algorithms are fully decomposable, in the sense that the operators are processed individually at each iteration. We also discuss the modified algorithms in the context of convex optimization problems and present numerical experiments in image processing and support vector machines classification.
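The forward-backward scheme underlying such algorithms can be illustrated on a toy monotone inclusion 0 ∈ Ax + Bx, alternating an explicit step on B with a resolvent step on A. The sketch below is a minimal illustration, not the paper's modified algorithm: it assumes the hypothetical choices A = λ·∂‖·‖₁ (whose resolvent is componentwise soft-thresholding) and Bx = x − b (the gradient of ½‖x − b‖², which is strongly monotone).

```python
# Forward-backward iteration x_{k+1} = J_{gamma*A}(x_k - gamma * B(x_k))
# for the inclusion 0 in A x + B x, with the illustrative (hypothetical) choices
#   A = lam * subdifferential of the l1-norm  (resolvent = soft-thresholding)
#   B x = x - b                               (gradient of 0.5*||x - b||^2)

def soft_threshold(v, t):
    """Resolvent of t * subdifferential(|.|), applied componentwise."""
    return [max(abs(vi) - t, 0.0) * (1.0 if vi >= 0 else -1.0) for vi in v]

def forward_backward(b, lam, gamma=0.5, iters=200):
    x = [0.0] * len(b)
    for _ in range(iters):
        forward = [xi - gamma * (xi - bi) for xi, bi in zip(x, b)]  # explicit step on B
        x = soft_threshold(forward, gamma * lam)                    # resolvent step on A
    return x

# The unique zero is the soft-thresholding of b at level lam.
x = forward_backward(b=[2.0, -0.5, 1.0], lam=1.0)
```

Because B here is strongly monotone, the distance to the solution shrinks by a constant factor per iteration, which is the kind of linear (O(ω^n)) behavior the strong monotonicity assumptions in the abstract refer to.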
We consider the primal problem of finding the zeros of the sum of a maximal monotone operator and the composition of another maximal monotone operator with a linear continuous operator. By formulating its Attouch-Théra-type dual inclusion problem, a primal-dual splitting algorithm which simultaneously solves the two problems in finite-dimensional spaces is presented. The proposed scheme uses at each iteration the resolvents of the maximal monotone operators involved in separate steps and aims to overcome the shortcoming of classical splitting algorithms when dealing with compositions of maximal monotone and linear continuous operators. The iterative algorithm is used for solving nondifferentiable convex optimization problems arising in image processing and in location theory.
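A primal-dual splitting of this general type (here written in the well-known Chambolle-Pock form, as a stand-in sketch rather than the paper's exact scheme) evaluates the resolvents of the two maximal monotone operators in separate steps and touches the linear operator L only through matrix-vector products. The problem data below, f(x) = ½‖x − b‖² and g(z) = λ‖z‖₁ composed with L, are hypothetical toy choices.

```python
# Minimal sketch of a primal-dual splitting for min_x f(x) + g(Lx),
# with the illustrative choices f(x) = 0.5*||x - b||^2, g(z) = lam*||z||_1.
# Each iteration uses the resolvent (prox) of f and of g* in separate steps,
# and L, L^T appear only via matrix-vector products.

def matvec(M, v):
    return [sum(mij * vj for mij, vj in zip(row, v)) for row in M]

def transpose(M):
    return [list(col) for col in zip(*M)]

def primal_dual(L, b, lam, tau=0.25, sigma=0.25, iters=5000):
    # step sizes chosen so that tau * sigma * ||L||^2 <= 1
    x = [0.0] * len(b)
    y = [0.0] * len(L)
    Lt = transpose(L)
    for _ in range(iters):
        x_old = x
        v = [xi - tau * li for xi, li in zip(x, matvec(Lt, y))]
        x = [(vi + tau * bi) / (1.0 + tau) for vi, bi in zip(v, b)]  # prox of tau*f
        x_bar = [2.0 * xi - xo for xi, xo in zip(x, x_old)]          # extrapolation
        w = [yi + sigma * li for yi, li in zip(y, matvec(L, x_bar))]
        y = [min(max(wi, -lam), lam) for wi in w]  # prox of sigma*g*: projection onto [-lam, lam]
    return x

# Sanity check: with L = 2*I the minimizer is soft-thresholding of b at level 2*lam.
x = primal_dual(L=[[2.0, 0.0, 0.0], [0.0, 2.0, 0.0], [0.0, 0.0, 2.0]],
                b=[3.0, 0.5, -2.0], lam=0.5)
```

The key point matching the abstract: g and L are never combined into a single resolvent computation, which is exactly the shortcoming of classical splitting schemes that the proposed algorithm avoids.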
Supervised learning methods are powerful techniques for learning a function from a given set of labeled data, the so-called training data. In this paper the support vector machines approach is applied to an image classification task. Starting with the corresponding Tikhonov regularization problem, reformulated as a convex optimization problem, we introduce its conjugate dual problem and prove that, whenever strong duality holds, the function to be learned can be expressed via the dual optimal solutions. Corresponding dual problems are then derived for different loss functions. The theoretical results are applied by numerically solving a classification task on high-dimensional real-world data in order to obtain optimal classifiers. The results demonstrate the excellent performance of support vector classification for this particular problem.
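The idea of recovering the classifier from dual optimal solutions can be sketched with a soft-margin SVM (no bias term, hinge loss) trained by coordinate ascent on its dual, where the learned function is w = Σᵢ αᵢ yᵢ xᵢ. The dataset and the parameter C below are hypothetical toy choices, not the paper's data.

```python
# Hedged sketch: soft-margin SVM without bias, solved via its dual
#   max_a  sum_i a_i - 0.5 * || sum_i a_i y_i x_i ||^2,   0 <= a_i <= C,
# by cyclic coordinate ascent.  The classifier is recovered from the dual
# optimal solution as w = sum_i a_i y_i x_i.

def train_dual_svm(X, y, C=1.0, epochs=50):
    n, d = len(X), len(X[0])
    alpha = [0.0] * n
    w = [0.0] * d  # maintained incrementally as sum_i alpha_i * y_i * x_i
    sq_norms = [sum(v * v for v in xi) for xi in X]
    for _ in range(epochs):
        for i in range(n):
            # gradient of the dual objective in coordinate i, up to sign
            g = y[i] * sum(wj * xij for wj, xij in zip(w, X[i])) - 1.0
            new_ai = min(max(alpha[i] - g / sq_norms[i], 0.0), C)  # clipped exact step
            delta = new_ai - alpha[i]
            alpha[i] = new_ai
            w = [wj + delta * y[i] * xij for wj, xij in zip(w, X[i])]
    return w, alpha

# Hypothetical linearly separable toy data.
X = [[2.0, 2.0], [1.0, 3.0], [-2.0, -1.0], [-1.0, -2.0]]
y = [1.0, 1.0, -1.0, -1.0]
w, alpha = train_dual_svm(X, y)
predictions = [1.0 if sum(wj * xij for wj, xij in zip(w, xi)) >= 0 else -1.0
               for xi in X]
```

This mirrors the structure described in the abstract: the primal regularization problem is never solved directly; the function to be learned is assembled from the dual optimal solution.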
MSC: 47A52, 90C25, 49N15