“…Now observe that, according to (32) and (29), we have that for each solution of ( ) there exists a sequence…”
Section: Convergence and Stability of a Regularization Method for Max…
“…v ∈ Ω for all k ∈ ℕ, and satisfies (32). To this end, note that each is a nonempty, closed and convex subset of…”
“…According to (68), condition (32) holds too. This shows that Theorem 3.2 is applicable to F and to the functions F_k.…”
“…It is an interesting open problem to evaluate the rate of convergence of the regularization scheme discussed in this work, similarly to the way such rates were evaluated for alternative regularization methods by Kaplan and Tichatschke [34], [33], [32] and [42]. Such an evaluation may help decide for which types of problems and in which settings application of the regularization scheme (4) is efficient.…”
Abstract: In this paper we study the stability and convergence of a regularization method for solving inclusions f ∈ Ax, where A is a maximal monotone point-to-set operator from a reflexive smooth Banach space X with the Kadec–Klee property to its dual. We assume that the data A and f involved in the inclusion are given by approximations A_k and f_k converging to A and f, respectively, in the sense of Mosco-type topologies. We prove that the sequence which results from the regularization process converges weakly and, under some conditions, converges strongly to the minimum-norm solution of the inclusion f ∈ Ax, provided that the inclusion is consistent. These results lead to a regularization procedure for perturbed convex optimization problems whose objective functions and feasibility sets are given by approximations. In particular, we obtain a strongly convergent version of the generalized proximal point optimization algorithm which is applicable to problems whose feasibility sets are given by Mosco approximations.
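The regularization idea described in the abstract can be illustrated in the simplest setting: a finite-dimensional Hilbert space, where the duality mapping is the identity. The following is a minimal sketch, not the paper's Banach-space method; the operator A, the data f, and the parameter alpha are illustrative choices, and the Tikhonov-type perturbation A + alpha·I stands in for the general regularization scheme.

```python
import numpy as np

# Minimal sketch of Tikhonov-type regularization for the inclusion f ∈ Ax
# in the finite-dimensional Hilbert-space case (X = R^n, duality map = I).
# A is a positive-semidefinite matrix, so x -> Ax is maximal monotone.
# The regularized equation (A + alpha*I) x = f is strongly monotone, hence
# uniquely solvable, and its solution tends to the minimum-norm solution
# of A x = f as alpha -> 0, provided the inclusion is consistent.

def regularized_solution(A, f, alpha):
    """Solve the strongly monotone perturbed equation (A + alpha*I) x = f."""
    n = A.shape[0]
    return np.linalg.solve(A + alpha * np.eye(n), f)

# Singular example: A x = f has the solutions (1, t) for every real t;
# the minimum-norm solution is (1, 0).
A = np.array([[1.0, 0.0],
              [0.0, 0.0]])
f = np.array([1.0, 0.0])

for alpha in (1e-1, 1e-3, 1e-6):
    x = regularized_solution(A, f, alpha)
    print(alpha, x)  # approaches [1, 0] as alpha shrinks
```

The example also shows why regularization is needed at all: for singular A, plain `np.linalg.solve(A, f)` fails, while the perturbed equation selects a distinguished (minimum-norm) solution in the limit.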
“…Proceeding from the general framework (2) and the convergence results in [22,25,26,27,28], we revise here some ideas originally developed for the improvement of certain proximal-like methods and their application to some classes of problems. In this way, these ideas can be extended to a wide class of proximal methods as well as to the APP.…”
Summary. We discuss some ideas for the improvement, extension and application of proximal point methods and the auxiliary problem principle to variational inequalities in Hilbert spaces. These methods are closely related and will be joined in a general framework, which admits a consecutive approximation of the problem data, including applications of finite element techniques and the ε-enlargement of monotone operators. With the use of a "reserve of monotonicity" of the operator in the variational inequality, the concepts of weak and elliptic proximal regularization are developed. Considering Bregman-function-based proximal methods, we analyze their convergence under a relaxed error tolerance criterion in the subproblems. Moreover, the case of variational inequalities with non-paramonotone operators is investigated, and an extension of the auxiliary problem principle with the use of Bregman functions is studied. To emphasize the basic ideas, we omit the proofs and, in places, the precise descriptions of the convergence results and approximation techniques. Those can be found in the cited papers.
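The classical proximal point iteration that this summary builds on can be sketched as follows. This is a minimal one-dimensional illustration with a nonsmooth convex objective whose proximal map is known in closed form, not the Bregman-based or ε-enlargement variants analyzed in the paper; the objective f(x) = |x − 3| and the step size are illustrative choices.

```python
# Minimal sketch of the classical (Euclidean) proximal point method
#     x_{k+1} = argmin_x  f(x) + (1 / (2*lam)) * (x - x_k)**2
# for the convex, nonsmooth objective f(x) = |x - c|. Its proximal map
# is the soft-thresholding operator shifted to c. The Bregman-function-
# based methods discussed in the summary replace the quadratic term
# above by a Bregman distance D_h(x, x_k).

def prox_abs_shifted(v, lam, c=3.0):
    """Proximal map of f(x) = |x - c| with step lam (soft thresholding)."""
    d = v - c
    if d > lam:
        return v - lam
    if d < -lam:
        return v + lam
    return c

x = 10.0
for _ in range(20):
    x = prox_abs_shifted(x, lam=1.0)
print(x)  # the iterates reach the minimizer x* = 3
```

Each step moves the iterate a bounded distance toward the minimizer and then stays there, which is the prototypical behavior that the relaxed error-tolerance criteria in the summary perturb: inexact subproblem solutions still converge as long as the accumulated errors are controlled.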