We consider the vector optimization problem of finding weakly efficient points for maps from a Hilbert space X to a Banach space Y, with respect to the partial order induced by a closed, convex, and pointed cone C ⊂ Y with nonempty interior. We develop for this problem an extension of the proximal point method for scalar-valued convex optimization. In this extension, the subproblems consist of finding weakly efficient points for suitable regularizations of the original map. We present both an exact version and an inexact one, in which the subproblems are solved only approximately, within a constant relative tolerance. In both cases, we prove weak convergence of the generated sequence to a weakly efficient point, assuming convexity of the map with respect to C and C-completeness of the initial section. In cases where this last assumption fails, we still establish that the generated sequence is, in a suitable sense, a minimizing one. We also exhibit a particular instance of the algorithm for which, under a mild hypothesis on C, the weak limit of the generated sequence is an efficient, rather than a weakly efficient, point.
Introduction. We discuss methods for vector-valued optimization in the following setting. We consider maps F : X → Y, where X is a real Hilbert space and Y is a real Banach space containing a closed, convex, and pointed cone C with nonempty interior, which defines a partial order ⪯_C in Y, given by y ⪯_C y′ if and only if y′ − y belongs to C, with its associated strict relation ≺_C, given by y ≺_C y′ if and only if y′ − y belongs to the interior int(C) of C. Actually, we admit the possibility that F takes the value +∞; this is made precise in section 2. Our goal is to analyze methods for finding a weakly efficient minimizer of F with respect to ⪯_C, meaning a point a ∈ X such that there exists no x ∈ X satisfying F(x) ≺_C F(a).

This paper is part of a wider research program consisting of the extension to vector-valued optimization of several iterative methods for scalar-valued optimization. In these extensions, we attempt to define the iterates in the vector-valued case by considering the order ⪯_C in Y, mimicking, whenever possible, the role of the usual order in R in the corresponding algorithm for scalar-valued optimization.

Several methods have already been extended in this fashion in a finite dimensional setting. The case of the steepest descent method for multiobjective optimization (i.e., when C is the nonnegative orthant of R^n) was dealt with in [7]; the same method for general finite dimensional vector optimization (i.e., for partial orders given by rather general cones in R^n) is analyzed in [10]. An extension of the projected gradient method to the case of convexly constrained vector optimization, for the order given by a rather general cone in R^n, can be found in [9].
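To make the notion of weak efficiency concrete, here is a minimal finite-dimensional sketch (an illustration only, not part of the paper's infinite-dimensional setting; the names `is_weakly_efficient`, `F`, and `grid` are ours). With C the nonnegative orthant of R^2, a point a is weakly efficient when no candidate x satisfies F(x) ≺_C F(a), i.e., when no x makes every component of F(x) strictly smaller than the corresponding component of F(a).

```python
def is_weakly_efficient(a, candidates, F):
    """True if no candidate x != a satisfies F(x) < F(a) componentwise,
    i.e., F(a) - F(x) lies in the interior of the nonnegative orthant."""
    Fa = F(a)
    return not any(
        all(fx_i < fa_i for fx_i, fa_i in zip(F(x), Fa))
        for x in candidates
        if x != a
    )

# Toy bi-objective map F(x) = (x^2, (x - 2)^2); its (weakly) efficient
# set on the real line is the interval [0, 2].
F = lambda x: (x**2, (x - 2.0) ** 2)
grid = [i / 100.0 for i in range(-100, 301)]  # candidates in [-1, 3]

print(is_weakly_efficient(1.0, grid, F))   # inside [0, 2] -> True
print(is_weakly_efficient(-1.0, grid, F))  # dominated, e.g., by x = 0 -> False
```

The same check with "<" replaced by "<=" (and excluding equal objective values) would characterize efficient, rather than weakly efficient, points, mirroring the distinction drawn in the abstract.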