Abstract: In this paper, we propose an extension of the classical Frank-Wolfe method for solving constrained vector optimization problems with respect to a partial order induced by a closed, convex and pointed cone with nonempty interior. In the proposed method, the construction of the auxiliary subproblem is based on the well-known oriented distance function. Two types of stepsize strategies, Armijo line search and an adaptive stepsize, are used. It is shown that every accumulation point of the generated sequences sa…
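For orientation, a minimal single-objective sketch of the classical Frank-Wolfe method with Armijo line search is given below. This is only an illustrative scalar analogue: the paper's actual method handles vector-valued objectives via an oriented distance function, which is not reproduced here, and the feasible set (the unit simplex), the test objective, and the Armijo parameters are all assumptions for the example.

```python
import numpy as np

def frank_wolfe_simplex(grad_f, f, x0, max_iter=200, tol=1e-8):
    """Classical (scalar) Frank-Wolfe over the unit simplex.

    The linear minimization oracle over the simplex returns the vertex
    e_i whose gradient coordinate is smallest; the stepsize is chosen
    by Armijo backtracking on the duality gap.
    """
    x = x0.copy()
    for _ in range(max_iter):
        g = grad_f(x)
        s = np.zeros_like(x)
        s[np.argmin(g)] = 1.0            # linear minimization oracle
        d = s - x                        # Frank-Wolfe direction
        gap = -g @ d                     # duality gap; f(x) - f* <= gap
        if gap < tol:
            break
        # Armijo backtracking: shrink t until sufficient decrease holds
        t, beta, sigma = 1.0, 0.5, 1e-4
        while f(x + t * d) > f(x) - sigma * t * gap:
            t *= beta
        x = x + t * d
    return x

# Minimize ||x - c||^2 over the simplex; c lies inside, so the optimum is c.
c = np.array([0.2, 0.3, 0.5])
x_star = frank_wolfe_simplex(lambda x: 2 * (x - c),
                             lambda x: np.sum((x - c) ** 2),
                             np.array([1.0, 0.0, 0.0]))
```

Because the minimizer lies in the relative interior of the simplex, Frank-Wolfe with line search converges quickly here; the duality gap also doubles as a natural stopping criterion.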
“…, the (L, C, e)-smoothness reduces to condition (A) in [31]. If C = R_+, the (L, C, e)-smoothness and (µ, C, e)-strong convexity correspond to relative L-smoothness and µ-strong convexity in [21], respectively.…”
Section: Relative Smoothness and Relative Strong Convexity
In recent years, by using the Bregman distance, Lipschitz gradient continuity and strong convexity have been relaxed to relative smoothness and relative strong convexity. Under mild assumptions, it was proved that gradient methods with Bregman regularity converge linearly for single-objective optimization problems (SOPs). In this paper, we extend relative smoothness and relative strong convexity to vector-valued functions and analyze the convergence of an interior Bregman gradient method for vector optimization problems (VOPs). Specifically, the global convergence rates are O(1/k) and O(r^k) (0 < r < 1) for convex and relatively strongly convex VOPs, respectively. Moreover, the proposed method converges linearly for VOPs that satisfy a vector Bregman-PL inequality.
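The Bregman gradient step has a closed form for some kernels. The sketch below shows the single-objective case with the negative-entropy kernel h(x) = Σ x_i log x_i on the unit simplex, for which the Bregman update is the familiar multiplicative (mirror descent) rule; iterates stay in the interior of the simplex, as the "interior" in the method's name suggests. The kernel choice, step size, and test objective are assumptions for illustration; the paper's vector-valued extension is not reproduced here.

```python
import numpy as np

def bregman_gradient_simplex(grad_f, x0, step=0.5, max_iter=500):
    """Single-objective interior Bregman gradient (mirror descent) sketch.

    Each iteration solves
        x_{k+1} = argmin_z <grad_f(x_k), z> + (1/step) * D_h(z, x_k)
    over the simplex; for the entropy kernel this is the closed-form
    multiplicative update below. `step` plays the role of 1/L for a
    function that is L-smooth relative to h.
    """
    x = x0.copy()
    for _ in range(max_iter):
        g = grad_f(x)
        x = x * np.exp(-step * g)   # multiplicative Bregman step
        x /= x.sum()                # renormalize onto the simplex
    return x

# Minimize f(x) = <c, x> + sum_i x_i log x_i over the simplex;
# the minimizer is the softmax of -c.
c = np.array([1.0, 2.0, 3.0])
grad = lambda x: c + 1.0 + np.log(x)
x_star = bregman_gradient_simplex(grad, np.ones(3) / 3)
```

This objective is 1-smooth relative to the entropy kernel (it is a linear term plus h itself), so any step size up to 1 converges; the kernel thus replaces the global Lipschitz-gradient requirement that plain gradient descent would need.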