“…Clearly, x_t ∈ U for sufficiently small t > 0, and it follows from Remark 2.5 that x_t ∈ S. From ȳ ∈ F(x̄) + E and condition (iii), we deduce that ȳ ∈ F(x_t) + E, that is, x_t ∈ F⁻¹(ȳ − E). Therefore, (12) holds for sufficiently small t > 0. The combination of (11) and (12) contradicts the fact that (x̄, ȳ) is a strict local optimal point of ϕ-SVOP.…”
Section: Theorem 33
“…The E-optimal solution unifies several known exact and approximate solution notions in vector optimization problems. To date, a number of applications of improvement sets in vector optimization have been investigated (see [1–13] and the references therein). It is worth noticing that Zhao et al. [5] used the improvement set to introduce the E-optimal solution and the weak E-optimal solution of the constrained set-valued optimization problem (SVOP for short) and established scalarization theorems and Lagrange multiplier theorems for weak E-optimal solutions under the assumption of near E-subconvexlikeness.…”
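For orientation, the E-optimal notions mentioned above are commonly stated as follows. This is a sketch consistent with the notation used in these excerpts (e.g. x_t ∈ F⁻¹(ȳ − E) in the proof fragment); the precise formulation in [5] may differ in detail.

```latex
% Constrained set-valued optimization problem (SVOP):
%   min F(x)  subject to  x \in S,
% where F maps X into subsets of an ordered space Y, and E \subseteq Y is an
% improvement set (0 \notin E and E + K = E for the ordering cone K).
% A pair (\bar{x}, \bar{y}) with \bar{x} \in S and \bar{y} \in F(\bar{x}) is an
% E-optimal solution of SVOP if
\[
  F(S) \cap (\bar{y} - E) = \emptyset,
\]
% and a weak E-optimal solution if the same holds with the interior of E:
\[
  F(S) \cap (\bar{y} - \operatorname{int} E) = \emptyset .
\]
```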
Section: Introduction
“…Moreover, the disjointness of the two suitable sets can be proved by verifying that they lie in two disjoint level sets of a suitable separation function. Various linear and nonlinear functions were proposed in [12,13,18,20–25] and shown to be weak, regular weak, or strong separation functions. Recently, Chinaie and Zafarani [26] proposed two kinds of separation functions that unify the known linear and nonlinear separation functions.…”
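The level-set argument behind this disjointness can be sketched in generic form (the specific separation functions of the cited works vary):

```latex
% Let H and K be the two subsets of the image space whose disjointness
% encodes optimality, and let w(\cdot;\omega) be a separation function
% with parameter \omega. If, for some \bar{\omega},
\[
  H \subseteq \{\, z : w(z;\bar{\omega}) \ge 0 \,\}
  \quad\text{and}\quad
  K \subseteq \{\, z : w(z;\bar{\omega}) < 0 \,\},
\]
% then H and K lie in disjoint level sets of w(\cdot;\bar{\omega}),
% and hence H \cap K = \emptyset.
```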
Section: Introduction
“…Recently, Chinaie and Zafarani [26] proposed two kinds of separation functions that unify the known linear and nonlinear separation functions. By virtue of ISA, several authors have investigated regularity, duality and optimality conditions for constrained extremum problems (see [12,13,18,20–24,26–31] and the references therein). It is noteworthy that the authors of [12] used the improvement set and the oriented (signed) distance function to introduce a nonlinear vector regular weak separation function and a nonlinear scalar weak separation function, and obtained Lagrangian-type optimality conditions in the sense of vector separation and scalar separation, respectively.…”
Section: Introduction
“…By virtue of ISA, several authors have investigated regularity, duality and optimality conditions for constrained extremum problems (see [12,13,18,20–24,26–31] and the references therein). It is noteworthy that the authors of [12] used the improvement set and the oriented (signed) distance function to introduce a nonlinear vector regular weak separation function and a nonlinear scalar weak separation function, and obtained Lagrangian-type optimality conditions in the sense of vector separation and scalar separation, respectively. In [13], Chen et al. also defined a vector-valued regular weak separation function and a scalar weak separation function via the improvement set and obtained optimality conditions in terms of the E-optimal solution for multiobjective optimization with general constraints.…”
In this paper, we aim to apply improvement sets and image space analysis to investigate scalarizations and optimality conditions for the constrained set-valued optimization problem. First, we use the improvement set to introduce a new class of generalized convex set-valued maps. Second, under suitable assumptions, we obtain scalarization results for the constrained set-valued optimization problem in the sense of the (weak) optimal solution characterized by the improvement set. Finally, by considering two classes of nonlinear separation functions, we establish the separation between two suitable sets in the image space and derive optimality conditions for the constrained set-valued optimization problem. We show that the existence of a nonlinear separation is equivalent to a saddle-point condition for the generalized Lagrangian set-valued functions.
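The saddle-point condition referred to here can be recalled in its standard scalar schematic form; the set-valued version in the paper replaces L by a generalized Lagrangian set-valued map, but the two-sided inequality pattern is the same.

```latex
% (\bar{x}, \bar{\omega}) is a saddle point of a Lagrangian L on S \times \Omega if
\[
  L(\bar{x}, \omega) \;\le\; L(\bar{x}, \bar{\omega}) \;\le\; L(x, \bar{\omega})
  \qquad \forall\, x \in S,\ \forall\, \omega \in \Omega .
\]
% The left inequality says \bar{\omega} maximizes L(\bar{x}, \cdot) over \Omega;
% the right says \bar{x} minimizes L(\cdot, \bar{\omega}) over S.
```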