2008
DOI: 10.7153/jmi-02-05

Characterizations of convex functions of a vector variable via Hermite-Hadamard's inequality

Abstract. The classical Hermite-Hadamard inequality characterizes the continuous convex functions of one real variable. The aim of the present paper is to give an analogous characterization for functions of a vector variable.

The Hermite-Hadamard inequality. In a letter sent on November 22, 1881, to the journal Mathesis (and published there two years later), Ch. Hermite [10] noted that every convex function f : … The left-hand side inequality was rediscovered ten years later by J. Hadamard [7]. Nowadays, the double…
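The displayed formula after "f :" was dropped in this excerpt; in its standard form, the double inequality Hermite stated for a convex function f : [a, b] → ℝ reads (reproduced here from the classical statement, not from the excerpt itself):

```latex
% Classical Hermite-Hadamard inequality for a convex f : [a, b] -> R
f\!\left(\frac{a+b}{2}\right)
  \;\le\; \frac{1}{b-a}\int_a^b f(x)\,dx
  \;\le\; \frac{f(a)+f(b)}{2}
```

The "left-hand side inequality" rediscovered by Hadamard is the first of the two estimates above.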

Cited by 11 publications (9 citation statements). References 15 publications (9 reference statements).
“…Improvements of the Hermite-Hadamard inequality for univariate convex functions were obtained in [11]. As for the Hermite-Hadamard inequality for multivariate convex functions, one may refer to [2, 4, 5, 12–16], and [17].…”

Section: Convex Functions on the Simplex
confidence: 99%
“… is given and …, to obtain the optimal solution of the NLP problem (6). We note that, for sufficiently large m, the optimal solution of the NLP problem (6) is an approximate optimal solution for the NLP problem (2).…”

Section: Main Idea
confidence: 99%
“…Assume … (see [6,7]). Here, by assumption … (where M is a sufficiently large number), the infinite-dimensional nonsmooth optimization problem (11) can be approximated by the following finite-dimensional smooth problem (see [12]): (12) Further, in [11,12] …, to obtain the optimal solution of the NLP problem (15), and then obtain an approximate optimal solution for the nonsmooth optimization problem (2) to determine the validity of the nonsmooth inequality (1).…”

Section: B. Linearization Approach in Order to Solve the Nonsmooth Op…
confidence: 99%
“…We extend our … The reader is referred to [5] and [6] for the original papers on Hadamard's inequality, and to [1], [4], [7], [8], [9], [10] or [13] for some new developments on this topic.…”

Section: Introduction
confidence: 99%