“…To prove that the lower bound on the α-weight angle exists, we make use of the result derived by Bauer and Householder [3] who extended the Kantorovich inequality described in Theorem 1. The proof for this is investigated by Huang and Zhou [25].…”
Section: Upper Bound On the α-Weight Angle — mentioning
We investigate and extend the result of Golts and Jones [18] that the alpha-weight angle resulting from unconstrained quadratic portfolio optimisations has an upper bound dependent on the condition number of the covariance matrix. This implies that better-conditioned covariance matrices produce weights from unconstrained mean-variance optimisations that are better aligned with each asset's expected return. We provide further clarity on the mathematical insights underlying the inequality that relates the α-weight angle to the condition number, and extend the result to portfolio optimisations with gearing constraints. We also provide an extended family of robust optimisations that include the gearing constraints, and discuss their interpretation.
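The bound described above can be illustrated numerically. The sketch below, with an illustrative covariance matrix and alpha vector (not taken from the paper), computes the angle between the alpha vector α and the unconstrained mean-variance weights w ∝ Σ⁻¹α, and checks it against the Kantorovich-type bound θ ≤ arccos(2√κ / (1 + κ)), where κ is the condition number of Σ:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 4-asset example: Sigma and alpha are illustrative only.
n = 4
M = rng.standard_normal((n, n))
Sigma = M @ M.T + n * np.eye(n)          # positive definite covariance matrix
alpha = rng.standard_normal(n)           # expected-return (alpha) vector

w = np.linalg.solve(Sigma, alpha)        # unconstrained mean-variance weights, w ∝ Σ⁻¹α

# Angle between alpha and the optimal weights.
cos_theta = (alpha @ w) / (np.linalg.norm(alpha) * np.linalg.norm(w))
theta = np.arccos(np.clip(cos_theta, -1.0, 1.0))

# Condition-number-dependent upper bound on the α-weight angle.
kappa = np.linalg.cond(Sigma)            # λ_max / λ_min
theta_bound = np.arccos(2.0 * np.sqrt(kappa) / (1.0 + kappa))

print(theta <= theta_bound + 1e-12)
```

As κ → 1 the bound forces θ → 0, which is the sense in which well-conditioned covariance matrices keep the optimised weights aligned with the alphas.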
“…A particular instance of the problem is when B = A^{-1}. Then the Legendre-Fenchel transform of f(x) = q_A(x) q_{A^{-1}}(x) would allow us to recover inequalities like that of Kantorovich [Huang05]:…”
Section: Problem 11: The Legendre-Fenchel Transform Of The Product Of — mentioning
We present a collection of fourteen conjectures and open problems in the fields of nonlinear analysis and optimization. These problems can be classified into three groups: problems of pure mathematical interest, problems motivated by scientific computing and applications, and problems whose solutions are known but for which we would like to know better proofs. For each problem we provide a succinct presentation, a list of appropriate references, and a view of the state of the art of the subject.
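The Kantorovich inequality referenced above states that for a symmetric positive definite matrix A with extreme eigenvalues λ_min and λ_max, (xᵀAx)(xᵀA⁻¹x) ≤ ((λ_min + λ_max)² / (4 λ_min λ_max)) (xᵀx)². A minimal numerical check, using an illustrative random SPD matrix:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative SPD matrix (not from the paper).
n = 5
M = rng.standard_normal((n, n))
A = M @ M.T + np.eye(n)
A_inv = np.linalg.inv(A)

# Classical Kantorovich constant from the extreme eigenvalues.
eigs = np.linalg.eigvalsh(A)
lam_min, lam_max = eigs[0], eigs[-1]
C = (lam_min + lam_max) ** 2 / (4.0 * lam_min * lam_max)

# Spot-check the inequality on random vectors.
for _ in range(1000):
    x = rng.standard_normal(n)
    lhs = (x @ A @ x) * (x @ A_inv @ x)
    rhs = C * (x @ x) ** 2
    assert lhs <= rhs * (1 + 1e-10)
print("Kantorovich inequality held on all samples")
```

Note that C ≥ 1 always, with equality exactly when A is a multiple of the identity, which is why the bound tightens as the matrix becomes better conditioned.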
“…This inequality and its variants have many applications in matrix analysis, statistics, numerical algebra, and optimization (see e.g. [7,11,15,16,22,23,25,26,30,32,33,34]). In this paper, K(x) is referred to as the 'Kantorovich function'.…”
The Kantorovich function (x^T A x)(x^T A^{-1} x), where A is a positive definite matrix, is not convex in general. From the matrix/convex-analysis point of view, it is interesting to ask: when is this function convex? In this paper, we investigate the convexity of this function via the condition number of its matrix. In 2-dimensional space, we prove that the Kantorovich function is convex if and only if the condition number of its matrix is bounded above by 3 + 2√2, so the convexity of the function with two variables is completely characterised by the condition number. The upper bound 3 + 2√2 turns out to be a necessary condition for the convexity of Kantorovich functions in any finite-dimensional space. We also point out that when the condition number of the matrix (of any dimension) is less than or equal to 5 + 2√6, the Kantorovich function is convex. Furthermore, we prove that this general sufficient convexity condition can be remarkably improved in 3-dimensional space. Our analysis shows that the convexity of the function is closely related to some modern optimization topics, such as semi-infinite linear matrix inequalities and 'robust positive semi-definiteness' of symmetric matrices. In fact, our main result for the 3-dimensional case was proved by finding an explicit solution range to some semi-infinite linear matrix inequalities.
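The 2-dimensional characterisation can be spot-checked numerically. The sketch below takes A = diag(1, κ) with κ = 5, which lies below the threshold 3 + 2√2 ≈ 5.83, so the theorem guarantees convexity; it then verifies midpoint convexity, K((x+y)/2) ≤ (K(x)+K(y))/2, on random pairs. (This is a consistency check of the stated result, not a proof.)

```python
import numpy as np

rng = np.random.default_rng(2)

kappa = 5.0                        # below the 2-D convexity threshold 3 + 2*sqrt(2)
A = np.diag([1.0, kappa])
A_inv = np.diag([1.0, 1.0 / kappa])

def K(x):
    """Kantorovich function (x^T A x)(x^T A^{-1} x)."""
    return (x @ A @ x) * (x @ A_inv @ x)

# Midpoint-convexity spot check on random pairs of points.
ok = all(
    K((x + y) / 2) <= 0.5 * (K(x) + K(y)) + 1e-9
    for x, y in (rng.standard_normal((2, 2)) for _ in range(2000))
)
print(ok)
```

Repeating the same check with κ well above 3 + 2√2 can surface violating pairs, consistent with the "only if" direction of the theorem, though random sampling is not guaranteed to find one.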