This paper is an expository article devoted to an important class of real-valued functions introduced by Löwner, namely, operator monotone functions. This concept is closely related to operator convex/concave functions. From the viewpoint of differential analysis, various characterizations of such functions are given in terms of matrices of divided differences. From the viewpoint of operator inequalities, Hansen and Pedersen gave various characterizations and clarified the relationship between operator monotonicity and operator convexity. From the viewpoint of measure theory, operator monotone functions on the nonnegative reals admit meaningful integral representations with respect to Borel measures on the unit interval. Furthermore, Kubo-Ando theory establishes a correspondence between operator monotone functions and operator means.
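The divided-difference characterization mentioned above can be checked numerically in a simple finite case. The sketch below (an illustration, not taken from the paper) builds the matrix of divided differences of f(t) = √t, which is operator monotone on [0, ∞), at a few sample points; by Löwner's characterization such a matrix must be positive semidefinite.

```python
import numpy as np

def divided_difference_matrix(f, fprime, points):
    """Matrix of divided differences (f(x_i) - f(x_j)) / (x_i - x_j) at
    distinct points, with f'(x_i) on the diagonal (the limiting value)."""
    x = np.asarray(points, dtype=float)
    n = len(x)
    L = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            L[i, j] = fprime(x[i]) if i == j else (f(x[i]) - f(x[j])) / (x[i] - x[j])
    return L

# f(t) = sqrt(t) is operator monotone, so this matrix should be
# positive semidefinite; here its entries are 1 / (sqrt(x_i) + sqrt(x_j)).
L = divided_difference_matrix(np.sqrt, lambda t: 0.5 / np.sqrt(t), [1.0, 2.0, 5.0, 9.0])
print(np.linalg.eigvalsh(L).min())  # smallest eigenvalue, expected >= 0
```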
Let B(H) be the space of all bounded linear operators on a complex separable Hilbert space H. The Bohr inequality for Hilbert space operators asserts that for A, B ∈ B(H) and real numbers p, q > 1 such that 1/p + 1/q = 1, |A + B|² ≤ p|A|² + q|B|², with equality if and only if B = (p − 1)A. In this paper, a number of generalizations of the Bohr inequality for operators in B(H) are established. Moreover, Bohr inequalities are extended to multiple operators and some related inequalities are obtained. The results in this paper generalize results known so far. The key idea is to transform problems in operator theory into problems in matrix theory, which are easier to handle.
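In finite dimensions the inequality can be verified directly. The sketch below (illustrative matrices, not from the paper) checks that p|A|² + q|B|² − |A + B|², with |T|² = T*T, equals C*C for C = √(p−1)·A − B/√(p−1) and hence is positive semidefinite, vanishing exactly when B = (p − 1)A.

```python
import numpy as np

rng = np.random.default_rng(0)
p = 3.0
q = p / (p - 1.0)            # conjugate exponent: 1/p + 1/q = 1
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

def abs2(T):                 # |T|^2 = T* T
    return T.conj().T @ T

# Bohr inequality: the difference below is positive semidefinite.
diff = p * abs2(A) + q * abs2(B) - abs2(A + B)

# Reason: expanding shows diff = C* C with C as follows.
C = np.sqrt(p - 1) * A - B / np.sqrt(p - 1)
print(np.linalg.norm(diff - abs2(C)))   # ~ 0

# Equality case: with B = (p - 1) A we have C = 0, so diff vanishes.
diff_eq = p * abs2(A) + q * abs2((p - 1) * A) - abs2(A + (p - 1) * A)
print(np.linalg.norm(diff_eq))          # ~ 0
```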
We introduce the notion of Khatri-Rao product for operator matrices acting on the direct sum of Hilbert spaces. This notion generalizes the tensor product and Hadamard product of operators and the Khatri-Rao product of matrices. We investigate algebraic properties, positivity, and monotonicity of the Khatri-Rao product. Moreover, there is a unital positive linear map taking Tracy-Singh products to Khatri-Rao products via an isometry.
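For intuition in the finite-dimensional, column-partitioned special case (a sketch, not the paper's operator-matrix construction), the Khatri-Rao product takes the Kronecker product block by block; when every block is a single column, each column of the product is the Kronecker product of the corresponding columns, so the result is a column selection from the full Kronecker product.

```python
import numpy as np

def khatri_rao(A, B):
    """Column-wise Khatri-Rao product: block j is kron(A[:, j], B[:, j])."""
    if A.shape[1] != B.shape[1]:
        raise ValueError("A and B need the same number of columns")
    return np.column_stack([np.kron(A[:, j], B[:, j]) for j in range(A.shape[1])])

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[0.0, 1.0], [1.0, 1.0], [2.0, 0.0]])
K = khatri_rao(A, B)
print(K.shape)   # (6, 2): column j of K is column j*2 + j of np.kron(A, B)
```

This matches the statement that Khatri-Rao products sit inside Tracy-Singh (here: Kronecker) products via a structure-preserving map.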
We derive an iterative procedure for solving a generalized Sylvester matrix equation $AXB+CXD = E$, where $A,B,C,D,E$ are conforming rectangular matrices. Our algorithm is based on gradients and the hierarchical identification principle. We convert the matrix iteration process to a first-order linear difference vector equation with matrix coefficient. The Banach contraction principle reveals that the sequence of approximated solutions converges to the exact solution for any initial matrix if and only if the convergence factor belongs to an open interval. The contraction principle also gives the convergence rate and the error analysis, governed by the spectral radius of the associated iteration matrix. We obtain the fastest convergence factor so that the spectral radius of the iteration matrix is minimized. In particular, we obtain iterative algorithms for the matrix equation $AXB=C$, the Sylvester equation, and the Kalman–Yakubovich equation. We give numerical experiments of the proposed algorithm to illustrate its applicability, effectiveness, and efficiency.
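A minimal sketch of a gradient iteration for $AXB+CXD=E$ follows (a plain gradient step on the squared Frobenius residual, not necessarily the paper's exact hierarchical scheme; the matrices and the step-size rule are illustrative assumptions). Vectorizing gives vec(AXB + CXD) = (Bᵀ⊗A + Dᵀ⊗C) vec(X), and a convergence factor below 2/σ_max(M)² for that operator M keeps the iteration contractive.

```python
import numpy as np

# Illustrative data with a known solution (assumed, not from the paper).
A = np.diag([2.0, 3.0]); B = np.eye(2)
C = np.eye(2);           D = np.diag([1.0, 2.0])
X_true = np.array([[1.0, 2.0], [3.0, 4.0]])
E = A @ X_true @ B + C @ X_true @ D

# Vectorized operator: vec(AXB + CXD) = (B^T kron A + D^T kron C) vec(X).
M = np.kron(B.T, A) + np.kron(D.T, C)
mu = 1.0 / np.linalg.norm(M, 2) ** 2     # safe convergence factor

X = np.zeros_like(E)                     # any initial matrix works
for _ in range(200):
    R = E - A @ X @ B - C @ X @ D        # residual
    X = X + mu * (A.T @ R @ B.T + C.T @ R @ D.T)   # negative gradient step
print(np.linalg.norm(X - X_true))        # decreases geometrically
```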
The geometry on a slope of a mountain is the geometry of a Finsler metric, called here the slope metric. We study the existence of globally defined slope metrics on surfaces of revolution, as well as the behavior of their geodesics. A comparison between Finslerian and Riemannian areas of a bounded region is also given.
We propose a new iterative method for solving a generalized Sylvester matrix equation A1XA2+A3XA4=E with given square matrices A1,A2,A3,A4 and an unknown rectangular matrix X. The method constructs a sequence of approximated solutions that converges to the exact solution regardless of the initial value. We decompose each coefficient matrix into the sum of its diagonal part and the remaining part. The recursive formula for the iteration is derived from the gradients of quadratic norm-error functions, together with the hierarchical identification principle. We find equivalent conditions on the convergence factor, depending on the eigenvalues of the associated iteration matrix, under which the method is applicable as desired. The convergence rate and error estimates of the method are governed by the spectral norm of the related iteration matrix. Furthermore, we present numerical examples of the proposed method to show its capability and efficacy, compared to recent gradient-based iterative methods.
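The convergence criterion via eigenvalues of an iteration matrix can be illustrated on a generic linear fixed-point scheme (a Jacobi-type diagonal splitting of a small system; this is an analogy, not the paper's specific iteration): x_{k+1} = T x_k + c converges from every initial vector exactly when the spectral radius ρ(T) < 1.

```python
import numpy as np

# Splitting A = D + R into diagonal and off-diagonal parts gives the
# fixed-point form x_{k+1} = T x_k + c with T = -D^{-1} R, c = D^{-1} b.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
Dinv = np.diag(1.0 / np.diag(A))
R = A - np.diag(np.diag(A))
T = -Dinv @ R
c = Dinv @ b

rho = max(abs(np.linalg.eigvals(T)))   # spectral radius of the iteration matrix
print(rho)                             # ~0.289 < 1, so the iteration converges

x = np.zeros(2)                        # any initial vector works when rho < 1
for _ in range(100):
    x = T @ x + c
print(x)                               # approaches solve(A, b)
```

The error satisfies e_{k+1} = T e_k, so ρ(T) also dictates the asymptotic convergence rate, in line with the abstract's error analysis.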
We investigate a system of coupled non-homogeneous linear matrix differential equations. By applying the diagonal extraction operator, this system is reduced to a simple vector-matrix differential equation. An explicit formula of the general solution is then obtained in terms of matrix convolution product, Hadamard product, and elementary matrix functions. Moreover, we discuss certain special cases of the main system when initial conditions are imposed.
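As a generic illustration of reducing a matrix differential equation to a vector-matrix one (using ordinary column-stacking vectorization rather than the paper's diagonal extraction operator; matrices are illustrative), the equation X′(t) = AX + XB has vec solution exp(t(I⊗A + Bᵀ⊗I)) vec(X0), which agrees with the closed form X(t) = e^{tA} X0 e^{tB}.

```python
import numpy as np

def expm_sym(S):
    """Matrix exponential of a symmetric matrix via eigendecomposition."""
    w, V = np.linalg.eigh(S)
    return V @ np.diag(np.exp(w)) @ V.T

# Matrix ODE X'(t) = A X + X B, X(0) = X0 (symmetric A, B for simplicity).
A = np.array([[0.0, 1.0], [1.0, 0.0]])
B = np.array([[1.0, 0.0], [0.0, -1.0]])
X0 = np.array([[1.0, 2.0], [3.0, 4.0]])
t = 0.5

# Column stacking: vec(AX + XB) = (I kron A + B^T kron I) vec(X),
# so vec(X(t)) = expm(t (I kron A + B^T kron I)) vec(X0).
I = np.eye(2)
K = np.kron(I, A) + np.kron(B.T, I)
vecXt = expm_sym(t * K) @ X0.flatten(order="F")
Xt_vec = vecXt.reshape((2, 2), order="F")

# Closed form for comparison: X(t) = expm(tA) X0 expm(tB).
Xt = expm_sym(t * A) @ X0 @ expm_sym(t * B)
print(np.linalg.norm(Xt - Xt_vec))     # ~ 0
```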
We introduce an effective iterative method for solving rectangular linear systems, based on gradients along with the steepest descent optimization. We show that the proposed method is applicable with any initial vectors as long as the coefficient matrix is of full column rank. Convergence analysis produces error estimates and the asymptotic convergence rate of the algorithm, which is governed by the term $\sqrt{1-\kappa^{-2}}$, where $\kappa$ is the condition number of the coefficient matrix. Moreover, we apply the proposed method to a sparse linear system arising from a discretization of the one-dimensional Poisson equation. Numerical simulations illustrate the capability and effectiveness of the proposed method in comparison to the well-known and recent methods.
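A minimal steepest-descent sketch for a full-column-rank rectangular system follows (exact line search on the squared residual; the specific matrices and the starting vector are illustrative assumptions, and the paper's exact scheme may differ in details).

```python
import numpy as np

# Full-column-rank rectangular system; the least-squares solution is unique.
A = np.array([[1.0, 0.0], [0.0, 2.0], [1.0, 1.0]])
b = np.array([1.0, -2.0, 0.0])         # here b = A @ [1, -1], so the residual can reach 0

x = np.zeros(2)                        # any initial vector is admissible
for _ in range(200):
    g = A.T @ (A @ x - b)              # gradient of 0.5 * ||Ax - b||^2
    if np.linalg.norm(g) < 1e-14:
        break                          # already at the least-squares solution
    Ag = A @ g
    alpha = (g @ g) / (Ag @ Ag)        # exact line search along -g
    x = x - alpha * g
print(x)                               # approaches [1, -1]
```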