In the present paper, we consider large-scale continuous-time differential matrix Riccati equations. To the authors' knowledge, the two main approaches proposed in the literature are based on splitting schemes or on Rosenbrock / Backward Differentiation Formula (BDF) methods. The approach we propose is based on reducing the problem dimension prior to integration. We project the initial problem onto an extended block Krylov subspace and obtain a low-dimensional differential matrix Riccati equation. The latter matrix differential problem is then solved by a BDF method, and the obtained solution is used to reconstruct an approximate solution of the original problem. This process is repeated, increasing the dimension of the projection subspace, until a chosen accuracy is achieved. We give some theoretical results and a simple expression of the residual that allows the implementation of a stopping test to limit the dimension of the projection space. Some numerical experiments are given.
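The BDF time-stepping that the abstract applies to the small projected equation can be illustrated on a toy differential Riccati equation. The sketch below uses BDF(1) (implicit Euler), where each step reduces to an algebraic Riccati equation solved with SciPy; the matrices and step size are made up for illustration, and the paper's Krylov projection step is not reproduced here.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Toy differential Riccati equation
#   X'(t) = A^T X + X A - X B B^T X + C^T C,  X(0) = 0,
# integrated with BDF(1).  Each implicit step
#   X_{k+1} = X_k + h F(X_{k+1})
# is equivalent to the algebraic Riccati equation
#   (hA - I/2)^T X + X (hA - I/2) - X (h B B^T) X + (h C^T C + X_k) = 0,
# which we hand to scipy's dense ARE solver (fine at small, projected size).
def bdf1_riccati(A, B, Q, h, n_steps):
    n = A.shape[0]
    X = np.zeros((n, n))
    for _ in range(n_steps):
        a = h * A - 0.5 * np.eye(n)
        X = solve_continuous_are(a, np.sqrt(h) * B, h * Q + X, np.eye(B.shape[1]))
    return X

# Hand-picked small example (not from the paper).
A = np.array([[-2.0, 1.0], [0.0, -3.0]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 0.0]])
X = bdf1_riccati(A, B, C.T @ C, h=1e-3, n_steps=100)
```

Higher-order BDF variants only change the linear combination of previous iterates on the right-hand side; the per-step algebraic Riccati structure stays the same.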
Abstract. In the present paper, we consider large-scale differential Lyapunov matrix equations having a low-rank constant term. We present two new approaches for the numerical solution of such differential matrix equations. The first approach is based on the integral expression of the exact solution and an approximation method for the computation of the exponential of a matrix times a block of vectors. In the second approach, we first project the initial problem onto a block (or extended block) Krylov subspace and get a low-dimensional differential Lyapunov matrix equation. The latter differential matrix problem is then solved by the Backward Differentiation Formula (BDF) method, and the obtained solution is used to build the low-rank approximate solution of the original problem. This process is repeated until some prescribed accuracy is achieved. We give some new theoretical results and present some numerical experiments.
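The integral expression underlying the first approach is X(t) = e^{tA} X₀ e^{tAᵀ} + ∫₀ᵗ e^{sA} B Bᵀ e^{sAᵀ} ds for X′ = AX + XAᵀ + BBᵀ. The sketch below evaluates this formula directly with dense matrix exponentials and Simpson quadrature on a small made-up example; the paper instead approximates the action of the exponential on a block of vectors to avoid forming e^{tA} at large scale.

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import simpson

# Evaluate the exact-solution formula of the differential Lyapunov equation
#   X'(t) = A X + X A^T + B B^T,  X(0) = X0,
# namely  X(t) = e^{tA} X0 e^{tA^T} + \int_0^t e^{sA} B B^T e^{sA^T} ds,
# using dense expm and composite Simpson quadrature (toy-scale sketch only).
def lyap_integral(A, B, X0, t, n_quad=200):
    s = np.linspace(0.0, t, n_quad + 1)
    vals = np.array([expm(si * A) @ B @ B.T @ expm(si * A).T for si in s])
    integral = simpson(vals, x=s, axis=0)
    E = expm(t * A)
    return E @ X0 @ E.T + integral

# Hand-picked small example (not from the paper).
A = np.array([[-1.0, 2.0], [0.0, -2.0]])
B = np.array([[1.0], [0.5]])
X0 = np.eye(2)
Xt = lyap_integral(A, B, X0, t=1.0)
```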
Abstract: In recent years, great interest has been shown in Krylov subspace techniques applied to model order reduction of large-scale dynamical systems. Special attention has been devoted to single-input single-output (SISO) systems, using moment-matching techniques based on Arnoldi or Lanczos algorithms. In this paper, we consider multiple-input multiple-output (MIMO) dynamical systems and introduce the rational block Arnoldi process to design low-order dynamical systems that are close, in some sense, to the original MIMO dynamical system. Rational Krylov subspace methods are based on the choice of suitable shifts that are selected a priori or adaptively. In this paper, we propose an adaptive selection of those shifts and show the efficiency of this approach in our numerical tests. We also give some new block Arnoldi-like relations that are used to propose an upper bound for the norm of the error on the transfer function.
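A one-sided rational block Krylov projection of this kind can be sketched in a few lines: the basis spans the blocks (sⱼI − A)⁻¹B for a list of shifts, and the reduced model is obtained by Galerkin projection. The well-known interpolation property then guarantees that the reduced transfer function matches the full one exactly at the shifts. The shifts and system below are fixed by hand; the paper's adaptive shift selection is not reproduced here.

```python
import numpy as np

# One-sided rational block Krylov projection for the MIMO transfer function
#   H(s) = C (s I - A)^{-1} B.
# If (s_j I - A)^{-1} B lies in the range of V, the Galerkin-reduced model
# (Am, Bm, Cm) = (V^T A V, V^T B, C V) interpolates H at each shift s_j.
def rational_block_basis(A, B, shifts):
    n = A.shape[0]
    blocks = [np.linalg.solve(s * np.eye(n) - A, B) for s in shifts]
    V, _ = np.linalg.qr(np.hstack(blocks))  # orthonormal basis of the union
    return V

def transfer(A, B, C, s):
    return C @ np.linalg.solve(s * np.eye(A.shape[0]) - A, B)

# Random stable toy system (not from the paper).
rng = np.random.default_rng(0)
n, p = 30, 2
A = -2.0 * np.eye(n) + 0.5 * rng.standard_normal((n, n)) / np.sqrt(n)
B = rng.standard_normal((n, p))
C = rng.standard_normal((p, n))
shifts = [1.0, 5.0]
V = rational_block_basis(A, B, shifts)
Am, Bm, Cm = V.T @ A @ V, V.T @ B, C @ V
```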
In this paper, we propose a block Arnoldi method for solving the continuous low-rank Sylvester matrix equation AX + XB = EF^T. We consider the case where both A and B are large and sparse real matrices, and E and F are real matrices with small rank. We first apply an alternating direction implicit (ADI) preconditioner to our equation, turning it into a Stein matrix equation. We then apply a block Krylov method to the Stein equation to extract low-rank approximate solutions. We give some theoretical results and report numerical experiments to show the efficiency of this method.

Here the unknown matrix X ∈ R^(n×s), the coefficient matrices A ∈ R^(n×n) and B ∈ R^(s×s), and E ∈ R^(n×r) and F ∈ R^(s×r) are full rank with r ≪ n, s. Sylvester equations arise in numerous applied areas such as control and communication theory and model reduction problems [1-3]. The matrix equation (1) also appears in the numerical solution of matrix differential Riccati equations, in decoupling techniques for ordinary and partial differential equations, and in filtering and image restoration (see, e.g., [4-6] and also the references [7,8]). The matrix equation (1) can be reformulated as an ns × ns linear system.
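The shift-based transformation from Sylvester to Stein form can be checked numerically. Adding pX to both sides of AX + XB = EF^T gives (A + pI)X = X(pI − B) + EF^T, i.e. the Stein equation X = (A + pI)⁻¹X(pI − B) + (A + pI)⁻¹EF^T. The sketch below verifies this equivalence on a small made-up example with a single hand-picked shift; the actual ADI shift selection is part of the method and not reproduced here.

```python
import numpy as np
from scipy.linalg import solve_sylvester

# A single ADI-type shift p turns the Sylvester equation  A X + X B = E F^T
# into the Stein equation
#   X = (A + pI)^{-1} X (pI - B) + (A + pI)^{-1} E F^T,
# to which a block Krylov method can then be applied.
rng = np.random.default_rng(1)
n, s, r = 8, 6, 2
A = np.diag(np.arange(1.0, n + 1))     # simple SPD coefficients for the demo
B = np.diag(np.arange(1.0, s + 1))
E = rng.standard_normal((n, r))
F = rng.standard_normal((s, r))

X = solve_sylvester(A, B, E @ F.T)     # reference Sylvester solution
p = 2.0
Ainv = np.linalg.inv(A + p * np.eye(n))
stein_rhs = Ainv @ X @ (p * np.eye(s) - B) + Ainv @ E @ F.T
```

The Stein right-hand side reproduces X exactly, confirming that any fixed point of the Stein iteration solves the original Sylvester equation.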
Summary
In the present paper, we propose Krylov‐based methods for solving large‐scale differential Sylvester matrix equations having a low‐rank constant term. We present two new approaches for solving such differential matrix equations. The first approach is based on the integral expression of the exact solution and a Krylov method for the computation of the exponential of a matrix times a block of vectors. In the second approach, we first project the initial problem onto a block (or extended block) Krylov subspace and get a low‐dimensional differential Sylvester matrix equation. The latter problem is then solved by numerical integration methods such as the Backward Differentiation Formula (BDF) or Rosenbrock method, and the obtained solution is used to build the low‐rank approximate solution of the original problem. We give some new theoretical results, such as a simple expression of the residual norm and upper bounds for the norm of the error. Some numerical experiments are given in order to compare the two approaches.
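The key kernel of the first approach, computing the exponential of a matrix times a vector (or block of vectors), can be approximated in a Krylov subspace: build an orthonormal basis V_m of K_m(A, v), set H_m = V_mᵀAV_m, and approximate e^{tA}v ≈ ‖v‖ V_m e^{tH_m} e₁, so that only a small m×m exponential is needed. The single-vector sketch below shows the principle on a made-up matrix; the paper uses the block version.

```python
import numpy as np
from scipy.linalg import expm

# Arnoldi approximation of the action e^{tA} v:
#   e^{tA} v  ~  ||v|| * V_m * expm(t H_m) * e_1,
# where V_m is an orthonormal basis of K_m(A, v) and H_m = V_m^T A V_m.
def arnoldi_expm(A, v, t, m):
    n = len(v)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(v)
    V[:, 0] = v / beta
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):                 # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:                # happy breakdown
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    e1 = np.zeros(m); e1[0] = 1.0
    return beta * V[:, :m] @ (expm(t * H[:m, :m]) @ e1)

# Random stable toy matrix (not from the paper).
rng = np.random.default_rng(2)
n = 60
A = -np.diag(np.arange(1.0, n + 1)) + 0.1 * rng.standard_normal((n, n))
v = rng.standard_normal(n)
y = arnoldi_expm(A, v, t=0.1, m=25)
```

Only matrix-vector products with A are required, which is what makes the approach viable when A is large and sparse.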
Face recognition and identification are very important applications in machine learning. Due to the increasing amount of available data, traditional approaches based on matricization and matrix PCA methods can be difficult to implement. Tensor approaches are a natural choice, given the very structure of the databases, for example in the case of color images. Nevertheless, even though various authors have proposed factorization strategies for tensors, the size of the considered tensors can pose serious issues. Indeed, the most demanding part of the computational effort in recognition or identification problems resides in the training process. When only a few features are needed to construct the projection space, there is no need to compute an SVD on the whole data. Two versions of the tensor Golub–Kahan algorithm are considered in this manuscript, as an alternative to the classical use of the tensor SVD, which is based on truncated strategies. In this paper, we consider the Tensor Tubal Golub–Kahan Principal Component Analysis method, whose purpose is to extract the main features of images using the tensor singular value decomposition (SVD) based on the tensor cosine product, which uses the discrete cosine transform. This approach is applied to classification and face recognition, and numerical tests show its effectiveness.
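The transform-domain structure behind such tensor SVDs can be sketched compactly. Under a cosine-transform-based tensor product, each tube along the third mode is moved into the transform domain, an ordinary SVD is taken of every frontal slice there, and the result is transformed back; truncating the slice-wise SVDs yields the low-rank features. The sketch below uses a plain orthonormal DCT along mode 3 as the transform, which is a simplification of the cosine product used in the paper, and only checks exact reconstruction at full rank on random data.

```python
import numpy as np
from scipy.fft import dct, idct

# Transform-domain tensor SVD sketch: DCT along mode 3, slice-wise SVDs in
# the transform domain, inverse DCT to come back.  Truncation (k < min(n1,n2))
# would give the low-rank feature extraction used for recognition.
def tsvd_dct(T, k=None):
    That = dct(T, axis=2, norm='ortho')
    n1, n2, n3 = T.shape
    k = k or min(n1, n2)
    U = np.zeros((n1, k, n3)); S = np.zeros((k, k, n3)); V = np.zeros((n2, k, n3))
    for i in range(n3):
        u, s, vt = np.linalg.svd(That[:, :, i], full_matrices=False)
        U[:, :, i], S[:, :, i], V[:, :, i] = u[:, :k], np.diag(s[:k]), vt[:k].T
    # Reconstruct slice by slice in the transform domain, then invert the DCT.
    Rhat = np.stack([U[:, :, i] @ S[:, :, i] @ V[:, :, i].T for i in range(n3)],
                    axis=2)
    return idct(Rhat, axis=2, norm='ortho'), U, S, V

rng = np.random.default_rng(3)
T = rng.standard_normal((6, 5, 4))   # tiny stand-in for an image tensor
R, U, S, V = tsvd_dct(T)
```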
In the present paper, we propose a preconditioned Newton-Block Arnoldi method for solving large-scale continuous-time algebraic Riccati equations. Such equations appear in control theory, model reduction, and circuit simulation, among other problems. At each step of the Newton process, we solve a large Lyapunov matrix equation with a low-rank right-hand side. These equations are solved by using the block Arnoldi process associated with a preconditioner based on the alternating direction implicit (ADI) iteration method. We give some theoretical results and report numerical tests to show the effectiveness of the proposed approach.
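The outer Newton iteration referred to here is the classical Newton–Kleinman scheme: each step linearizes the Riccati equation and leaves a Lyapunov equation to solve. The sketch below runs that iteration on a tiny made-up example, delegating the inner Lyapunov solves to SciPy's dense solver, whereas the paper solves them with the preconditioned block Arnoldi process at large scale.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Newton-Kleinman iteration for the CARE (with R = I)
#   A^T X + X A - X B B^T X + Q = 0.
# Each step solves the Lyapunov equation
#   (A - B K_k)^T X_{k+1} + X_{k+1} (A - B K_k) = -(Q + K_k^T K_k),
# with K_k = B^T X_k.  Since A is stable here, X_0 = 0 is admissible.
def newton_care(A, B, Q, iters=15):
    n = A.shape[0]
    X = np.zeros((n, n))
    for _ in range(iters):
        K = B.T @ X                       # current feedback gain
        Ak = A - B @ K                    # closed-loop matrix (stays stable)
        X = solve_continuous_lyapunov(Ak.T, -(Q + K.T @ K))
    return X

# Hand-picked small example (not from the paper).
A = np.array([[-1.0, 1.0], [0.0, -2.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
X = newton_care(A, B, Q)
```

The right-hand side −(Q + KᵀK) has low rank whenever Q and the input dimension are of low rank, which is exactly what makes the block Arnoldi inner solver effective.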