2011
DOI: 10.1016/j.parco.2010.12.002

A mixed-precision algorithm for the solution of Lyapunov equations on hybrid CPU–GPU platforms

Cited by 35 publications (20 citation statements)
References 25 publications (27 reference statements)
“…Compute R_{k+1} via a matrix–matrix product (n^2 × k_c flops). Therefore, most of the computational effort is concentrated in the calculation of the matrix inverse A_k^{-1}. This is reinforced by the numerical results reported in a previous work, where the same method is employed to solve a single Lyapunov equation (only steps 1 to 3 are required) with general dense coefficient matrices [5]. In that work, despite the use of a GPU to accelerate the computation of the inverse, this operation represented 85% and 91% of the total computation time for two problems of dimension 5,177 and 9,699, respectively.…”
Section: The Sign Function Method
confidence: 78%
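The excerpt above refers to the Newton iteration for the matrix sign function, in which each step is dominated by one dense inversion A_k^{-1}. A minimal dense sketch of that iteration for the Lyapunov equation A X + X A^T + Q = 0 (assuming A stable; scaling and the cited GPU offloading are omitted — this is an illustration of the principle, not the paper's implementation):

```python
import numpy as np

def lyap_sign(A, Q, tol=1e-10, maxit=50):
    """Solve A X + X A^T + Q = 0 (A stable) via the sign-function
    Newton iteration: A_{k+1} = (A_k + A_k^{-1}) / 2, with Q updated
    so that the solution X stays invariant. X = lim Q_k / 2."""
    Ak, Qk = A.astype(float).copy(), Q.astype(float).copy()
    for _ in range(maxit):
        Ainv = np.linalg.inv(Ak)           # dominant cost of each step
        Anext = 0.5 * (Ak + Ainv)
        Qk = 0.5 * (Qk + Ainv @ Qk @ Ainv.T)
        done = np.linalg.norm(Anext - Ak, 1) <= tol * np.linalg.norm(Ak, 1)
        Ak = Anext
        if done:
            break
    return 0.5 * Qk
```

The invariant behind the update is that A_k X + X A_k^T + Q_k = 0 holds at every step; since A_k converges to sign(A) = -I for stable A, the limit gives -2X + Q_inf = 0.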
“…In previous works we have addressed the cases where the state-space matrix A is sparse [4], a general dense matrix [5], and a band matrix [6]. In this work we focus on the case where −A is a dense symmetric positive definite (SPD) matrix.…”
Section: Introduction
confidence: 99%
“…The necessary theory was derived in [80] and [76, Chapter 10]. We will provide this here in some detail, as the defect correction principle has, despite its importance in practice, received little attention since; in particular, new computing platforms including hardware accelerators may use this principle to obtain fast and reliable algorithms, as has already been suggested for the Lyapunov equation in [18].…”
Section: Defect Correction
confidence: 97%
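The defect correction principle invoked above is the same one that underlies mixed-precision iterative refinement: do the expensive solve in low precision, compute the residual (defect) in high precision, and correct. A minimal sketch for a linear system — a hypothetical illustration of the principle, not the Lyapunov-specific algorithm of [18]:

```python
import numpy as np

def refine_solve(A, b, tol=1e-12, maxit=10):
    """Mixed-precision defect correction for A x = b: solve in
    float32 (fast, e.g. on a GPU), accumulate residual and
    correction in float64."""
    A32 = A.astype(np.float32)
    x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
    for _ in range(maxit):
        r = b - A @ x                     # defect, in double precision
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        d = np.linalg.solve(A32, r.astype(np.float32))  # cheap correction
        x += d.astype(np.float64)
    return x
```

For a well-conditioned system, each cheap low-precision correction multiplies the error by roughly cond(A) times single-precision unit roundoff, so a handful of steps recovers double-precision accuracy while the dominant work runs in single precision.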
“…An interesting avenue for further research would be to derive a mixed-precision CPU–GPU variant of Algorithm DC_CARE in the fashion of [18]. Also, in the case of large-scale sparse solvers for CAREs, as recently reviewed in [32], where low-rank approximations to the stabilizing solution are computed, it would be necessary to represent Q_F in low-rank format to use this concept.…”
Section: Theorem 8, Suppose the Invariant Subspace of the Hamiltonian
confidence: 99%
“…In recent years, Graphics Processing Units (GPUs) have shown remarkable performance in the computation of large-scale matrix operations and, in particular, in the solution of matrix equations; see [4,5] among others. In addition to their performance, GPUs offer other attractive properties, such as a low watts-per-flop ratio and an affordable price.…”
Section: Introduction
confidence: 99%