2020
DOI: 10.1177/0142331219899745
Truncated model reduction methods for linear time-invariant systems via eigenvalue computation

Abstract: This paper provides three model reduction methods for linear time-invariant systems based on the Riemannian Newton method and the Jacobi-Davidson method. First, the computation of Hankel singular values is converted into a linear eigenproblem by a similarity transformation, and the Riemannian Newton method is used to establish the first model reduction method. In addition, we introduce a block version of the Jacobi-Davidson method for the linear eigenproblem and present the corresponding model reduction method…
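For context on the abstract's starting point: for a stable LTI system (A, B, C), the Hankel singular values are the square roots of the eigenvalues of the product PQ of the controllability and observability Gramians, which is the quantity the paper's eigenproblem formulations target. The sketch below computes them directly via dense Lyapunov solves and a dense eigensolver; this is a minimal illustration of the underlying definition, not the paper's Riemannian Newton or block Jacobi-Davidson methods, which are iterative schemes aimed at large-scale problems. The example system is invented for illustration.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, eigvals

def hankel_singular_values(A, B, C):
    """Hankel singular values of a stable LTI system (A, B, C)."""
    # Controllability Gramian P solves: A P + P A^T + B B^T = 0
    P = solve_continuous_lyapunov(A, -B @ B.T)
    # Observability Gramian Q solves: A^T Q + Q A + C^T C = 0
    Q = solve_continuous_lyapunov(A.T, -C.T @ C)
    # Hankel singular values are sqrt of the eigenvalues of P Q
    # (real and nonnegative for a stable, minimal system)
    return np.sort(np.sqrt(np.real(eigvals(P @ Q))))[::-1]

# Small stable example (hypothetical, for illustration only)
A = np.array([[-1.0, 0.0],
              [0.0, -2.0]])
B = np.array([[1.0],
              [1.0]])
C = np.array([[1.0, 1.0]])
hsv = hankel_singular_values(A, B, C)
```

A large gap between consecutive Hankel singular values indicates states that contribute little to the input-output map, which is what truncation-based reduction exploits.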

Cited by 2 publications (1 citation statement) | References 35 publications
“…Therefore, model order reduction becomes important and popular. So far, many effective methods have emerged to reduce the large-scale linear time invariant (LTI) models, such as Krylov subspace method (Grimme, 1997; Yuan et al, 2018), balanced truncation method (Haider et al, 2017; Yang and Jiang, 2020; Zhou et al, 2001), orthogonal decomposition method (Yuan et al, 2018), optimal model order reduction method (Jiang and Xu, 2017; Sato and Sato, 2017; Vasu et al, 2018), and so on.…”
Section: Introduction (mentioning)
confidence: 99%