A Relaxed Gradient Based Algorithm for Solving Extended Sylvester‐Conjugate Matrix Equations
2013 | DOI: 10.1002/asjc.805

Abstract: In this paper, a relaxed gradient based algorithm for solving extended Sylvester-conjugate matrix equations is proposed; the method introduces a relaxation parameter into the gradient iteration. The convergence of the algorithm is investigated, and theoretical analysis shows that the new method converges under certain assumptions. A numerical example is given to illustrate the effectiveness of the proposed method and to compare its efficiency with that of an existing algorithm.

Cited by 19 publications (28 citation statements) | References 31 publications
“…Ramadan et al. presented the following algorithm for solving the extended Sylvester-conjugate matrix equation $AXB + C\bar{X}D = F$:

$$X_1(k) = X_1(k-1) + \mu(1-\omega)\,A^H\!\left(F - A X_1(k-1) B - C\,\overline{X_1(k-1)}\,D\right)B^H,$$
$$X_2(k) = X_2(k-1) + \mu\omega\,\bar{C}^H\!\left(\bar{F} - \bar{A}\,\overline{X_2(k-1)}\,\bar{B} - \bar{C}\,X_2(k-1)\,\bar{D}\right)\bar{D}^H.$$

Since the approximate solution $X(k)$ is wanted rather than $X_1(k)$ and $X_2(k)$, we propose the following balanced strategy to form the $k$-th approximate solution:

$$X(k) = \omega X_1(k) + (1-\omega)X_2(k),$$

where $\omega$ is a relaxation parameter satisfying $0 < \omega < 1$; it controls the relative importance of the two residual matrices.…”
Section: Preliminaries
“…$$X_1(k) = X_1(k-1) + \mu(1-\omega)\,A^H\!\left(F - A X_1(k-1) B - C\,\overline{X_1(k-1)}\,D\right)B^H,$$
$$X_2(k) = X_2(k-1) + \mu\omega\,\bar{C}^H\!\left(\bar{F} - \bar{A}\,\overline{X_2(k-1)}\,\bar{B} - \bar{C}\,X_2(k-1)\,\bar{D}\right)\bar{D}^H.$$

Since the approximate solution $X(k)$ is wanted rather than $X_1(k)$ and $X_2(k)$, we propose the following balanced strategy to form the $k$-th approximate solution:

$$X(k) = \omega X_1(k) + (1-\omega)X_2(k),$$

where $\omega$ is a relaxation parameter satisfying $0 < \omega < 1$; it controls the relative importance of the two residual matrices. It is shown in the cited work that the relaxed gradient iterative algorithm converges as long as

$$0 < \mu < \frac{2\omega(1-\omega)}{\lambda_{\max}(AA^H)\,\lambda_{\max}(B^H B) + \lambda_{\max}(\bar{C}\bar{C}^H)\,\lambda_{\max}(\bar{D}^H\bar{D}) + 2\sigma_{\max}(\bar{D}\ldots)}$$…”
Section: Preliminaries
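The quoted step-size bound can be evaluated numerically. A sketch under stated assumptions: the matrices are square and complex, and the truncated $2\sigma_{\max}(\ldots)$ term (whose argument is cut off in the excerpt) is omitted, so the returned value over-estimates the admissible range for $\mu$.

```python
import numpy as np

def mu_upper_bound(A, B, C, D, omega):
    """Evaluate the quoted sufficient bound
        2*omega*(1-omega) / (lam_max(A A^H) lam_max(B^H B)
                             + lam_max(Cbar Cbar^H) lam_max(Dbar^H Dbar) + ...),
    WITHOUT the truncated 2*sigma_max(...) term from the excerpt, so the
    true admissible step-size range is smaller than this value suggests.
    """
    # Largest eigenvalue of a Hermitian matrix (eigvalsh returns ascending order).
    lam_max = lambda M: np.linalg.eigvalsh(M)[-1]
    Cb, Db = np.conj(C), np.conj(D)
    denom = (lam_max(A @ A.conj().T) * lam_max(B.conj().T @ B)
             + lam_max(Cb @ Cb.conj().T) * lam_max(Db.conj().T @ Db))
    return 2.0 * omega * (1.0 - omega) / denom
```

For identity matrices and `omega = 0.5` the (truncated) bound evaluates to `2 * 0.25 / (1 + 1) = 0.25`; scaling any factor matrix up shrinks the bound, as the denominator grows.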
“…Such problems arise in the solution of large eigenvalue problems and in boundary value problems. They play an important role in linear control and filtering theory for continuous- or discrete-time large-scale dynamical systems, in image restoration and processing, and in other applications such as model reduction, the numerical solution of matrix differential Riccati equations, and many more.…”
Section: Introduction