2017
DOI: 10.2298/fil1708381t
An accelerated Jacobi-gradient based iterative algorithm for solving Sylvester matrix equations

Abstract: In this paper, an accelerated Jacobi-gradient based iterative (AJGI) algorithm for solving Sylvester matrix equations is presented, building on the algorithms proposed by Ding and Chen [6], Niu et al. [18] and Xie et al. [25]. Theoretical analysis shows that the new algorithm converges to the true solution for any initial value under certain assumptions. Finally, three numerical examples are given to verify the efficiency of the accelerated algorithm proposed in this paper.
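The abstract does not reproduce the AJGI update itself. For orientation, the sketch below shows the classical gradient-based iterative (GI) scheme of Ding and Chen [6] for AX + XB = C, the baseline that AJGI-type methods accelerate. The step size mu (a commonly cited sufficient choice) and the test problem are illustrative assumptions; this is not the paper's algorithm.

```python
import numpy as np

def gi_sylvester(A, B, C, tol=1e-10, max_iter=10_000):
    """Classical gradient-based iterative (GI) scheme for AX + XB = C
    (Ding and Chen). Baseline only: this is NOT the AJGI algorithm."""
    X = np.zeros_like(C)
    # A commonly cited sufficient step size:
    # mu < 2 / (lmax(A A^T) + lmax(B B^T)); take half of the bound.
    mu = 1.0 / (np.linalg.norm(A, 2)**2 + np.linalg.norm(B, 2)**2)
    for _ in range(max_iter):
        R = C - A @ X - X @ B                  # residual of AX + XB = C
        if np.linalg.norm(R) <= tol * np.linalg.norm(C):
            break
        # Two gradient half-updates (w.r.t. the left and right factors),
        # then averaged, as in the standard GI scheme.
        X1 = X + mu * A.T @ R
        X2 = X + mu * R @ B.T
        X = 0.5 * (X1 + X2)
    return X

# Illustrative test: well-conditioned random problem with known solution.
rng = np.random.default_rng(0)
A = 3 * np.eye(4) + 0.1 * rng.standard_normal((4, 4))
B = 3 * np.eye(4) + 0.1 * rng.standard_normal((4, 4))
X_true = rng.standard_normal((4, 4))
C = A @ X_true + X_true @ B
print(np.linalg.norm(gi_sylvester(A, B, C) - X_true))  # should be small
```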

Cited by 35 publications (28 citation statements) · References 25 publications
“…Fixing , , the subproblem can be obtained by solving the following minimization problem. Recently, various acceleration techniques for iterative algorithms have been proposed [55][56][57][58][59][60][61][62][63]. For this subproblem, to accelerate the convergence of the above iteration, we adopt the acceleration technique of [55], as sketched below.…”
Section: Alternating Minimization Method
confidence: 99%
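The "acceleration technique in [55]" refers to this paper, whose precise update is not quoted in the excerpt. Purely as an illustration of how such accelerations are commonly structured, here is a generic over-relaxation wrapper around a fixed-point map; the map T, the relaxation factor omega, and the tolerances are hypothetical and are not the technique of [55].

```python
import numpy as np

def accelerated_fixed_point(T, x0, omega=1.5, tol=1e-10, max_iter=5000):
    """Generic over-relaxation acceleration of a fixed-point iteration
    x <- T(x). Illustrative only: NOT the specific technique of [55];
    omega is a hypothetical relaxation factor (omega = 1 recovers the
    plain iteration)."""
    x = x0
    for _ in range(max_iter):
        # Extrapolate past the plain update x <- T(x).
        x_acc = x + omega * (T(x) - x)
        if np.linalg.norm(x_acc - x) <= tol:
            return x_acc
        x = x_acc
    return x

# Toy usage: x = 0.5 x + 1 has fixed point 2; the relaxed map contracts
# with factor 0.25 instead of 0.5, so it converges in fewer iterations.
print(accelerated_fixed_point(lambda x: 0.5 * x + 1.0, np.array([0.0])))
```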
“…where X = (vec(X_1); vec(X_2)) and B = (vec(C_1); vec(C_2)). It can be verified that M is rank deficient, the GB method is semi-convergent, and its optimal value is µ_opt = 0.00109, computed by (19). Here we compare the performance of GB, DGB-version 1 and DGB-version 2 in solving (22). We used the following stopping criterion…”
Section: Numerical Experiments
confidence: 99%
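The excerpt stacks the unknowns with vec(·). The identity behind this reformulation is vec(AXB) = (B^T ⊗ A) vec(X), which turns AX + XB = C into a single linear system M vec(X) = vec(C) with M = I ⊗ A + B^T ⊗ I; the sketch below verifies it on a random instance. The coupled two-block system (X_1, X_2, C_1, C_2) of the excerpt is analogous but is not reconstructed here.

```python
import numpy as np

# vec-Kronecker reformulation: AX + XB = C  <=>  M vec(X) = vec(C),
# with M = I (x) A + B^T (x) I.
rng = np.random.default_rng(1)
n, m = 3, 4
A = rng.standard_normal((n, n))
B = rng.standard_normal((m, m))
X = rng.standard_normal((n, m))
C = A @ X + X @ B

M = np.kron(np.eye(m), A) + np.kron(B.T, np.eye(n))
# vec() stacks columns, matching NumPy's column-major ('F') flatten.
vecX = X.flatten(order="F")
vecC = C.flatten(order="F")
print(np.allclose(M @ vecX, vecC))  # True
```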
“…The Tikhonov regularisation (TR) method [21][22][23], the truncated singular value method [24,25], the kernel-function-based regularisation method [26,27], and the l1-norm regularisation method [28] are often used to solve ill-posed problems. When estimating nonlinear parameters, iterative search methods such as the Gauss-Newton method, the steepest gradient method, and the LM method are commonly used [29][30][31][32][33][34]. In these methods, the Jacobian matrix strongly influences the computational efficiency of the algorithm.…”
Section: Introduction
confidence: 99%
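For concreteness, below is a minimal sketch of the Tikhonov-regularised least-squares solve mentioned in the excerpt; the LM method applies the same diagonal shift to the Gauss-Newton normal equations. The regularisation parameter lam and the synthetic ill-conditioned problem are illustrative assumptions, not taken from the cited works.

```python
import numpy as np

def tikhonov_solve(A, b, lam):
    """Tikhonov-regularised least squares:
        minimise ||A x - b||^2 + lam * ||x||^2,
    solved via the shifted normal equations (A^T A + lam I) x = A^T b."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# Ill-conditioned example: rapidly decaying singular values amplify
# noise without regularisation; lam = 1e-3 is an illustrative choice
# (in practice one uses, e.g., the discrepancy principle or L-curve).
rng = np.random.default_rng(2)
U, _ = np.linalg.qr(rng.standard_normal((50, 50)))
V, _ = np.linalg.qr(rng.standard_normal((20, 20)))
s = np.logspace(0, -8, 20)              # decaying singular values
A = U[:, :20] * s @ V.T
x_true = rng.standard_normal(20)
b = A @ x_true + 1e-6 * rng.standard_normal(50)
x = tikhonov_solve(A, b, lam=1e-3)
print(np.linalg.norm(A @ x - b))        # small residual, bounded ||x||
```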