2008
DOI: 10.1007/s00211-008-0143-0

Approximate iterations for structured matrices

Abstract: Important matrix-valued functions f(A) are, e.g., the inverse A⁻¹, the square root √A and the sign function. Their evaluation for large matrices arising from PDEs is not an easy task and requires techniques that exploit appropriate structures of the matrices A and f(A) (often f(A) possesses this structure only approximately). However, intermediate matrices arising during the evaluation may lose the structure of the initial matrix, which would make the computations inefficient and even infeasible. However, the …
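
The abstract's theme, an iteration whose intermediate results are truncated back to a cheap format, can be sketched with the Newton–Schulz iteration for A⁻¹. Here a plain SVD cut-off is a deliberately simple stand-in for the structural (e.g. hierarchical-matrix) truncation of the paper; the function names, tolerance, and test matrix are illustrative choices, not taken from the paper.

```python
import numpy as np

def truncate(X, eps):
    """Stand-in for a structure-preserving truncation: drop all singular
    values below eps * sigma_max (an H-matrix code would truncate blockwise)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    keep = s > eps * s[0]
    return (U[:, keep] * s[keep]) @ Vt[keep]

def approx_newton_inverse(A, eps=1e-10, iters=25):
    """Approximate iteration X_{k+1} = truncate(X_k (2I - A X_k)) -> A^{-1}.
    The start X_0 = A^T / (||A||_1 ||A||_inf) guarantees convergence of the
    exact Newton-Schulz iteration for any nonsingular A."""
    n = A.shape[0]
    I = np.eye(n)
    X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    for _ in range(iters):
        X = truncate(X @ (2 * I - A @ X), eps)
    return X

np.random.seed(0)
n = 30
B = np.random.randn(n, n)
A = np.eye(n) + B @ B.T / n          # well-conditioned SPD test matrix
X = approx_newton_inverse(A)
residual = np.linalg.norm(A @ X - np.eye(n))
```

With a tight tolerance the truncation is harmless and the iteration behaves like the exact one; loosening `eps` trades accuracy of the limit for cheaper iterates, which is the trade-off the paper analyzes.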

Cited by 83 publications (67 citation statements)
References 38 publications
“…If one makes sure that ‖z_k − App_{ε_k}‖ ≤ c ‖z_k − x*‖ and if x_k → x* (k → ∞) at least quadratically, we have that y_k → x* with the same rate of convergence as x_k → x*. For a complete convergence analysis of inexact iterations we refer to [10, Hackbusch, Khoromskij and Tyrtyshnikov]. The computation of the pointwise inverse of a tensor u ∈ T with u_i ≠ 0 for all i ∈ {1, .…”
Section: Inexact Iterations
confidence: 99%
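
The quoted condition — truncate each iterate no harder than a constant times its current distance to the limit — can be illustrated for the pointwise inverse mentioned at the end of the excerpt. In the sketch below a full numpy array plays the role of the tensor u ∈ T, and rounding entries to accuracy ε_k is a deliberately crude stand-in for the low-rank recompression operator App_{ε_k}; the coupling of ε_k to the current step size is the illustrative point, and all names are hypothetical.

```python
import numpy as np

def app(y, eps):
    """Crude stand-in for App_eps: round entries to absolute accuracy eps
    (a tensor code would instead recompress y to lower rank within eps)."""
    return eps * np.round(y / eps) if eps > 0 else y

def pointwise_inverse(u, c=1e-3, iters=40):
    """Inexact Newton iteration y <- App_{eps_k}( y * (2 - u * y) ) for 1/u,
    entrywise, with eps_k proportional to the current update size so the
    truncation error stays a fixed fraction of the remaining error."""
    y = np.full_like(u, 1.0 / np.max(np.abs(u)))   # safe start for u > 0
    for _ in range(iters):
        y_new = y * (2.0 - u * y)
        eps_k = c * np.max(np.abs(y_new - y))      # tie tolerance to progress
        y = app(y_new, eps_k)
    return y

np.random.seed(0)
u = 1.0 + np.random.rand(8, 8, 8)    # entries in [1, 2): u_i != 0 everywhere
y = pointwise_inverse(u)
err = np.max(np.abs(u * y - 1.0))
```

Because ε_k shrinks with the update, the perturbed iteration keeps the quadratic convergence of the exact one, matching the statement in the quote.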
“…(Fig. 3: the Tucker vs. canonical approximation of the 3D Newton/Yukawa kernels; Fig. 4: convergence history for the Tucker model applied to f_{1,κ}, f_{2,κ}, κ ∈ [1, 15].) …which can be satisfied only on very coarse grids (recall that for non-oscillating kernels we have r = O(log n)). However, for the case of oscillating kernels we may have κ = Cn, i.e., the canonical decomposition would be preferable.…”
Section: Complexity Issues and Numerics
confidence: 99%
“…Figure 3 represents the convergence history for the best orthogonal Tucker [23] vs. canonical [2] approximations of the Newton/Yukawa potentials on an n × n × n grid for n = 2048. Figure 4 shows the convergence history for the Tucker model applied to f_{1,κ}, f_{2,κ} depending on κ ∈ [1, 15] and on termination criteria with fixed values ε₁ > 0. Convergence for the Tucker and canonical models applied to f_{2,κ}, κ ∈ [1, 15], is presented in Figures 5 and 6, respectively. Figure 5 (right) clearly demonstrates exponential convergence in the Tucker rank r in the interval r ≥ r₀ = κ (supports the theory).…”
Section: Complexity Issues and Numerics
confidence: 99%
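
The Tucker approximation discussed in these excerpts can be imitated at toy scale. The sketch below samples the 3D Newton kernel 1/‖x‖ on a small grid away from the singularity and builds a rank-(r, r, r) Tucker approximation by truncated HOSVD; the grid size, interval, and ranks are arbitrary, and HOSVD is only a quasi-optimal substitute for the best orthogonal Tucker approximation used in the cited work.

```python
import numpy as np

n = 20
grid = np.linspace(1.0, 2.0, n)                 # stay away from the singularity
Xg, Yg, Zg = np.meshgrid(grid, grid, grid, indexing="ij")
T = 1.0 / np.sqrt(Xg**2 + Yg**2 + Zg**2)        # samples of the Newton kernel

def hosvd_truncate(T, r):
    """Rank-(r, r, r) Tucker approximation via truncated HOSVD: mode-m factors
    from the top-r left singular vectors of each unfolding, then the core and
    the reconstruction by mode products."""
    Us = []
    for m in range(3):
        unfold = np.moveaxis(T, m, 0).reshape(T.shape[m], -1)  # mode-m unfolding
        U, _, _ = np.linalg.svd(unfold, full_matrices=False)
        Us.append(U[:, :r])
    core = np.einsum("ijk,ia,jb,kc->abc", T, Us[0], Us[1], Us[2])
    return np.einsum("abc,ia,jb,kc->ijk", core, Us[0], Us[1], Us[2])

errs = {r: np.linalg.norm(T - hosvd_truncate(T, r)) / np.linalg.norm(T)
        for r in (1, 2, 4, 6)}                  # expect rapid decay in r
```

For this smooth (non-oscillating) kernel the error decays rapidly in r, consistent with the r = O(log n) behavior recalled in the excerpt.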
“…To overcome this issue, efficient recompression methods have been developed in [1,2,5,8,9] to approximate a given sum of elementary tensors by low-rank tensors. Moreover, the convergence of such approximate iterations is known; see [16] for an analysis. The subject of this article stands in contrast to this approach.…”
Section: Introduction
confidence: 99%
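
The recompression step mentioned in this excerpt — replacing a redundant sum of elementary tensors by one of lower rank — can be sketched for the simplest case d = 2, where an elementary tensor is an outer product a_k b_kᵀ. QR factorizations of the two factor matrices reduce the problem to an SVD of a small R × R core; the function name and tolerance below are illustrative, not from any of the cited recompression papers.

```python
import numpy as np

def recompress(A_fac, B_fac, eps):
    """Recompress sum_k a_k b_k^T (columns of A_fac, B_fac, canonical rank R)
    to the smallest rank reproducing it up to relative accuracy eps:
    QR the factors, SVD the small R x R core, truncate."""
    Qa, Ra = np.linalg.qr(A_fac)
    Qb, Rb = np.linalg.qr(B_fac)
    U, s, Vt = np.linalg.svd(Ra @ Rb.T)
    r = int(np.sum(s > eps * s[0]))             # numerical rank at tolerance eps
    return Qa @ (U[:, :r] * s[:r]), Qb @ Vt[:r].T

np.random.seed(1)
n, r_true, R = 40, 3, 12
G = np.random.randn(n, r_true)
C = np.random.randn(r_true, R)
A_fac = G @ C                        # 12 elementary terms ...
B_fac = np.random.randn(n, R)        # ... but the sum has rank only 3
A_new, B_new = recompress(A_fac, B_fac, 1e-10)
```

The cost is dominated by the two QRs, O(nR²), so the redundant representation never has to be formed as a full n × n matrix; this is the d = 2 analogue of the tensor recompression the excerpt refers to.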