2009
DOI: 10.1016/j.enganabound.2008.05.006

On fast matrix–vector multiplication in wavelet Galerkin BEM

Cited by 15 publications (16 citation statements) | References 20 publications
“…We have shown that, by using wavelets with QVMs and a-posteriori compression, the memory requirement of the WGBEM can be reduced by a factor greater than six. The resulting memory requirement is typically less than that of the FMM obtained by transformation from the WGBEM [21]. Moreover, after the compressions the complexity of the method scales almost linearly.…”
Section: Results
confidence: 86%
“…Here we compare the memory allocation of both methods. This comparison is based on the fact that the two methods can be transformed into each other [21]. In the WGBEM most memory is used to store the NS-form, while in the FMM most memory goes into storing the direct interactions of neighboring cubes at the finest level.…”
Section: Numerical Examples
confidence: 99%
“…However, some of the nonzero entries of the sparse matrix still have values small enough that discarding them does not affect the order of convergence of the wavelet Galerkin BEM. By setting to zero those entries for which the difference of the wavelet levels is sufficiently large, a sparser matrix can be obtained that contains only O(N) nonzero entries [19,22,28,29].…”
Section: Matrix Compression
confidence: 99%
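The level-difference truncation mentioned in this excerpt can be illustrated with a small sketch. The following Python snippet is only a minimal illustration of that rule under stated assumptions, not the compression scheme of the cited papers [19,22,28,29], whose criteria also involve the distance between wavelet supports; the function name, its arguments, and the dense input matrix are hypothetical.

```python
import numpy as np

def compress_by_level_difference(A, row_levels, col_levels, max_level_diff):
    """Illustrative a-priori truncation: zero out entries of the wavelet
    Galerkin matrix A whose row/column wavelet levels differ by more than
    max_level_diff, and return the sparser copy.

    A              : (n, n) matrix in the wavelet basis (hypothetical dense input)
    row_levels     : wavelet level of each row basis function, length n
    col_levels     : wavelet level of each column basis function, length n
    max_level_diff : entries with |level_i - level_j| > max_level_diff are dropped
    """
    rows = np.asarray(row_levels)[:, None]   # shape (n, 1)
    cols = np.asarray(col_levels)[None, :]   # shape (1, n)
    keep = np.abs(rows - cols) <= max_level_diff
    return np.where(keep, A, 0.0)
```

In an actual wavelet Galerkin BEM code the retained pattern would be decided before assembly, so that only the kept entries are ever computed and stored (e.g. in a sparse or NS-form data structure), rather than thresholding a dense matrix after the fact as in this sketch.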
“…The relation between the WGBEM and methods based on low-rank approximation, e.g. the FMM, is discussed in [32].…”
Section: Introduction
confidence: 99%