Compressed convolution
2014 · DOI: 10.1051/0004-6361/201322177

Abstract: We introduce the concept of compressed convolution, a technique to convolve a given data set with a large number of non-orthogonal kernels. In typical applications our technique drastically reduces the effective number of computations. The new method is applicable to convolutions with symmetric and asymmetric kernels and can be easily controlled for an optimal trade-off between speed and accuracy. It is based on linear compression of the collection of kernels into a small number of coefficients in an optimal e…
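The abstract suggests a simple structure: compress the kernel collection linearly into a few basis kernels, convolve once per basis kernel, and recombine. Below is a minimal 1-D NumPy sketch of that idea, using a truncated SVD as the (assumed) optimal linear compression; the function name and all parameters are illustrative, and the paper itself targets convolutions on the sphere rather than this toy setting.

```python
import numpy as np

def compressed_convolve(data, kernels, n_basis):
    """Approximate the convolution of `data` with every row of `kernels`
    (shape: n_kernels x L) using only `n_basis` actual convolutions.

    Illustrative sketch: the kernel collection is compressed with a
    truncated SVD; each requested convolution is then recovered as a
    linear combination of the few basis-kernel convolutions.
    """
    U, s, Vt = np.linalg.svd(kernels, full_matrices=False)
    coeff = U[:, :n_basis] * s[:n_basis]   # (n_kernels, n_basis) mixing weights
    basis = Vt[:n_basis]                   # (n_basis, L) compressed kernels

    # One convolution per basis kernel instead of one per original kernel.
    conv_basis = np.array([np.convolve(data, b, mode="same") for b in basis])
    return coeff @ conv_basis              # (n_kernels, len(data))

# Example: 500 Gaussian kernels of slowly varying width, approximated
# with 8 basis convolutions.
x = np.linspace(-5, 5, 101)
widths = np.linspace(0.5, 2.0, 500)
kernels = np.exp(-0.5 * (x / widths[:, None]) ** 2)
kernels /= kernels.sum(axis=1, keepdims=True)
data = np.random.default_rng(0).standard_normal(4096)
approx = compressed_convolve(data, kernels, n_basis=8)
```

In this sketch, the speed/accuracy trade-off mentioned in the abstract corresponds to the choice of n_basis: the decay of the SVD spectrum of the kernel stack indicates how many basis kernels are needed for a given accuracy.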

Cited by 5 publications (7 citation statements) · References 21 publications

Citation statements
“…(2.2) given the latest realization of t, and finally transform the result back to the original basis. It can be shown that the signal reconstruction s converges exponentially and unconditionally to the Wiener filter solution s_WF (Elsner & Wandelt 2012a). Based on a comparison to more standard conjugate gradient solvers, we find the final map to be accurate to about 1 part in 10^5, depending on the adopted stopping criterion.…”
Section: Methods
confidence: 99%
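The iteration this statement describes alternates between a pixel-basis update of the messenger field t and a harmonic-basis update of the signal s. A minimal 1-D sketch, with an FFT standing in for the spherical harmonic transform and all names assumed for illustration (the scheme follows Elsner & Wandelt):

```python
import numpy as np

def messenger_wiener_filter(d, noise_var, signal_power, n_iter=200):
    """Messenger-field Wiener filter, 1-D toy version.

    d            : data vector (pixel basis)
    noise_var    : per-pixel noise variance (diagonal N)
    signal_power : per-mode signal variance (diagonal S in Fourier space)

    Sketch under the stated assumptions; the FFT stands in for the
    spherical harmonic transform used on real CMB maps.
    """
    tau = noise_var.min()        # T = tau*I is diagonal in *both* bases
    nbar = noise_var - tau       # N - T, diagonal in the pixel basis
    s = np.zeros_like(d)
    for _ in range(n_iter):
        # Pixel basis: messenger field given the current signal,
        # t = (Nbar^-1 + T^-1)^-1 (Nbar^-1 d + T^-1 s), written division-safely.
        t = (tau * d + nbar * s) / (tau + nbar)
        # Fourier basis: signal given the messenger field,
        # s = (S^-1 + T^-1)^-1 T^-1 t.
        sk = np.fft.fft(t) * signal_power / (signal_power + tau)
        s = np.fft.ifft(sk).real
    return s  # converges to the Wiener filter solution s_WF
```

The key trick is that T = τI with τ = min(N_ii) is diagonal in every basis, so each half-step only inverts matrices that are diagonal in the basis where that step is performed.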
“…In these cases, numerical implementations of the Wiener filter have traditionally relied on Krylov space methods, such as conjugate gradients, to solve the high-dimensional systems of equations (see e.g., Kitaura & Enßlin 2008 and references therein). Recently a particularly elegant approach to solving the Wiener filter equation was proposed, where an additional messenger field is introduced to mediate between two different bases in which the signal and noise covariances are respectively sparse, bypassing the issue of directly inverting the high-dimensional matrices (Elsner & Wandelt 2012; Jasche & Lavaux 2015). We adopt this approach in §3.2 and apply it to simulated data in §5.…”
Section: Map Sampling
confidence: 99%
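For contrast with the messenger-field route, the "traditional" Krylov approach mentioned here solves (S⁻¹ + N⁻¹) s_WF = N⁻¹ d directly with conjugate gradients. A hedged 1-D sketch using SciPy's generic CG solver, again with an FFT in place of a spherical harmonic transform and illustrative variable names:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def wiener_filter_cg(d, noise_var, signal_power):
    """Solve (S^-1 + N^-1) s = N^-1 d with conjugate gradients.

    S is diagonal in Fourier space, N in pixel space, so the system
    matrix is only available as a matrix-vector product.
    """
    n = d.size

    def matvec(s):
        # S^-1 s, applied in Fourier space where S is diagonal...
        sinv_s = np.fft.ifft(np.fft.fft(s) / signal_power).real
        # ...plus N^-1 s, applied in pixel space where N is diagonal.
        return sinv_s + s / noise_var

    A = LinearOperator((n, n), matvec=matvec, dtype=float)
    s_wf, info = cg(A, d / noise_var, maxiter=500)
    assert info == 0, "CG did not converge"
    return s_wf
```

The conditioning of this system is what makes the Krylov route expensive on realistic data, which is exactly the difficulty the messenger field sidesteps.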
“…Fortunately, direct inversion of (C⁻¹ + N⁻¹) can be avoided by introducing an auxiliary Gaussian distributed messenger field t that mediates between the bases in which C and N are respectively sparse. This elegant idea was introduced by Elsner & Wandelt (2012) and further developed by Jasche & Lavaux (2015); the approach taken here is close to Jasche & Lavaux (2015).…”
Section: Messenger Field
confidence: 99%
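The fragments at the end of this quote (T, P(t|s, T)) are extraction residue of the messenger-field conditionals, which could not be recovered verbatim. For orientation, a sketch of those relations in the quote's notation (C for the signal covariance, N for the noise covariance), as given by Elsner & Wandelt:

```latex
% T = \tau I with \tau = \min_i N_{ii} and \bar{N} = N - T, so that T is
% diagonal in both bases. The augmented posterior factorizes as
%   P(s, t \mid d) \propto G(d - t, \bar{N}) \, P(t \mid s, T) \, G(s, C),
% with P(t \mid s, T) = G(t - s, T), and is iterated via
\begin{align}
  t &= \left(\bar{N}^{-1} + T^{-1}\right)^{-1}
       \left(\bar{N}^{-1} d + T^{-1} s\right) , & \text{(pixel basis)} \\
  s &= \left(C^{-1} + T^{-1}\right)^{-1} T^{-1} t . & \text{(harmonic basis)}
\end{align}
```

In the sampling application of Jasche & Lavaux (2015), the same conditionals are used as Gaussian distributions to draw from, with covariances (N̄⁻¹ + T⁻¹)⁻¹ and (C⁻¹ + T⁻¹)⁻¹, rather than as fixed-point updates.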
“…To make the KSW estimator optimal for a non-uniform sky coverage, it is necessary to perform an inverse covariance weighting with the non-diagonal covariance matrix. This is a computationally challenging problem (Smith et al. 2009; Elsner & Wandelt 2012). It was noted in Planck Collaboration XXIV (2014) that one can also achieve excellent results by assuming a diagonal covariance matrix Ĉ_ℓ = C_ℓ + N_ℓ, where N_ℓ assumes homogeneous noise, and by using a diffusive inpainting on the masked areas.…”
Section: KSW Estimator
confidence: 99%
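The "diffusive inpainting" mentioned here is, in its simplest form, iterated neighbour averaging of the masked pixels (Jacobi relaxation of the Laplace equation). A flat-sky toy sketch, assuming a 2-D NumPy map where the real analysis operates on HEALPix spheres; names are illustrative:

```python
import numpy as np

def diffusive_inpaint(m, mask, n_iter=2000):
    """Fill masked pixels by repeatedly replacing them with the mean of
    their four nearest neighbours, letting the unmasked boundary values
    diffuse inward. `mask` is True where pixels are missing.
    """
    filled = np.where(mask, 0.0, m)
    for _ in range(n_iter):
        nb = 0.25 * (np.roll(filled, 1, axis=0) + np.roll(filled, -1, axis=0)
                     + np.roll(filled, 1, axis=1) + np.roll(filled, -1, axis=1))
        filled = np.where(mask, nb, filled)  # only masked pixels are updated
    return filled
```

After inpainting, the map can be analysed as if it had full sky coverage, which is the step that makes the simple diagonal Ĉ_ℓ weighting viable in the quoted approach.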