2015
DOI: 10.1137/140958529
Fast Multidimensional Convolution in Low-Rank Tensor Formats via Cross Approximation

Abstract: We propose a new cross-conv algorithm for the approximate computation of convolution in different low-rank tensor formats (tensor train, Tucker, hierarchical Tucker). It has better complexity with respect to the tensor rank than previous approaches, and a high potential impact in various applications. The key idea is to apply cross approximation in the "frequency domain", where convolution becomes a simple elementwise product. We illustrate the efficiency of our algorithm by computing the th…
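The frequency-domain idea in the abstract can be illustrated in plain NumPy. This is a sketch of the general FFT-convolution principle (convolution becomes an elementwise product after a Fourier transform), not the paper's low-rank tensor-format algorithm:

```python
import numpy as np

def fft_convolve(a, b):
    """Full linear convolution via the frequency domain:
    zero-pad both signals to length m + n - 1, transform,
    multiply elementwise, and transform back."""
    n = len(a) + len(b) - 1
    fa = np.fft.fft(a, n)  # zero-padded FFT of a
    fb = np.fft.fft(b, n)  # zero-padded FFT of b
    return np.real(np.fft.ifft(fa * fb))

a = np.array([1.0, 2.0, 3.0])
b = np.array([0.5, 1.0])
# matches direct convolution up to roundoff
print(np.allclose(fft_convolve(a, b), np.convolve(a, b)))
```

The same principle underlies the paper's approach: in the frequency domain the multidimensional convolution reduces to an elementwise (Hadamard) product, which cross approximation can then compress directly in a low-rank format.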

Cited by 43 publications (44 citation statements)
References 61 publications
“…For the general case, it has been proven that when the intersection submatrix W is of maximum volume, the matrix cross-approximation is close to the optimal SVD solution. The problem of finding a submatrix with maximum volume has exponential complexity; however, suboptimal matrices can be found using fast greedy algorithms [4,144,179,222].…”
Section: Matrix/Tensor Cross-Approximation (MCA/TCA)
confidence: 99%
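The greedy alternative to an exact maximum-volume search that the statement above mentions can be sketched with full-pivot adaptive cross approximation, which picks one row/column cross per step at the largest residual entry (a simple suboptimal surrogate for maxvol; `greedy_cross` is an illustrative name, not an API from the cited works):

```python
import numpy as np

def greedy_cross(A, r):
    """Rank-r cross approximation by greedy full pivoting:
    at each step, pick the largest-magnitude residual entry (i, j)
    and subtract the rank-1 cross through row i and column j."""
    R = A.astype(float).copy()      # residual
    approx = np.zeros_like(R)
    for _ in range(r):
        i, j = np.unravel_index(np.argmax(np.abs(R)), R.shape)
        if R[i, j] == 0:            # residual vanished early
            break
        update = np.outer(R[:, j], R[i, :]) / R[i, j]
        approx += update
        R -= update
    return approx

rng = np.random.default_rng(0)
# an exactly rank-3 matrix: 3 greedy cross steps recover it
A = rng.standard_normal((50, 3)) @ rng.standard_normal((3, 60))
err = np.linalg.norm(A - greedy_cross(A, 3)) / np.linalg.norm(A)
print(err < 1e-8)
```

For an exactly rank-r matrix, r steps of full-pivot cross approximation reproduce it up to roundoff; for approximately low-rank matrices the pivot choice controls how close the result is to the SVD optimum, which is where the maximum-volume criterion enters.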
“…For the canonical decomposition, the power method with shifts was generalized in [33,34] and used in the RRBPM method. The preconditioned inverse iteration (PINVIT) for tensor formats was considered in [22,23,35]. The inverse iteration used in this paper differs from PINVIT, which is basically preconditioned steepest descent.…”
Section: Related Work
confidence: 99%
“…Since the algorithm is based on SVD, it is stable and the low multilinear rank approximation always exists. Tucker decomposition can be constructed by using only some rows, columns and fibers of A with a cross-Tucker approximation algorithm [39], [40] with linear complexity O(n·r₁r₂r₃), versus the O(n³) complexity of HOSVD (n denotes the tensor's linear size). Such algorithms are based on well-known 2D cross approximation methods [54]–[57].…”
Section: A Higher Order SVD
confidence: 99%
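The HOSVD whose cubic cost the statement above compares against can be sketched for a dense 3D tensor; the cost is driven by SVDs of the n × n² mode unfoldings (a minimal illustration, not the cross-Tucker algorithm of [39], [40]):

```python
import numpy as np

def hosvd(T, ranks):
    """Truncated HOSVD: factor k holds the leading left singular
    vectors of the mode-k unfolding; the core is T contracted with
    the transposed factors."""
    factors = []
    for k, r in enumerate(ranks):
        unfold = np.moveaxis(T, k, 0).reshape(T.shape[k], -1)
        U, _, _ = np.linalg.svd(unfold, full_matrices=False)
        factors.append(U[:, :r])
    core = np.einsum('ijk,ia,jb,kc->abc', T, *factors)
    return core, factors

rng = np.random.default_rng(1)
# build a tensor of exact multilinear rank (2, 2, 2)
G = rng.standard_normal((2, 2, 2))
Us = [rng.standard_normal((10, 2)) for _ in range(3)]
T = np.einsum('abc,ia,jb,kc->ijk', G, *Us)

core, fac = hosvd(T, (2, 2, 2))
Trec = np.einsum('abc,ia,jb,kc->ijk', core, *fac)
print(np.allclose(T, Trec))  # exact recovery at the true ranks
```

Cross-Tucker methods reach the same factorization while touching only O(n·r₁r₂r₃) selected fibers of the tensor instead of forming and decomposing full unfoldings.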
“…Specifically, Tucker decomposition provides an optimal fit and a stable approximation for three-dimensional (3D) tensors [33]. It can be implemented either with the well-known higher-order singular value decomposition (HOSVD) [34], which has been successfully applied in multidimensional data analysis in the past years [35]–[38], or with cross approximation-based techniques [39], [40]. Another interesting decomposition is the canonical polyadic (CP) model [41]–[44], which gives the most compact representation of the initial array.…”
Section: Introduction
confidence: 99%