2014
DOI: 10.1103/physreve.90.033315
Faster identification of optimal contraction sequences for tensor networks

Abstract: The efficient evaluation of tensor expressions involving sums over multiple indices is of significant importance to many fields of research, including quantum many-body physics, loop quantum gravity, and quantum chemistry. The computational cost of evaluating an expression may depend strongly on the order in which the index sums are evaluated, and determination of the operation-minimizing contraction sequence for a single tensor network (single term, in quantum chemistry) is known to be NP-hard. The current pr…
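As a minimal illustration of the abstract's point that the evaluation order matters (a sketch of my own, not an example taken from the paper): even for a simple matrix-matrix-vector chain, the two possible contraction orders differ in cost by roughly a factor of n.

```python
import numpy as np

n = 1000
A = np.random.rand(n, n)
B = np.random.rand(n, n)
v = np.random.rand(n)

# (A @ B) @ v forms an n x n intermediate first: roughly 2*n**3 floating-point operations.
slow = (A @ B) @ v

# A @ (B @ v) only ever forms length-n vectors: roughly 4*n**2 operations.
fast = A @ (B @ v)

assert np.allclose(slow, fast)  # same tensor expression, very different cost
```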

Cited by 59 publications (77 citation statements) | References 65 publications (106 reference statements)
“…Whilst algorithms exist which can speed up tensor network contractions by optimising the bubbling used [3][4][5], as discussed above, the underlying computational problem is NP-complete [6,7]. Even ignoring the specific bubbling used, the complexity of the overall contraction procedure can also be shown to be prohibitive in general. Consider a network made from the binary tensors e and n. The value of e is 1 if and only if all indices are identical, and zero otherwise, whilst n has value 1 if and only if all legs differ, and zero otherwise.…”
Section: Computational Complexity (mentioning, confidence: 99%)
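For concreteness, the two binary tensors described in this excerpt can be constructed explicitly. The sketch below is my own (the function names are chosen for illustration; e and n are defined in the citing work, not in the paper under review):

```python
import itertools
import numpy as np

def e_tensor(rank, dim):
    """Entry is 1 iff all `rank` indices take the same value (a generalized delta), else 0."""
    t = np.zeros((dim,) * rank, dtype=np.int8)
    for i in range(dim):
        t[(i,) * rank] = 1
    return t

def n_tensor(rank, dim):
    """Entry is 1 iff all `rank` indices are pairwise distinct, else 0."""
    t = np.zeros((dim,) * rank, dtype=np.int8)
    for idx in itertools.permutations(range(dim), rank):  # every tuple of distinct indices
        t[idx] = 1
    return t

# Example: for rank 3 and dimension 2 there is no all-distinct assignment, so n is identically zero.
print(e_tensor(3, 2).sum(), n_tensor(3, 2).sum())  # -> 2 0
```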
“…Pfeifer et al. [3] provide code which allows for finding the optimal bubbling order for networks of up to 30-40 tensors. This code interfaces with that provided in [4] and [5], forming a complete tensor network package.…”
Section: Bubbling (mentioning, confidence: 99%)
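The code referenced in this excerpt is the authors' MATLAB netcon package; as a rough Python analogue of the same workflow (determine a good contraction sequence first, then evaluate with it), NumPy's einsum_path can exhaustively search pairwise contraction orders for small networks. This is a sketch of analogous functionality under assumed shapes, not the authors' implementation:

```python
import numpy as np

chi = 8  # assumed bond dimension, for illustration only
A, B, C, D = (np.random.rand(chi, chi, chi) for _ in range(4))

# A ring of four rank-3 tensors; 'optimal' exhaustively searches pairwise contraction sequences.
expr = 'abc,cde,efg,gha->bdfh'
path, report = np.einsum_path(expr, A, B, C, D, optimize='optimal')
print(report)  # lists the chosen sequence and its estimated FLOP count

# Reuse the precomputed sequence for the actual contraction.
Y = np.einsum(expr, A, B, C, D, optimize=path)
```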
“…The graph-based notation became standard in the tensor network literature a decade ago [15]. Following Markov and Shi, several groups developed highly efficient algorithms for quantum circuit simulation based on this representation; see Refs. 6, 11, 16, and 17 for more details. The previous margin of 50 qubits was lifted, as demonstrated by multiple authors.…”
Section: Related Work (mentioning, confidence: 99%)
“…As we mentioned, the cost of contraction of the tensor network without Ω and Q† is O(χ^6). The order of contractions is important to reduce the computational cost [23]. For example, the computational cost of Y = S^[1](S^[2](S^[3](S^[4]Ω))) scales as O(χ^5), but Y = (((S^[1]S^[2])S^[3])S^[4])Ω scales as O(χ^6).…”
Section: B. Randomized Algorithm for SVD (mentioning, confidence: 99%)
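The χ^5 versus χ^6 figures in that excerpt depend on the index structure of that particular network, which is not reproduced here. As a generic sketch with shapes I have assumed (not those of the quoted randomized-SVD algorithm), np.einsum_path can also report the cost of an explicitly chosen contraction sequence, which makes this kind of order comparison easy to check:

```python
import numpy as np

chi = 100
S1, S2, S3, S4 = (np.random.rand(chi, chi) for _ in range(4))  # stand-ins, shapes assumed
Omega = np.random.rand(chi, 10)                                 # thin right-hand factor, assumed

expr = 'ab,bc,cd,de,ef->af'
ops = (S1, S2, S3, S4, Omega)

# (((S1 S2) S3) S4) Omega: large chi x chi intermediates, matrix-matrix products dominate.
# (Intermediates are appended to the end of the operand list, hence the index pattern.)
outside_in = ['einsum_path', (0, 1), (0, 3), (0, 2), (0, 1)]
# S1 (S2 (S3 (S4 Omega))): every intermediate stays chi x 10, far fewer operations.
inside_out = ['einsum_path', (3, 4), (2, 3), (1, 2), (0, 1)]

for name, path in (('outside-in', outside_in), ('inside-out', inside_out)):
    _, report = np.einsum_path(expr, *ops, optimize=path)
    print(name)
    print(report)  # the reported FLOP count is far smaller for the inside-out order
```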