2019
DOI: 10.1088/1751-8121/ab434b
RTNI—A symbolic integrator for Haar-random tensor networks

Abstract: We provide a computer algebra package called Random Tensor Network Integrator (RTNI). It allows one to compute averages of tensor networks containing multiple Haar-distributed random unitary matrices and deterministic symbolic tensors. Such tensor networks are represented as multigraphs, with vertices corresponding to tensors or random unitaries and edges corresponding to tensor contractions. Input and output spaces of random unitaries may be subdivided into arbitrary tensor factors, with dimensions treated symbol…
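As an illustration of the kind of average RTNI evaluates symbolically, the simplest case can be checked by Monte Carlo: for a Haar-random unitary U on a d-dimensional space, E[U A U†] = (Tr A / d) · I. A minimal numerical sketch (the sampling helper and all names are illustrative, not part of the RTNI package):

```python
import numpy as np

def haar_unitary(d, rng):
    """Sample a Haar-random unitary: QR of a complex Ginibre matrix,
    with the column phases fixed so the distribution is exactly Haar."""
    z = (rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

rng = np.random.default_rng(0)
d = 4
A = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))

n = 20000
avg = np.zeros((d, d), complex)
for _ in range(n):
    u = haar_unitary(d, rng)
    avg += u @ A @ u.conj().T
avg /= n

# First-moment prediction: E[U A U†] = (Tr A / d) * I
expected = np.trace(A) / d * np.eye(d)
```

RTNI produces the right-hand side symbolically (with d kept as a symbol); the Monte Carlo average above should agree with it up to sampling noise of order 1/√n.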

Cited by 22 publications (16 citation statements). References 37 publications (45 reference statements).
“…Finally, let us remark that while we employ a sub-space search algorithm, in the presence of barren plateaus all optimization methods will (on average) fail unless the algorithm has a precision (i.e., number of shots) that grows exponentially with n. The latter is due to the fact that an exponentially vanishing gradient implies that on average the cost function landscape will essentially be flat, with the slope of the order of O(1/2^n). Hence, unless one has a precision that can detect such small changes in the cost value, one will not be able to determine a cost minimization direction with gradient-based, or even with black-box optimizers such as the Nelder-Mead method [58][59][60][61].…”
Section: Methods
confidence: 99%
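The precision argument in this excerpt is easy to quantify: resolving a gradient of size ~1/2^n against shot noise of size ~1/√N requires a shot count N growing like 4^n. A back-of-the-envelope sketch (the signal-to-noise factor of 3 is an illustrative assumption):

```python
import math

def shots_needed(n_qubits, signal_to_noise=3.0):
    """Shots N such that the shot noise 1/sqrt(N) sits a factor
    `signal_to_noise` below a gradient of magnitude 2**-n_qubits."""
    gradient = 2.0 ** (-n_qubits)
    # Require signal_to_noise / sqrt(N) <= gradient  =>  N >= (snr / gradient)**2
    return math.ceil((signal_to_noise / gradient) ** 2)

for n in (4, 8, 12):
    print(n, shots_needed(n))  # grows by a factor of 4**4 = 256 per step
```

Each additional qubit multiplies the required shot budget by 4, which is the exponential-precision barrier the excerpt describes.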
“…These involve the fourth moment of the Haar measure, which results in cumbersome analytical expressions of hundreds of coefficients coming from the Weingarten calculus [48]. We deal with these analytically with the recently introduced RTNI package [60]. In contrast, the calculations with the same model of e.g., [7] only involve second moments, which can be done by hand.…”
Section: Random Matrix Theory Analysis
confidence: 99%
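The distinction drawn in this excerpt can be made concrete with the moments of a single matrix entry of a Haar unitary: the second moment E[|U_00|²] = 1/d follows from unitarity and invariance alone ("by hand"), while the fourth moment E[|U_00|⁴] = 2/(d(d+1)) already involves the degree-2 Weingarten functions. A Monte Carlo check of both (the sampling helper is illustrative):

```python
import numpy as np

def haar_unitary(d, rng):
    """Haar sample via QR of a complex Ginibre matrix, with phase correction."""
    z = (rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

rng = np.random.default_rng(1)
d = 3
x = np.array([abs(haar_unitary(d, rng)[0, 0]) ** 2 for _ in range(50000)])

m2 = x.mean()         # second moment E[|U_00|^2]; prediction: 1/d
m4 = (x ** 2).mean()  # fourth moment E[|U_00|^4]; prediction: 2/(d(d+1))
print(m2, 1 / d)
print(m4, 2 / (d * (d + 1)))
```

For a full tensor network with four U's the number of such Weingarten terms explodes, which is exactly the bookkeeping RTNI automates.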
“…Proof. To calculate the integral, we employ the Weingarten formula [26][27][28], which for the relevant case reads:…”
Section: And Arbitrary Operators A ∈ B(H)
confidence: 99%
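The simplest instance of the Weingarten formula, for a single U and a single U†, reads ∫ U_{ij} U*_{kl} dU = δ_{ik} δ_{jl} / d. A numerical sketch verifying the full index structure (the sampling helper is illustrative, not from the cited proof):

```python
import numpy as np

def haar_unitary(d, rng):
    """Haar sample via QR of a complex Ginibre matrix, with phase correction."""
    z = (rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

rng = np.random.default_rng(2)
d, n = 3, 30000
avg = np.zeros((d, d, d, d), complex)
for _ in range(n):
    u = haar_unitary(d, rng)
    # Accumulate E[U_{ij} * conj(U)_{kl}] over all index combinations at once.
    avg += np.einsum('ij,kl->ijkl', u, u.conj())
avg /= n

# Weingarten prediction for one U, one U†: delta_{ik} delta_{jl} / d
expected = np.einsum('ik,jl->ijkl', np.eye(d), np.eye(d)) / d
```

Higher moments replace the single delta pair by a sum over permutation pairs weighted by Weingarten functions, which is the general formula the excerpt invokes.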