2015
DOI: 10.4171/aihpd/14

Tensor models from the viewpoint of matrix models: the cases of loop models on random surfaces and of the Gaussian distribution

Abstract: Observables in random tensor theory are polynomials in the entries of a tensor of rank d that are invariant under U(N)^d. It is notoriously difficult to evaluate the expectations of such polynomials, even in the Gaussian distribution. In this article, we introduce singular value decompositions to evaluate the expectations of polynomial observables of Gaussian random tensors. Performing the matrix integrals over the unitary group leads to a notion of effective observables which expand onto regular, matrix tra…
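For orientation, here is a minimal sketch of the objects the abstract refers to, written in standard random-tensor conventions; the unit-covariance normalization below is an assumption, not a statement taken from the paper. A Gaussian random tensor T of rank d has entries T_{a_1⋯a_d} with a_i ∈ {1,…,N} and covariance

\[
\big\langle T_{a_1\cdots a_d}\,\overline{T}_{b_1\cdots b_d}\big\rangle \;=\; \prod_{i=1}^{d}\delta_{a_i b_i},
\]

and a U(N)^d-invariant observable is obtained by contracting the indices of copies of T with copies of \overline{T} colour by colour; the simplest example at d = 3 is

\[
\sum_{a_1,a_2,a_3=1}^{N} T_{a_1 a_2 a_3}\,\overline{T}_{a_1 a_2 a_3}.
\]

Expectations of higher-order invariants of this kind are the quantities that the singular-value-decomposition technique of the paper is designed to evaluate.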

Cited by 12 publications (16 citation statements). References 63 publications (168 reference statements).
“…In our framework, the additional parameter D allows one to study the large D expansion of the planar diagrams. As highlighted in [20,21], this large D expansion is of a fundamentally different nature depending on whether one uses the BGR scaling, as in [18], or the new enhanced scaling we propose, for the couplings τ_a. With the BGR scaling, it is straightforward to show that the large N and the large D limits commute, whereas with the new scaling, it is essential to take N → ∞ first and D → ∞ second for the limit to make sense.…”
Section: Matrix-tensor Models and Applications
confidence: 98%
“…It was of course noticed that the independent indices of tensor models can correspond to spaces of different dimensions, so that tensor models in fact generalize the Wishart theory of random rectangular matrices. An interesting example, initiated in [18], consists in singling out two indices out of R = r + 2, writing (a_1 ⋯ a_R) = (a b μ_1 ⋯ μ_r), and rewriting the tensor T in terms of a matrix X carrying r additional tensor indices, T_{a_1⋯a_R} = (X_{μ_1⋯μ_r})^a_b.…”
Section: Matrix-tensor Models and Applications
confidence: 99%
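The index split quoted above, T_{a_1⋯a_R} = (X_{μ_1⋯μ_r})^a_b with R = r + 2, can be mimicked in a few lines of NumPy. The sizes N and D and the final observable are hypothetical, chosen only to illustrate the reshaping, and are not taken from [18]:

import numpy as np

# Hypothetical sizes: the two singled-out indices a, b run over N values,
# the remaining r tensor indices run over D values each.
N, D, r = 4, 3, 2

# Random complex tensor of rank R = r + 2, index order (a, b, mu_1, ..., mu_r).
shape = (N, N) + (D,) * r
T = np.random.randn(*shape) + 1j * np.random.randn(*shape)

# View T as a collection of N x N matrices X[mu_1, ..., mu_r]:
# move the matrix indices (a, b) to the last two slots.
X = np.moveaxis(T, (0, 1), (-2, -1))   # shape (D, ..., D, N, N)

# Each slice X[mu] is an ordinary matrix, so matrix-model tools apply to it,
# e.g. a simple invariant summed over the r remaining tensor indices.
obs = sum(np.trace(X[mu] @ X[mu].conj().T) for mu in np.ndindex(*((D,) * r)))
print(obs.real)

Each slice X[mu] can then be handled with ordinary matrix techniques (eigenvalue or singular value decompositions), which is the point of the rectangular-matrix reinterpretation.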
“…The latter is subdominant in the large-N limit, but the factor x_i^{2(N^2−N)} is not, and must be taken into account. However, unlike the Vandermonde determinant, such a term does not couple different eigenvalues, hence in the large-N limit we have a simple saddle point equation in which the eigenvalues are mutually decoupled. (Footnote 16: for tensor models with a rectangular-matrix interpretation, see also [60,61].)…”
Section: A γ-Matrices in Odd Dimensions
confidence: 99%
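The decoupling claimed in the excerpt above can be illustrated schematically; the single-eigenvalue potential W below is an assumed placeholder, not the potential of the cited model. If each eigenvalue x_i enters the integrand through a weight x_i^{2(N^2−N)} e^{−N^2 W(x_i)} while the Vandermonde determinant contributes only at subleading order in N, the leading saddle point condition

\[
\frac{\partial}{\partial x_i}\Big[\,2\,(N^{2}-N)\ln x_i \;-\; N^{2}\,W(x_i)\,\Big] \;=\; 0
\qquad\Longrightarrow\qquad
\frac{2}{x_i} \;=\; W'(x_i) \;+\; O(1/N)
\]

holds separately for each i, with no term coupling different eigenvalues.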
“…Since then, matrix models have been increasingly applied in many areas of mathematics and physics, such as number theory, string theory, quantum gravity, and holography. Recently, matrix models have turned out to be intriguingly related to tensor theories [28][29][30]. However, the connection remains unclear so far.…”
Section: Introduction
confidence: 99%