Quantum chaos is one of the distinctive features of the Sachdev-Ye-Kitaev (SYK) model, N Majorana fermions in 0+1 dimensions with infinite-range two-body interactions, which is attracting a lot of interest as a toy model for holography. Here we show analytically and numerically that a generalized SYK model with an additional one-body infinite-range random interaction, which is a relevant perturbation in the infrared, is still quantum chaotic and retains most of its holographic features for a fixed value of the perturbation and sufficiently high temperature. However, a chaotic-integrable transition, characterized by the vanishing of the Lyapunov exponent and by spectral correlations given by Poisson statistics, occurs at a temperature that depends on the strength of the perturbation. We speculate about the gravity dual of this transition.

DOI: 10.1103/PhysRevLett.120.241603

Motivated by its potential applications in high-energy and condensed matter physics, and also because of its simplicity, research on fermionic models with infinite-range random interactions [1-9], now generally called Sachdev-Ye-Kitaev (SYK) models [10-13], has flourished in recent times [11, …]. Interesting research lines currently being investigated include not only applications in holography [10-13] but also in random matrix theory [25-27,30,32,34], possible experimental realizations [19,35,36], and extensions involving nonrandom couplings [24,28], higher spatial dimensions [18,21,31,37,38], and several flavors [39]. A natural question to ask [18,21,24,31,37-39] is to what extent holographic properties are present in generalized SYK models. For instance, similar features are observed for nonrandom couplings [24] and in higher-dimensional realizations of the SYK model [37,38].
However, in some cases the addition of more fermionic species can induce a transition to a Fermi-liquid phase [31] or a metal-insulator transition [18,21], which, at least superficially, spoils a holographic interpretation. Here we study the stability of chaos and of the holographic features of a generalized SYK model consisting of N Majorana fermions in 0+1 dimensions with an infinite-range two-body random interaction perturbed by a one-body random term.
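A minimal numerical sketch of the spectral diagnostic behind such statements: build a small SYK Hamiltonian with the one-body perturbation and compute the mean adjacent-gap ratio, which distinguishes random-matrix (chaotic) from Poisson (integrable) level statistics. The Jordan-Wigner construction, the coupling normalisations, and the parameter values below are illustrative assumptions, not the paper's exact conventions.

```python
import itertools
import numpy as np

def majoranas(n):
    """n Majorana operators via a Jordan-Wigner construction; {g_i, g_j} = 2*delta_ij."""
    m = n // 2
    I = np.eye(2, dtype=complex)
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
    Z = np.array([[1, 0], [0, -1]], dtype=complex)
    gammas = []
    for k in range(m):
        for P in (X, Y):
            op = np.ones((1, 1), dtype=complex)
            for j in range(m):
                op = np.kron(op, Z if j < k else (P if j == k else I))
            gammas.append(op)
    return gammas

def syk_hamiltonian(gammas, J=1.0, kappa=0.0, rng=None):
    """Two-body SYK term plus the one-body random perturbation:
    H = sum_{i<j<k<l} J_ijkl g_i g_j g_k g_l + i * sum_{i<j} K_ij g_i g_j,
    with J_ijkl ~ N(0, 6 J^2 / n^3) and K_ij ~ N(0, kappa^2 / n)
    (illustrative normalisations)."""
    rng = np.random.default_rng() if rng is None else rng
    n, dim = len(gammas), gammas[0].shape[0]
    H = np.zeros((dim, dim), dtype=complex)
    for i, j, k, l in itertools.combinations(range(n), 4):
        H += rng.normal(0, np.sqrt(6) * J / n**1.5) * (
            gammas[i] @ gammas[j] @ gammas[k] @ gammas[l])
    for i, j in itertools.combinations(range(n), 2):
        H += 1j * rng.normal(0, kappa / np.sqrt(n)) * (gammas[i] @ gammas[j])
    return H

def parity_sector(H, even=True):
    """Restrict H to one fermion-parity block (parity is Z x ... x Z in this basis)."""
    par = np.array([(-1) ** bin(b).count("1") for b in range(H.shape[0])])
    idx = np.where(par == (1 if even else -1))[0]
    return H[np.ix_(idx, idx)]

def mean_r(H):
    """Mean adjacent-gap ratio <r>; random-matrix and Poisson statistics give
    distinct reference values (e.g. about 0.536 for GOE vs 0.386 for Poisson)."""
    s = np.diff(np.linalg.eigvalsh(H))
    return np.mean(np.minimum(s[:-1], s[1:]) / np.maximum(s[:-1], s[1:]))

gammas = majoranas(12)   # tiny system, for illustration only
H = syk_hamiltonian(gammas, J=1.0, kappa=0.2, rng=np.random.default_rng(42))
r_mean = mean_r(parity_sector(H))
```

A single disorder realization at this tiny size carries no statistical weight; probing the chaotic-integrable transition requires larger N, averaging over many realizations, and scanning kappa, but the pipeline is the same. Resolving the fermion-parity symmetry before collecting spacings matters: mixing the two blocks produces spuriously Poisson-like statistics.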
Teacher-student models provide a powerful framework in which the typical-case performance of high-dimensional supervised learning tasks can be studied in closed form. In this setting, labels are assigned to data (often taken to be Gaussian i.i.d.) by a teacher model, and the goal is to characterise the typical performance of a student model in recovering the parameters that generated the labels. In this manuscript we discuss a generalisation of this setting where the teacher and the student can act on different spaces, generated with fixed but generic feature maps. This is achieved via the rigorous study of a high-dimensional Gaussian covariate model. Our contribution is twofold. First, we prove a rigorous formula for the asymptotic training loss and generalisation error achieved by empirical risk minimisation in this model. Second, we present a number of situations where the learning curve of the model captures that of a realistic data set learned with kernel regression or classification, with out-of-the-box feature maps such as random projections or scattering transforms, or with pre-learned ones, such as the features learned by training multi-layer neural networks. We discuss both the power and the limitations of the Gaussian teacher-student framework as a typical-case analysis capturing learning curves as encountered in practice on real data sets.
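The setting above can be made concrete with a small simulation: a teacher assigns labels to Gaussian i.i.d. inputs, while the student acts on a different space produced by a fixed feature map and is trained by (regularised) empirical risk minimisation. The specific map (random tanh features), dimensions, and regularisation below are illustrative choices, not the ones analysed in the manuscript.

```python
import numpy as np

rng = np.random.default_rng(0)
d, p, n_train, n_test = 200, 300, 400, 2000

# Teacher: labels are a linear function of Gaussian i.i.d. inputs.
w_star = rng.normal(size=d)
def labels(X):
    return X @ w_star / np.sqrt(d)

# Student acts on a different space: a fixed (here random) feature map.
W = rng.normal(size=(d, p))
def features(X):
    return np.tanh(X @ W / np.sqrt(d))

X_tr = rng.normal(size=(n_train, d)); y_tr = labels(X_tr)
X_te = rng.normal(size=(n_test, d));  y_te = labels(X_te)

# Ridge regression on the student's features (empirical risk minimisation
# with square loss and L2 regularisation).
U_tr, U_te = features(X_tr), features(X_te)
lam = 1e-2
w_hat = np.linalg.solve(U_tr.T @ U_tr + lam * np.eye(p), U_tr.T @ y_tr)

mse = np.mean((U_te @ w_hat - y_te) ** 2)  # generalisation error estimate
```

The closed-form asymptotics in the manuscript predict exactly this kind of test error, but in the limit where d, p, and n_train grow proportionally, without running the simulation.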
We study generalised linear regression and classification for a synthetically generated dataset encompassing different problems of interest, such as learning with random features, neural networks in the lazy training regime, and the hidden manifold model. We consider the high-dimensional regime and, using the replica method from statistical physics, provide a closed-form expression for the asymptotic generalisation performance in these problems, valid in both the under- and over-parametrised regimes and for a broad choice of generalised linear model loss functions. In particular, we show how to obtain analytically the so-called double descent behaviour for logistic regression, with a peak at the interpolation threshold; we illustrate the superiority of orthogonal over random Gaussian projections in learning with random features; and we discuss the role played by correlations in the data generated by the hidden manifold model. Beyond the interest in these particular problems, the theoretical formalism introduced in this manuscript provides a path to further extensions to more complex tasks.