We discuss the criteria that must be satisfied by a well-posed variational principle. We clarify the role of Gibbons-Hawking-York type boundary terms in the actions of higher derivative models of gravity, such as F(R) gravity, and argue that the correct boundary terms are the naive ones obtained through the correspondence with scalar-tensor theory, despite the fact that variations of normal derivatives of the metric must be fixed on the boundary. We show in the case of F(R) gravity that these boundary terms reproduce the correct ADM energy in the Hamiltonian formalism, and the correct entropy for black holes in the semi-classical approximation.
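For concreteness, the boundary term arising from the scalar-tensor correspondence takes a simple form; the sketch below uses standard conventions (signature, normalization of κ²) that are assumptions of this note, not a quotation from the paper:

```latex
% F(R) action with its Gibbons-Hawking-York-type boundary term, as suggested by
% the scalar-tensor correspondence in which \phi = F'(R) plays the Brans-Dicke role.
% Conventions (signature, normalization) are assumptions of this sketch.
S = \frac{1}{2\kappa^{2}} \int_{\mathcal{M}} d^{4}x \, \sqrt{-g}\, F(R)
  \;+\; \frac{1}{\kappa^{2}} \oint_{\partial\mathcal{M}} d^{3}x \, \sqrt{|h|}\, F'(R)\, K ,
```

where h is the induced boundary metric and K the trace of its extrinsic curvature; setting F(R) = R recovers the usual Gibbons-Hawking-York term.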
We consider the late time behavior of the analytically continued partition function Z(β + it)Z(β − it) in holographic 2d CFTs. This is a probe of information loss in such theories and in their holographic duals. We show that each Virasoro character decays in time, and so information is not restored at the level of individual characters. We identify a universal decaying contribution at late times, and conjecture that it describes the behavior of generic chaotic 2d CFTs out to times that are exponentially large in the central charge. It was recently suggested that at sufficiently late times one expects a crossover to random matrix behavior. We estimate an upper bound on the crossover time, which suggests that the decay is followed by a parametrically long period of late time growth. Finally, we discuss gravitationally motivated integrable theories and show how information is restored at late times by a series of characters. This hints at a possible bulk mechanism, where information is restored by an infinite sum over non-perturbative saddles.
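To see why this quantity probes information loss, expand it over energy eigenstates (standard spectral-form-factor algebra, not specific to this paper):

```latex
% Product of analytically continued partition functions over a discrete spectrum:
% the off-diagonal phases dephase at late times, so a persistent decay to zero
% would signal a continuous spectrum, as in a bulk black hole background.
Z(\beta + it)\, Z(\beta - it)
  \;=\; \sum_{m,n} e^{-\beta (E_m + E_n)}\, e^{-i (E_m - E_n) t} .
```

For a discrete spectrum this sum cannot decay to zero forever; its long-time average is set by the diagonal (and degenerate) terms, which is the sense in which the late-time behavior diagnoses information loss.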
We derive an explicit bound on the dimension of the lightest charged state in two-dimensional conformal field theories with a global abelian symmetry. We find that the bound scales with the central charge c and provide examples that parametrically saturate it. We also prove that any such theory must contain a state whose charge-to-mass ratio exceeds a minimal lower bound. We comment on the implications for charged states in three-dimensional theories of gravity.
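Schematically, the two statements in the abstract take the following form; the constants α and β are deliberately left unspecified here, since the abstract does not quote their values:

```latex
% Schematic form of the bounds (alpha, beta > 0 are constants whose precise
% values are established in the paper and not reproduced here):
\Delta_{\text{lightest charged}} \;\le\; \alpha\, c + O(1),
\qquad
\max_{\text{charged states}} \frac{q}{\Delta} \;\ge\; \beta \;>\; 0 .
```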
The test loss of well-trained neural networks often follows precise power-law scaling relations with either the size of the training dataset or the number of parameters in the network. We propose a theory that explains and connects these scaling laws. We identify variance-limited and resolution-limited scaling behavior for both dataset and model size, for a total of four scaling regimes. The variance-limited scaling follows simply from the existence of a well-behaved infinite data or infinite width limit, while the resolution-limited regime can be explained by positing that models are effectively resolving a smooth data manifold. In the large width limit, this can be equivalently obtained from the spectrum of certain kernels, and we present evidence that large width and large dataset resolution-limited scaling exponents are related by a duality. We exhibit all four scaling regimes in the controlled setting of large random feature and pretrained models and test the predictions empirically on a range of standard architectures and datasets. We also observe several empirical relationships between datasets and scaling exponents: super-classing image tasks does not change exponents, while changing input distribution (via changing datasets or adding noise) has a strong effect. We further explore the effect of architecture aspect ratio on scaling exponents.
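As an illustration of what such scaling fits look like in practice, here is a minimal sketch (synthetic data and function names are mine, not the paper's code) that extracts a resolution-limited exponent from test-loss measurements at several dataset sizes:

```python
# Illustrative sketch (not from the paper): fit the power-law ansatz
# L(D) = a * D**(-alpha) + L_inf to test-loss measurements at several
# dataset sizes D, the kind of fit used to extract scaling exponents.
import numpy as np
from scipy.optimize import curve_fit

def scaling_law(D, a, alpha, L_inf):
    """Resolution-limited ansatz: loss decays as a power of dataset size."""
    return a * D ** (-alpha) + L_inf

# Synthetic measurements standing in for (dataset size, test loss) pairs.
D = np.array([1e3, 3e3, 1e4, 3e4, 1e5, 3e5])
loss = 2.0 * D ** (-0.35) + 0.1 + np.random.default_rng(0).normal(0, 1e-3, D.size)

(a, alpha, L_inf), _ = curve_fit(scaling_law, D, loss, p0=(1.0, 0.5, 0.0))
print(f"fitted exponent alpha = {alpha:.3f}, irreducible loss = {L_inf:.3f}")
```

The same ansatz with D replaced by the parameter count gives the model-size exponent; the variance-limited regimes instead predict a fixed exponent of 1 in 1/D or 1/width.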