We propose a scenario where inflation is driven by non-minimally coupled massive vector fields. In an isotropic homogeneous universe these fields behave in precisely the same way as a massive minimally coupled scalar field. Therefore, our model is very similar to the model of chaotic inflation with a scalar field. For vector fields, isotropy of the expansion is achieved either by considering a triplet of mutually orthogonal vector fields or at the expense of introducing N randomly oriented vector fields. In the latter case a residual anisotropy of the expansion, of order $1/\sqrt{N}$, survives until the end of inflation. The lightest vector fields might also drive the late-time acceleration of the Universe.
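The $1/\sqrt{N}$ scaling of the residual anisotropy is a statement about sums of random orientations and is easy to check numerically. Below is a minimal Monte Carlo sketch (ours, not the paper's): the traceless part of the average of $n\,n^{T}$ over $N$ random unit vectors, a stand-in for the net anisotropic stress of $N$ randomly oriented fields, shrinks as $1/\sqrt{N}$.

```python
# Toy Monte Carlo check (not from the paper): the residual anisotropy of
# N randomly oriented unit vectors scales as 1/sqrt(N).
import numpy as np

rng = np.random.default_rng(0)

def residual_anisotropy(N, trials=200):
    """Average norm of the traceless part of (1/N) * sum_k n_k n_k^T."""
    norms = []
    for _ in range(trials):
        n = rng.normal(size=(N, 3))
        n /= np.linalg.norm(n, axis=1, keepdims=True)      # random unit vectors
        T = (n[:, :, None] * n[:, None, :]).mean(axis=0)   # average of n n^T
        T -= np.eye(3) * np.trace(T) / 3.0                 # remove isotropic part
        norms.append(np.linalg.norm(T))
    return np.mean(norms)

for N in (10, 100, 1000):
    print(N, residual_anisotropy(N), 1 / np.sqrt(N))       # compare with 1/sqrt(N)
```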
We study the spectrum of loops as part of a complete network of cosmic strings in flat spacetime. After a long transient regime, characterized by the production of small loops at the scale of the initial conditions, a true scaling regime appears to take over. In this final regime the characteristic length of loops scales as $0.1t$, in contrast to earlier simulations, which found tiny loops. We expect the expanding-universe behavior to be qualitatively similar. The large loop sizes have important cosmological implications; in particular, the nucleosynthesis bound becomes $G\mu \lesssim 10^{-7}$, much tighter than before.
We study the production of loops in the cosmic string network in an expanding background by means of a numerical simulation that is exact in the flat-spacetime limit and first order in the expansion rate. We find an initial regime characterized by the production of small loops at the scale of the initial correlation length, but later we see the emergence of a scaling regime of loop production. This agrees qualitatively with earlier expectations derived from the results of flat-spacetime simulations. In the final scaling regime we find that the characteristic length of loops scales as $\sim 0.1t$ in both the radiation and matter eras.
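The scaling claim in these two abstracts can be illustrated with a toy bookkeeping exercise (our sketch, with an assumed one-scale production rate $\propto t'^{-4}$; this is not the actual simulation): if loops of size $\alpha t'$ are produced at a rate set only by the time $t'$, the loop distribution, expressed in the variable $x = l/t$ and rescaled by $t^3$, becomes time-independent.

```python
# Toy illustration of a scaling loop population (our sketch, not the papers'
# code): loops born at t' have size alpha * t'; with a production rate fixed
# only by t', the rescaled distribution in x = l/t collapses across epochs.
import numpy as np

alpha = 0.1   # characteristic loop size in units of t (from the abstracts)
t0 = 1.0

def rescaled_loop_density(t, x_bins):
    tp = np.geomspace(t0, t, 40000)      # production times t' <= t
    rate = tp**-4                        # one-scale rate per volume, [L^-3 T^-1]
    weights = rate * np.gradient(tp)     # loops produced in each dt'
    x = alpha * tp / t                   # present size in units of t (no decay)
    hist, _ = np.histogram(x, bins=x_bins, weights=weights)
    return hist * t**3                   # t^3-rescaled counts; constant if scaling holds

x_bins = np.geomspace(1e-3, alpha, 16)
for t in (1e2, 1e3, 1e4):
    f = rescaled_loop_density(t, x_bins)
    print(f"t={t:.0e}: rescaled counts in middle bin = {f[7]:.4g}")
```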
Models of inflationary cosmology can lead to variation of observable parameters ("constants of Nature") on extremely large scales. The question of making probabilistic predictions for today's observables in such models has been investigated in the literature. Because of the infinite thermalized volume resulting from eternal inflation, it has proven difficult to obtain a meaningful and unambiguous probability distribution for observables, in particular due to gauge dependence. In the present paper we further develop the gauge-invariant procedure proposed in a previous work for models with a continuous variation of "constants". The recipe uses an unbiased selection of a connected piece of the thermalized volume as a sample for the probability distribution. To implement the procedure numerically, we develop two methods applicable to a reasonably wide class of models: one based on the Fokker-Planck equation of stochastic inflation, and the other based on direct simulation of the inflationary spacetime. We present and compare results obtained using these methods.
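For orientation, the Fokker-Planck equation of stochastic inflation is the ensemble description of a Langevin process: the coarse-grained field drifts down its potential and receives quantum kicks of amplitude $H/2\pi$ per e-fold. A minimal sketch of that Langevin process (ours, with an illustrative quadratic potential and mass; the paper's models and numerics are more involved):

```python
# Minimal Langevin sketch of stochastic inflation (our toy, not the paper's
# numerics). Units: M_p = 1, time measured in e-folds N.
import numpy as np

rng = np.random.default_rng(1)

m = 1e-6                                  # inflaton mass (illustrative value)
V  = lambda phi: 0.5 * m**2 * phi**2
dV = lambda phi: m**2 * phi

def evolve(phi0, n_traj=2000, dN=0.01, phi_end=1.0):
    """Evolve n_traj trajectories in e-fold time until phi < phi_end."""
    phi = np.full(n_traj, phi0)
    efolds = np.zeros(n_traj)
    active = np.ones(n_traj, dtype=bool)
    while active.any():
        H2 = V(phi[active]) / 3.0             # H^2 = V / 3 in slow roll
        drift = -dV(phi[active]) / (3.0 * H2) # classical drift per e-fold
        noise = np.sqrt(H2) / (2 * np.pi)     # quantum kick amplitude H / 2 pi
        phi[active] += drift * dN + noise * np.sqrt(dN) * rng.normal(size=active.sum())
        efolds[active] += dN
        active &= phi > phi_end
    return efolds

N = evolve(phi0=16.0)
print(f"mean e-folds: {N.mean():.1f}, spread: {N.std():.3f}")
```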
We investigate the evolution of infinite strings as part of a complete cosmic string network in flat space. We perform a simulation of the network that uses functional forms for the string position and is therefore exact to the limits of computer arithmetic. Our results confirm that the wiggles on the strings obey a scaling law described by a universal power spectrum. The average distance between long strings also scales accurately with time. These results suggest that small-scale structure will also scale in an expanding universe, even in the absence of gravitational damping.
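The "functional form" representation is presumably the standard exact flat-spacetime solution of the Nambu-Goto equations, $\mathbf{x}(\sigma,t) = \tfrac{1}{2}[\mathbf{a}(\sigma-t) + \mathbf{b}(\sigma+t)]$ with $|\mathbf{a}'| = |\mathbf{b}'| = 1$: evolution is pure argument shifting, so no integration error accumulates. A minimal illustration (ours, with a simple Kibble-Turok-style loop; the paper's code handles intersections and a full network):

```python
# Sketch of exact flat-space string evolution via left/right movers (our
# illustration, not the paper's code). Evolution requires no time stepping.
import numpy as np

L = 2 * np.pi                         # invariant length of a closed loop
sigma = np.linspace(0, L, 400, endpoint=False)

def a(s):  # left mover: a'(s) = (cos s, sin s, 0) has unit norm
    return np.stack([np.sin(s), -np.cos(s), np.zeros_like(s)], axis=-1)

def b(s):  # right mover on a tilted circle, also unit-speed
    return np.stack([np.sin(s), 0.4 * np.cos(s), np.sqrt(0.84) * np.cos(s)], axis=-1)

def x(sig, t):
    """Exact string position at any time t."""
    return 0.5 * (a(sig - t) + b(sig + t))

for t in (0.0, 0.25 * L, 0.5 * L):
    pos = x(sigma, t)
    rms = np.sqrt((pos**2).sum(axis=1).mean())   # loop oscillates with period L/2
    print(f"t={t:5.2f}  rms radius = {rms:.3f}")
```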
We discuss the possibility that the entire universe at its most fundamental level is a neural network. We identify two different types of dynamical degrees of freedom: "trainable" variables (e.g., the bias vector or weight matrix) and "hidden" variables (e.g., the state vector of neurons). We first consider the stochastic evolution of the trainable variables to argue that near equilibrium their dynamics is well approximated by the Madelung equations (with the free energy representing the phase) and further away from equilibrium by the Hamilton–Jacobi equations (with the free energy representing Hamilton's principal function). This shows that the trainable variables can indeed exhibit classical and quantum behaviors, with the state vector of neurons representing the hidden variables. We then study the stochastic evolution of the hidden variables by considering $D$ non-interacting subsystems with average state vectors $\bar{x}_1, \dots, \bar{x}_D$ and an overall average state vector $\bar{x}_0$. In the limit when the weight matrix is a permutation matrix, the dynamics of $\bar{x}_\mu$ can be described in terms of relativistic strings in an emergent $D+1$ dimensional Minkowski space-time. If the subsystems are minimally interacting, with interactions described by a metric tensor, then the emergent space-time becomes curved. We argue that the entropy production in such a system is a local function of the metric tensor, which should be determined by the symmetries of the Onsager tensor. It turns out that a very simple and highly symmetric Onsager tensor leads to entropy production described by the Einstein–Hilbert term. This shows that the learning dynamics of a neural network can indeed exhibit approximate behaviors that are described by both quantum mechanics and general relativity. We also discuss the possibility that the two descriptions are holographic duals of each other.
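For reference, the Madelung equations mentioned above take the standard form obtained by writing $\psi = \sqrt{\rho}\, e^{iS/\hbar}$; in the paper's identification the phase $S$ plays the role of the free energy:
\[
\partial_t \rho + \nabla \cdot \Big( \rho\, \frac{\nabla S}{m} \Big) = 0,
\qquad
\partial_t S + \frac{|\nabla S|^2}{2m} + V
 - \frac{\hbar^2}{2m}\, \frac{\nabla^2 \sqrt{\rho}}{\sqrt{\rho}} = 0 .
\]
Dropping the last (quantum-potential) term leaves the classical Hamilton–Jacobi equation, matching the far-from-equilibrium regime described above.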
We analyze the behavior of linear perturbations in vector inflation. In contrast to scalar field inflation, the linearized theory with vector fields contains couplings between the scalar, vector and tensor modes. The perturbations decouple only in the ultraviolet limit, which allows us to carry out the canonical quantization. Superhorizon perturbations can be analyzed approximately owing to the suppressed mixing between different modes in small-field models. We find that the vector perturbations of the metric decay exponentially, but the scalar and tensor modes can remain weakly coupled throughout the evolution. As a result, vector inflation can produce significant correlations between the scalar and tensor modes in the CMB. For realistic models the effect is rather small, but not negligible.
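The mode coupling referred to here is relative to the standard scalar-vector-tensor split of the perturbed FRW metric (written below in one common convention; the paper's notation may differ):
\[
ds^2 = a^2(\eta)\Big\{ -(1+2\phi)\, d\eta^2 + 2\,(\partial_i B - S_i)\, d\eta\, dx^i
 + \big[(1-2\psi)\delta_{ij} + 2\,\partial_i \partial_j E
 + \partial_i F_j + \partial_j F_i + h_{ij}\big]\, dx^i dx^j \Big\},
\]
with $\partial_i S^i = \partial_i F^i = 0$ and $h^i{}_i = \partial^i h_{ij} = 0$. In scalar field inflation the three sectors evolve independently at linear order, whereas with vector fields they mix.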
We define a neural network as a septuple consisting of (1) a state vector, (2) an input projection, (3) an output projection, (4) a weight matrix, (5) a bias vector, (6) an activation map and (7) a loss function. We argue that the loss function can be imposed either on the boundary (i.e., input and/or output neurons) or in the bulk (i.e., hidden neurons) for both supervised and unsupervised systems. We apply the principle of maximum entropy to derive a canonical ensemble of the state vectors subject to a constraint imposed on the bulk loss function by a Lagrange multiplier (or an inverse temperature parameter). We show that in equilibrium the canonical partition function must be a product of two factors: a function of the temperature, and a function of the bias vector and weight matrix. Consequently, the total Shannon entropy consists of two terms which represent, respectively, the thermodynamic entropy and the complexity of the neural network. We derive the first and second laws of learning: during learning the total entropy must decrease until the system reaches equilibrium (the second law), and the increment in the loss function must be proportional to the increment in the thermodynamic entropy plus the increment in the complexity (the first law). We calculate the entropy destruction to show that the efficiency of learning is given by the Laplacian of the total free energy, which is to be maximized in an optimal neural architecture, and we explain why the optimization condition is better satisfied in a deep network with a large number of hidden layers. The key properties of the model are verified numerically by training a supervised feedforward neural network with the stochastic gradient descent method. We also discuss the possibility that the entire Universe at its most fundamental level is a neural network.
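As a concrete toy version of the numerical experiment described at the end (our sketch, not the paper's architecture, data, or entropy estimator): train a small feedforward network with SGD and track a Gaussian estimate of the Shannon entropy of the hidden-state vectors, the quantity that the second law of learning says should decrease as the system approaches equilibrium.

```python
# Toy illustration of the "second law of learning" (our sketch): SGD training
# of a one-hidden-layer network while monitoring loss and a Gaussian estimate
# of the entropy of the hidden states.
import numpy as np

rng = np.random.default_rng(2)

# Synthetic supervised task: y = sign of a fixed random projection of x.
X = rng.normal(size=(2000, 8))
w_true = rng.normal(size=8)
y = (X @ w_true > 0).astype(float).reshape(-1, 1)

# One hidden layer, tanh activation, sigmoid output, squared loss.
W1, b1 = rng.normal(size=(8, 16)) * 0.3, np.zeros(16)
W2, b2 = rng.normal(size=(16, 1)) * 0.3, np.zeros(1)

def gaussian_entropy(H):
    """Differential entropy of a Gaussian fit to hidden states (up to a constant)."""
    C = np.cov(H.T) + 1e-6 * np.eye(H.shape[1])
    return 0.5 * np.linalg.slogdet(C)[1]

lr = 0.5
for step in range(2001):
    i = rng.integers(0, len(X), size=64)           # SGD minibatch
    h = np.tanh(X[i] @ W1 + b1)                    # hidden state vector
    p = 1 / (1 + np.exp(-(h @ W2 + b2)))           # output neuron
    dp = (p - y[i]) * p * (1 - p)                  # backprop through sigmoid
    dh = (dp @ W2.T) * (1 - h**2)                  # backprop through tanh
    W2 -= lr * h.T @ dp / 64;    b2 -= lr * dp.mean(0)
    W1 -= lr * X[i].T @ dh / 64; b1 -= lr * dh.mean(0)
    if step % 500 == 0:
        H = np.tanh(X @ W1 + b1)
        loss = np.mean((1 / (1 + np.exp(-(H @ W2 + b2))) - y)**2)
        print(f"step {step:4d}  loss {loss:.4f}  hidden entropy {gaussian_entropy(H):+.2f}")
```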