We investigate the use of variational wave functions that mimic stochastic recurrent neural networks, specifically unrestricted Boltzmann machines, as guiding functions in projective quantum Monte Carlo (PQMC) simulations of quantum spin models. As a preliminary step, we investigate the accuracy of such unrestricted neural network states as variational Ansätze for the ground state of the ferromagnetic quantum Ising chain. We find that by optimizing just three variational parameters, independently of the system size, accurate ground-state energies are obtained, comparable to those previously obtained using restricted Boltzmann machines with few variational parameters per spin. Chiefly, we show that if one uses optimized unrestricted neural network states as guiding functions for importance sampling, the efficiency of the PQMC algorithms is greatly enhanced, drastically reducing the most relevant systematic bias, namely the one due to the finite random-walker population. The scaling of the computational cost with the system size changes from the exponential scaling characteristic of PQMC simulations performed without importance sampling to a polynomial scaling, even at the ferromagnetic quantum critical point. The important role of the protocol chosen to sample hidden-spin configurations, in particular at the critical point, is analyzed. We discuss the implications of these findings for the problem of simulating adiabatic quantum optimization with stochastic algorithms on classical computers.
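To make the "unrestricted" structure concrete, the sketch below evaluates a three-parameter Boltzmann-machine amplitude for a tiny Ising chain. The specific couplings (`Jvv`, `Jvh`, `Jhh`: visible-visible, visible-hidden, and hidden-hidden) and their values are our assumptions for illustration, not the paper's exact definitions. Because the hidden spins couple to each other, the amplitude is a sum over hidden configurations; here we enumerate them exactly, which is only feasible at toy sizes (the abstract's point is precisely that one instead *samples* hidden spins stochastically).

```python
import itertools
import numpy as np

N = 4                            # number of visible (and hidden) spins, toy size
Jvv, Jvh, Jhh = 0.1, 0.2, 0.1    # the three variational parameters (illustrative values)

def psi(s):
    """Amplitude psi(s) = sum_h exp(E(s, h)) on periodic chains.

    E(s, h) includes visible-visible, visible-hidden, and hidden-hidden
    couplings; the hidden-hidden term is what makes the machine
    "unrestricted" rather than a standard RBM.
    """
    total = 0.0
    for h in itertools.product([-1, 1], repeat=N):   # exact hidden-spin sum (toy only)
        h = np.array(h)
        e = (Jvv * np.sum(s * np.roll(s, -1))
             + Jvh * np.sum(s * h)
             + Jhh * np.sum(h * np.roll(h, -1)))
        total += np.exp(e)
    return total

s = np.array([1, 1, -1, 1])
amp = psi(s)   # positive by construction, as required of a guiding function
```

Note that summing over all hidden configurations makes `psi` invariant under a global spin flip of `s`, consistent with the Z2 symmetry of the ferromagnetic Ising chain.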
Many important challenges in science and technology can be cast as optimization problems. When viewed in a statistical physics framework, these can be tackled by simulated annealing, where a gradual cooling procedure helps search for ground-state solutions of a target Hamiltonian. While powerful, simulated annealing is known to have prohibitively slow sampling dynamics when the optimization landscape is rough or glassy. Here we show that by generalizing the target distribution with a parameterized model, an analogous annealing framework based on the variational principle can be used to search for ground-state solutions. Modern autoregressive models such as recurrent neural networks provide ideal parameterizations, since they can be sampled exactly, without slow dynamics, even when the model encodes a rough landscape. We implement this procedure in the classical and quantum settings on several prototypical spin glass Hamiltonians, and find that it significantly outperforms traditional simulated annealing in the asymptotic limit, illustrating the potential power of this as-yet unexplored route to optimization.
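The key property claimed above, that autoregressive models can be sampled exactly without Markov-chain dynamics, can be sketched as follows. This is a toy illustration with a minimal vanilla RNN (all weights and sizes are our own illustrative choices, not the authors' architecture): each conditional p(s_i | s_<i) is normalized by construction, so a configuration is drawn by ancestral sampling in a single left-to-right pass, and its exact log-probability comes for free.

```python
import numpy as np

rng = np.random.default_rng(0)
N, H = 8, 4                       # spins and hidden units (toy sizes, assumed)
Wh = rng.normal(0, 0.5, (H, H))   # hidden-to-hidden weights (illustrative)
Wx = rng.normal(0, 0.5, (H,))     # input-to-hidden weights (illustrative)
wo = rng.normal(0, 0.5, (H,))     # hidden-to-output weights (illustrative)

def sample_config():
    """Ancestral sampling: one exact sample and its exact log-probability."""
    h, s_prev = np.zeros(H), 0.0
    spins, logp = [], 0.0
    for _ in range(N):
        h = np.tanh(Wh @ h + Wx * s_prev)           # RNN state update
        p1 = 1.0 / (1.0 + np.exp(-(wo @ h)))        # p(s_i = +1 | s_<i), normalized
        s = 1.0 if rng.random() < p1 else -1.0      # draw s_i directly, no mixing
        logp += np.log(p1 if s > 0 else 1.0 - p1)   # exact log-prob accumulates
        spins.append(s)
        s_prev = s
    return np.array(spins), logp

config, logp = sample_config()
```

Because every sample is independent and exactly distributed according to the model, no equilibration or autocorrelation analysis is needed, which is exactly why a rough landscape encoded in the weights does not slow the sampling down.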