Practical challenges in simulating quantum systems on classical computers have been widely recognized in the quantum physics and quantum chemistry communities over the past century. Although many approximation methods have been introduced, the complexity of quantum mechanics remains hard to tame. The advent of quantum computation brings new pathways to navigate this challenging and complex landscape. By manipulating quantum states of matter and taking advantage of their unique features, such as superposition and entanglement, quantum computers promise to efficiently deliver accurate results for many important problems in quantum chemistry, such as the electronic structure of molecules. In the past two decades, significant advances have been made in developing algorithms and physical hardware for quantum computing, heralding a revolution in the simulation of quantum systems. This Review provides an overview of the algorithms and results that are relevant for quantum chemistry. The intended audience is both quantum chemists who seek to learn more about quantum computing and quantum computing researchers who would like to explore applications in quantum chemistry.
The promise of quantum neural networks, which use quantum effects to model complex data sets, has made their development an aspirational goal for quantum machine learning and quantum computing in general. Here we provide new methods of training quantum Boltzmann machines, which are a class of recurrent quantum neural network. Our work generalizes existing methods and provides new approaches for training quantum neural networks that compare favorably to existing methods. We further demonstrate that quantum Boltzmann machines enable a form of quantum state tomography that not only estimates a state but also provides a prescription for generating copies of the reconstructed state; classical Boltzmann machines are incapable of this. Finally, we compare small non-stoquastic quantum Boltzmann machines to traditional Boltzmann machines for generative tasks and observe evidence that quantum models outperform their classical counterparts.

Introduction. The Boltzmann machine is a widely used type of recurrent neural network that, unlike the feed-forward neural networks used in many applications, is capable of generating new examples of the training data [1]. This makes it an excellent model to use in cases where data are missing. We focus on Boltzmann machines because, of all neural-network models, the Boltzmann machine is perhaps the most natural one for physicists: it models the input data as if it came from an Ising model in thermal equilibrium. The goal of training is then to find the Ising model that is most likely to reproduce the input data, which is known as the training set.

The close analogy between this model and physics has made it a natural fit for quantum computing and quantum annealing. A number of proposals have been put forward for accelerating Boltzmann machines on current-generation quantum annealers [2-4] and quantum computers [5], the latter showing polynomial speedups relative to classical training [6].
While these methods showed that quantum technologies can train Boltzmann machines more accurately and at lower cost than classical methods, the question remained of whether transitioning from an Ising model to a quantum model for the data would provide substantial improvements. This question is addressed in [7], wherein a new method for training Boltzmann machines is provided that uses transverse Ising models in thermal equilibrium to model the data. While such models are trainable and can outperform classical Boltzmann machines, the training procedure proposed therein suffers from two drawbacks. First, it is unable to learn quantum terms from classical data. Second, the transverse Ising models considered are widely believed to be simulable using quantum Monte Carlo methods. This means that such models are arguably not quantum, and as such the benchmarks they give do not necessarily apply to manifestly quantum models. Here we rectify these issues by giving new training methods that do not suffer from these drawbacks, and we illustrate their performance for models that are manifestly quantum.

The first, and arguably most important, task when approach...
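As a concrete illustration of the classical baseline discussed above (not taken from the paper), a Boltzmann machine with no hidden units is just an Ising model trained by moment matching: the log-likelihood gradient for each coupling is the difference between the data and model correlations. A minimal sketch in Python, with a toy data set and all parameter values chosen purely for illustration:

```python
import itertools, math

# Toy classical Boltzmann machine (no hidden units): p(v) ∝ exp(-E(v)),
# with E(v) = -sum_{i<j} w_ij v_i v_j - sum_i b_i v_i and spins v_i ∈ {-1, +1}.
# Training maximizes the log-likelihood; the gradient is the familiar
# difference of correlations: <v_i v_j>_data - <v_i v_j>_model.
N = 3
SPINS = list(itertools.product([-1, 1], repeat=N))

def energy(v, w, b):
    e = -sum(b[i] * v[i] for i in range(N))
    for i in range(N):
        for j in range(i + 1, N):
            e -= w[(i, j)] * v[i] * v[j]
    return e

def model_probs(w, b):
    weights = [math.exp(-energy(v, w, b)) for v in SPINS]
    Z = sum(weights)
    return {v: x / Z for v, x in zip(SPINS, weights)}

def train(data, steps=2000, lr=0.05):
    w = {(i, j): 0.0 for i in range(N) for j in range(i + 1, N)}
    b = [0.0] * N
    for _ in range(steps):
        p = model_probs(w, b)      # exact expectations (tiny state space)
        for (i, j) in w:
            data_corr = sum(v[i] * v[j] for v in data) / len(data)
            model_corr = sum(p[v] * v[i] * v[j] for v in SPINS)
            w[(i, j)] += lr * (data_corr - model_corr)
        for i in range(N):
            data_mag = sum(v[i] for v in data) / len(data)
            model_mag = sum(p[v] * v[i] for v in SPINS)
            b[i] += lr * (data_mag - model_mag)
    return w, b

# Illustrative "ferromagnetic" data: aligned configurations are frequent.
data = [(1, 1, 1)] * 4 + [(-1, -1, -1)] * 4 + [(1, 1, -1)]
w, b = train(data)
p = model_probs(w, b)
# After training, most of the model's probability mass sits on the aligned states.
print(p[(1, 1, 1)] + p[(-1, -1, -1)] > 0.6)
```

The quantum generalizations discussed in this work replace the Ising energy by a Hamiltonian with non-commuting terms, which is precisely what makes the gradient of the log-likelihood harder to evaluate.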
We study the glued-trees problem of Childs et al. [1] in the adiabatic model of quantum computing and provide an annealing schedule to solve an oracular problem exponentially faster than is classically possible. The Hamiltonians involved in the quantum annealing do not suffer from the so-called sign problem. Unlike the typical scenario, our schedule is efficient even though the minimum energy gap of the Hamiltonians is exponentially small in the problem size. We discuss generalizations based on initial-state randomization to avoid some slowdowns in adiabatic quantum computing due to small gaps.

Quantum annealing is a powerful heuristic to solve problems in optimization [2,3]. In quantum computing, the method consists of preparing a low-energy or ground state |ψ⟩ of a quantum system such that, after a simple measurement, the optimal solution is obtained with large probability. |ψ⟩ is prepared by following a particular annealing schedule, with a parametrized Hamiltonian path subject to initial and final conditions. A ground state of the initial Hamiltonian is then transformed into |ψ⟩ by varying the parameter adiabatically. In contrast to more general quantum adiabatic state transformations, the Hamiltonians along the path in quantum annealing are termed stoquastic and do not suffer from the so-called numerical sign problem [4]: for a specified basis, the off-diagonal Hamiltonian-matrix entries are nonpositive [5]. This property is useful for classical simulations [3].

A sufficient condition for convergence of the quantum method is given by the quantum adiabatic approximation. It asserts that, if the rate of change of the Hamiltonian scales with the energy gap ∆ between its two lowest-energy states, |ψ⟩ can be prepared with controlled accuracy [6,7]. Such an approximation may also be necessary [8].
However, it could result in undesired overheads if ∆ is small but transitions between the lowest-energy states are forbidden due to selection rules, or if transitions between lowest-energy states can be exploited to prepare |ψ⟩. The latter case corresponds to the annealing schedule in this Letter. It turns out that the relevant energy gap for the adiabatic approximation in these cases is not ∆ and can be much bigger.

Because of the properties of the Hamiltonians, the annealing can also be simulated using probabilistic classical methods such as quantum Monte Carlo (QMC) [9]. The goal in QMC is to sample according to the distribution of the ground state, i.e., with probabilities coming from amplitudes squared. While we lack necessary conditions that guarantee convergence, the power of QMC is widely recognized [3,9,10]. In fact, if the Hamiltonians satisfy an additional frustration-free property, efficient QMC simulations for quantum annealing exist [11,12]. This casts doubt on whether a quantum-computer simulation of general quantum annealing processes can ever be done using substantially fewer resources than QMC or any other classical simulation.

Towards answering this question, we...
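The interplay between the minimum gap and the annealing schedule can be illustrated with the textbook single-qubit avoided crossing (this is the generic picture behind the adiabatic approximation, not the specific mechanism exploited in this Letter); the coupling g, the grid, and the simple cost model below are illustrative assumptions:

```python
import math

# Toy avoided crossing: H(s) = (1 - 2s) Z + g X on one qubit, s ∈ [0, 1].
# Eigenvalues are ±sqrt((1-2s)^2 + g^2), so the gap is minimized at s = 1/2,
# where it equals 2g. The adiabatic condition suggests slowing the schedule
# only where the gap is small; a uniform schedule pays for the worst gap
# everywhere.
g = 1e-3

def gap(s):
    return 2.0 * math.sqrt((1.0 - 2.0 * s) ** 2 + g ** 2)

smin = min((s / 1000.0 for s in range(1001)), key=gap)
print(abs(smin - 0.5) < 1e-9)

# Heuristic cost model: a gap-adapted ("local") schedule spends time
# ∝ 1/gap(s)^2 at each s, so its total cost is the integral of 1/gap^2,
# versus 1/min_gap^2 for a uniform schedule that is adiabatic everywhere.
ds = 1.0 / 1000.0
local_cost = sum((1.0 / gap(s / 1000.0) ** 2) * ds for s in range(1001))
uniform_cost = 1.0 / gap(0.5) ** 2
print(local_cost < uniform_cost)  # adapting to the gap is much cheaper
```

The point of the glued-trees result is more dramatic still: there the schedule succeeds even though the relevant runtime is not governed by the exponentially small minimum gap at all.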
We argue that an excess of entanglement between the visible and hidden units in a quantum neural network can hinder learning. In particular, we show that quantum neural networks that obey a volume law in the entanglement entropy will, with high probability, give rise to models not suitable for learning. Using arguments from quantum thermodynamics, we then show that this volume law is typical and that there exists a barren plateau in the optimization landscape due to entanglement. More precisely, we show that for any bounded objective function on the visible layers, the Lipschitz constants of the expectation value of that objective function will scale inversely with the dimension of the hidden subsystem with high probability. We show how this can cause both gradient-descent and gradient-free methods to fail. We note that similar problems can occur with quantum Boltzmann machines, although stronger assumptions on the coupling between the hidden and visible subspaces are necessary. We highlight how pretraining such generative models may provide a way to navigate these barren plateaus.
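The typicality of the volume law can be checked numerically on small instances. The sketch below (not from the paper) draws a Gaussian-random state on a visible/hidden bipartition and computes its Rényi-2 entanglement entropy, which for such approximately Haar-random states concentrates near the maximum set by the visible dimension; the register sizes are illustrative:

```python
import math, random

# Rényi-2 entanglement entropy S2 = -log2 Tr(rho_A^2) of a random pure state
# split into a "visible" register A (n_a qubits) and a "hidden" register B.
# For Gaussian-random states, Tr(rho_A^2) concentrates near
# (d_a + d_b) / (d_a * d_b + 1), so S2 is close to its maximal value n_a bits:
# a volume law for the smaller subsystem.
random.seed(7)

def random_state(dim):
    amps = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(dim)]
    norm = math.sqrt(sum(abs(a) ** 2 for a in amps))
    return [a / norm for a in amps]

def renyi2_entropy(psi, d_a, d_b):
    # Reduced density matrix rho_A[i][k] = sum_j psi[i,j] * conj(psi[k,j]).
    rho = [[sum(psi[i * d_b + j] * psi[k * d_b + j].conjugate()
                for j in range(d_b)) for k in range(d_a)] for i in range(d_a)]
    purity = sum(abs(rho[i][k]) ** 2 for i in range(d_a) for k in range(d_a))
    return -math.log2(purity)

n_a, n_b = 2, 6                    # 2 visible qubits, 6 hidden qubits
d_a, d_b = 2 ** n_a, 2 ** n_b
s2 = renyi2_entropy(random_state(d_a * d_b), d_a, d_b)
print(n_a - s2 < 0.2)              # within 0.2 bits of maximal entanglement
```

Near-maximal entanglement with the hidden register means the reduced state on the visible layer is close to maximally mixed, which is exactly why expectation values of visible observables become insensitive to the parameters.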
Modeling low-energy eigenstates of fermionic systems can provide insight into chemical reactions and material properties and is one of the most anticipated applications of quantum computing. We present three techniques for reducing the cost of preparing fermionic Hamiltonian eigenstates using phase estimation. First, we report a polylogarithmic-depth quantum algorithm for antisymmetrizing the initial states required for simulation of fermions in first quantization. This is an exponential improvement over the previous state of the art. Next, we show how to reduce the overhead due to repeated state preparation in phase estimation when the goal is to prepare the ground state to high precision and one has knowledge of an upper bound on the ground-state energy that is less than the excited-state energy (often the case in quantum chemistry). Finally, we explain how one can perform the time evolution necessary for phase-estimation-based preparation of Hamiltonian eigenstates with exactly zero error by using the recently introduced qubitization procedure.
Practical quantum computing will require error rates well below those achievable with physical qubits. Quantum error correction [1,2] offers a path to algorithmically relevant error rates by encoding logical qubits within many physical qubits, for which increasing the number of physical qubits enhances protection against physical errors. However, introducing more qubits also increases the number of error sources, so the density of errors must be sufficiently low for logical performance to improve with increasing code size. Here we report the measurement of logical qubit performance scaling across several code sizes, and demonstrate that our system of superconducting qubits has sufficient performance to overcome the additional errors from increasing qubit number. We find that our distance-5 surface code logical qubit modestly outperforms an ensemble of distance-3 logical qubits on average, in terms of both logical error probability over 25 cycles and logical error per cycle ((2.914 ± 0.016)% compared to (3.028 ± 0.023)%). To investigate damaging, low-probability error sources, we run a distance-25 repetition code and observe a 1.7 × 10⁻⁶ logical error per cycle floor set by a single high-energy event (1.6 × 10⁻⁷ excluding this event). We accurately model our experiment, extracting error budgets that highlight the biggest challenges for future systems. These results mark an experimental demonstration in which quantum error correction begins to improve performance with increasing qubit number, illuminating the path to reaching the logical error rates required for computation.
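A rough way to read these numbers (a standard heuristic scaling model, not the paper's full analysis) is the exponential suppression ansatz ε_d ≈ A / Λ^((d+1)/2), under which the ratio of the reported distance-3 and distance-5 per-cycle errors directly gives the suppression factor Λ; the extrapolation target below is illustrative:

```python
import math

# Heuristic surface-code scaling: logical error per cycle
# eps_d ≈ A / Lambda**((d+1)/2), so eps_3 / eps_5 = Lambda.
# Plugging in the reported per-cycle errors gives a Lambda only slightly
# above 1, which is why the d=5 improvement over d=3 is modest.
eps_3, eps_5 = 0.03028, 0.02914    # reported logical error per cycle
Lam = eps_3 / eps_5
print(round(Lam, 3))

def distance_for(target, eps3=eps_3, lam=Lam):
    # Invert eps_d = eps_3 * lam**(-(d-3)/2) and round up to an odd distance.
    d = 3 + 2 * math.log(eps3 / target) / math.log(lam)
    return 2 * math.ceil((d - 3) / 2) + 3

# At this Lambda, reaching an illustrative 1e-6 per-cycle target would need
# an impractically large code distance, hence the need to grow Lambda itself.
print(distance_for(1e-6))
```

This is the quantitative sense in which the density of physical errors, not just the qubit count, sets whether scaling up helps.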
We provide a general method for efficiently simulating time-dependent Hamiltonian dynamics on a circuit-model-based quantum computer. Our approach is based on approximating the truncated Dyson series of the evolution operator, extending the earlier proposal by Berry et al. [Phys. Rev. Lett. 114, 090502 (2015)] to evolution generated by explicitly time-dependent Hamiltonians. Two alternative strategies are proposed to implement time ordering while exploiting the superposition principle for sampling the Hamiltonian at different times. The resource cost of our simulation algorithm retains the optimal logarithmic dependence on the inverse of the desired precision.

I. INTRODUCTION

Simulation of physical systems is envisioned to be one of the main applications for quantum computers [1]. Effective modeling of the dynamics and the generated time evolution is crucial to a deeper understanding of many-body systems, spin models, and quantum chemistry [2], and may thus have significant implications for many areas of chemistry and materials science. Simulation of the intrinsic Hamiltonian evolution of quantum systems was the first potential use of quantum computers, suggested by Feynman [3] in 1982. Quantum simulation algorithms can model the evolution of a physical system with a complexity logarithmic in the dimension of the Hilbert space [4] (i.e., polynomial in the number of particles), unlike classical algorithms, whose complexity is typically polynomial in the dimension, making simulations of practically interesting systems intractable for classical computers.

The first quantum simulation algorithm was proposed by Lloyd [5] in 1996. There have been numerous advances since then providing improved performance [6-15]. One advance was to provide complexity that scales logarithmically in the error and is nearly optimal in all other parameters [13].
Further improvements were provided by the quantum signal processing methodology [14,16] as well as qubitization [15], which achieve optimal query complexity.

An important case is that of time-dependent Hamiltonians. Efficient simulation of time-dependent Hamiltonians would allow us to devise better quantum control schemes [17] and describe transition states of chemical reactions [18]. Furthermore, simulation of dynamics generated by time-dependent Hamiltonians is a key component for implementing adiabatic algorithms [19] and the quantum approximate optimization algorithm [20] in a gate-based quantum circuit architecture.

The most recent advances in quantum simulation algorithms are for time-independent Hamiltonians. Techniques for simulating time-dependent Hamiltonians based on the Lie-Trotter-Suzuki decomposition were developed in [7,9], but their complexity scales polynomially with the error. More recent advances providing complexity logarithmic in the error [11,13] mention that their techniques can be generalized to time-dependent scenarios, but do not analyze this case. The most recent algorithms [14,15] are not directly applicable to the time-dependent case. H...
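The effect of truncating the Dyson series can be seen classically in a single-qubit toy model (this sketch is not the paper's algorithm, which implements the truncated series coherently on a quantum computer; here we only verify the truncation-error scaling numerically). The Hamiltonian H(t), step counts, and evolution times are illustrative:

```python
import math

# First-order truncated Dyson series U ≈ I - i ∫_0^T H(t) dt for a toy
# time-dependent Hamiltonian H(t) = cos(t) X + sin(t) Y, compared against a
# finely time-stepped reference evolution. Truncating the time-ordered
# series at first order leaves an O(T^2) error, so halving T should shrink
# the error by roughly 4x.
def H(t):
    return [[0, complex(math.cos(t), -math.sin(t))],
            [complex(math.cos(t), math.sin(t)), 0]]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_add(A, B, ca=1, cb=1):
    return [[ca * A[i][j] + cb * B[i][j] for j in range(2)] for i in range(2)]

I2 = [[1, 0], [0, 1]]

def reference_U(T, steps=4000):
    # Many tiny steps with a 2nd-order local expansion of exp(-i H dt),
    # sampling H at the midpoint of each step.
    dt = T / steps
    U = I2
    for k in range(steps):
        Hk = H((k + 0.5) * dt)
        step = mat_add(mat_add(I2, Hk, 1, -1j * dt),
                       mat_mul(Hk, Hk), 1, -dt * dt / 2)
        U = mat_mul(step, U)
    return U

def dyson1_U(T, steps=4000):
    # U ≈ I - i ∫_0^T H(t) dt (Riemann sum; no ordering needed at 1st order).
    dt = T / steps
    integral = [[0, 0], [0, 0]]
    for k in range(steps):
        integral = mat_add(integral, H((k + 0.5) * dt), 1, dt)
    return mat_add(I2, integral, 1, -1j)

def err(T):
    U, V = reference_U(T), dyson1_U(T)
    return max(abs(U[i][j] - V[i][j]) for i in range(2) for j in range(2))

r = err(0.2) / err(0.1)
print(3.0 < r < 5.0)               # consistent with O(T^2) truncation error
```

Keeping the error logarithmically small therefore requires summing the series to order K ~ log(1/ε)/log log(1/ε), which is what the quantum algorithm's superposition-based time sampling makes affordable.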