The complexity of quantum states has become a key quantity of interest across various subfields of physics, from quantum computing to the theory of black holes. The evolution of generic quantum systems can be modelled by considering a collection of qubits subjected to sequences of random unitary gates. Here we investigate how the complexity of these random quantum circuits increases by considering how to construct a unitary operation from Haar-random two-qubit quantum gates. Implementing the unitary operation exactly requires a minimal number of gates—this is the operation’s exact circuit complexity. We prove a conjecture that this complexity grows linearly, before saturating when the number of applied gates reaches a threshold that grows exponentially with the number of qubits. Our proof overcomes difficulties in establishing lower bounds for the exact circuit complexity by combining differential topology and elementary algebraic geometry with an inductive construction of Clifford circuits.
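The random-circuit model described above can be made concrete with a small numerical sketch. The following toy code (our own illustration, not the paper's construction) draws Haar-random two-qubit gates and applies them at random positions to an n-qubit statevector, which is the standard way such random circuits are simulated at small sizes:

```python
import numpy as np

def haar_unitary(dim, rng):
    """Draw a Haar-random unitary via QR decomposition of a complex Gaussian matrix."""
    z = (rng.standard_normal((dim, dim)) + 1j * rng.standard_normal((dim, dim))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))  # phase fix so the distribution is exactly Haar

def apply_two_qubit_gate(state, gate, i, j, n):
    """Apply a 4x4 unitary to qubits i and j of an n-qubit statevector."""
    psi = np.moveaxis(state.reshape([2] * n), (i, j), (0, 1))
    psi = np.tensordot(gate.reshape(2, 2, 2, 2), psi, axes=[[2, 3], [0, 1]])
    return np.moveaxis(psi, (0, 1), (i, j)).reshape(-1)

def random_circuit_state(n, depth, rng):
    """Start from |0...0> and apply `depth` Haar-random gates on random qubit pairs."""
    state = np.zeros(2**n, dtype=complex)
    state[0] = 1.0
    for _ in range(depth):
        i, j = rng.choice(n, size=2, replace=False)
        state = apply_two_qubit_gate(state, haar_unitary(4, rng), int(i), int(j), n)
    return state
```

The exact circuit complexity studied in the paper asks the converse question: given the resulting unitary, how many such two-qubit gates are needed to reproduce it exactly.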
An accurate calculation of the properties of quantum many-body systems is one of the most important yet intricate challenges of modern physics and computer science. In recent years, the tensor network ansatz has established itself as one of the most promising approaches, enabling strikingly efficient simulation of the static properties of one-dimensional systems and a wealth of numerical applications in condensed matter theory. In higher dimensions, however, a connection to the field of computational complexity theory has shown that the accurate normalization of the two-dimensional tensor networks called projected entangled pair states (PEPS) is #P-complete. An efficient algorithm for PEPS contraction would therefore allow one to solve exceedingly difficult combinatorial counting problems, which is considered highly unlikely. Given the importance of understanding two- and three-dimensional systems, the question currently remains: are the known constructions typical of states relevant for quantum many-body systems? In this work, we show that an accurate evaluation of the normalization or expectation values of PEPS is as hard for typical instances as for the special configurations of highest computational hardness. We discuss this structural property of average-case hardness in relation to current research on efficient algorithms for tensor network contraction, hinting at a wealth of possible further insights into the average-case hardness of important problems in quantum many-body theory.
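The contrast drawn above between one and two dimensions can be made concrete: contracting a one-dimensional tensor network (a matrix product state, MPS) to obtain its norm costs only polynomially many operations in the chain length and bond dimension, whereas the analogous exact contraction of a 2D PEPS is #P-complete in general. A minimal sketch of the efficient 1D case (conventions and function names are our own, not from the paper):

```python
import numpy as np

def mps_norm_squared(tensors):
    """Contract <psi|psi> for an MPS site by site.

    Each tensor has shape (Dl, d, Dr): left bond, physical leg, right bond.
    The running environment env has one ket and one bra bond index, so the
    cost per site is polynomial in the bond dimension -- unlike 2D PEPS,
    where no such efficient sweep exists in general.
    """
    env = np.ones((1, 1))
    for A in tensors:
        t = np.tensordot(env, A, axes=[0, 0])                    # (bra_bond, d, Dr)
        env = np.tensordot(t, A.conj(), axes=[[0, 1], [0, 1]])   # (Dr_ket, Dr_bra)
    return float(np.real(env[0, 0]))
```

For a short chain the result can be checked against building the full statevector explicitly, which is exactly what becomes infeasible at large system sizes.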
The dynamical structure factor is one of the experimental quantities crucial for scrutinizing the validity of microscopic descriptions of strongly correlated systems. Despite its long-standing importance, however, it is exceedingly difficult in generic cases to calculate it numerically while ensuring that the necessary approximations yield a correct result. Acknowledging this practical difficulty, we discuss in what way results on the hardness of classically tracking time evolution under local Hamiltonians are precisely inherited by dynamical structure factors and, hence, offer the same potential computational capabilities that dynamical quantum simulators do: we argue that practically accessible variants of the dynamical structure factor are bounded-error quantum polynomial time (BQP)-hard for general local Hamiltonians. Complementing these conceptual insights, we discuss a novel, readily available measurement setup allowing for the determination of the dynamical structure factor in different architectures, including arrays of ultra-cold atoms, trapped ions, Rydberg atoms, and superconducting qubits. Our results suggest that quantum simulations employing near-term noisy intermediate-scale quantum devices should allow for the observation of features of dynamical structure factors of correlated quantum matter, in the presence of experimental imperfections, for larger system sizes than is achievable by classical simulation.
Hardness in the SQ model is often taken as strong evidence for hardness in the sample model. In summary, we study in this work the following problems, which are stated more formally in Section 3:

Problem (PAC probabilistic modelling of quantum circuit Born machines, informal). Let C be the set of output distributions corresponding to a class of local quantum circuits. Given either sample-oracle or SQ-oracle access to some unknown distribution P ∈ C, output, with high probability, either (generative modelling) an efficient generator, or (density modelling) an efficient evaluator, for a distribution P′ which is sufficiently close to P.

If there exists either a sample- or computationally efficient algorithm which, with respect to either the sample oracle or the SQ oracle, solves the generative (density) modelling problem associated with a given set of distributions C, then we say that C is sample or computationally efficiently generator- (evaluator-) learnable within the relevant oracle model. We are particularly interested in this work in establishing the existence or non-existence of efficient quantum or classical learning algorithms for the output distributions of various classes of local quantum circuits, within both the sample and statistical query models.

Main results. Given this motivation and context, we provide two main results, which, stated informally, are as follows:

Result 1 (informal version of Corollary 1). The concept class consisting of the output distributions of super-logarithmic-depth nearest-neighbour Clifford circuits is not sample-efficiently PAC generator-learnable or evaluator-learnable in the statistical query model.

Result 2 (informal version of Theorem 2). The concept class consisting of the output distributions of nearest-neighbour Clifford circuits is both sample- and computationally efficiently classically PAC generator-learnable and evaluator-learnable in the sample model.
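The statistical query model invoked above can be illustrated with a toy simulation: instead of raw samples, the learner only receives expectation values of bounded query functions, each accurate up to a tolerance τ. The sketch below (our own illustration; the Hoeffding-style sample count is an assumption, not taken from the paper) simulates such an oracle from an underlying sampler:

```python
import numpy as np

def simulated_sq_oracle(sampler, phi, tau, rng):
    """Answer a statistical query phi (mapping outcomes to [-1, 1]) within tolerance tau.

    A real SQ oracle may return any value tau-close to E_P[phi]; here we simulate
    one by averaging enough samples that the empirical mean concentrates.
    """
    n_samples = int(np.ceil(4.0 / tau**2))  # Hoeffding-style count (illustrative)
    xs = [sampler(rng) for _ in range(n_samples)]
    return float(np.mean([phi(x) for x in xs]))

# Usage: a biased coin stands in for a Born distribution over bitstrings.
rng = np.random.default_rng(0)
coin = lambda r: int(r.random() < 0.75)          # Bernoulli(0.75)
estimate = simulated_sq_oracle(coin, lambda x: 2 * x - 1, 0.1, rng)
```

An SQ learner may only interact with P through such queries, which is why SQ-hardness (as in Result 1) rules out a strictly weaker mode of access than sample-based learning.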
Quantum complexity is emerging as a key property of many-body systems, including black holes, topological materials, and early quantum computers. A state's complexity quantifies the number of computational gates required to prepare the state from a simple tensor product. The greater a state's distance from maximal complexity, or "uncomplexity," the more useful the state is as input to a quantum computation. Separately, resource theories (simple models for agents subject to constraints) are burgeoning in quantum information theory. We unite the two domains, confirming Brown and Susskind's conjecture that a resource theory of uncomplexity can be defined. The allowed operations, fuzzy operations, are slightly random implementations of two-qubit gates chosen by an agent. We formalize two operational tasks, uncomplexity extraction and expenditure. Their optimal efficiencies depend on an entropy that we engineer to reflect complexity. We also present two monotones, uncomplexity measures that decline monotonically under fuzzy operations, in certain regimes. This work unleashes on many-body complexity the resource-theory toolkit from quantum information theory.