We establish methods for quantum state tomography based on compressed sensing. These methods are specialized for quantum states that are fairly pure, and they offer a significant performance improvement on large quantum systems. In particular, they are able to reconstruct an unknown density matrix of dimension d and rank r using O(rd log^2 d) measurement settings, compared to standard methods that require d^2 settings. Our methods have several features that make them amenable to experimental implementation: they require only simple Pauli measurements, use fast convex optimization, are stable against noise, and can be applied to states that are only approximately low-rank. The acquired data can be used to certify that the state is indeed close to pure, so no a priori assumptions are needed. We present both theoretical bounds and numerical simulations.

The tasks of reconstructing the quantum states and processes produced by physical systems, known respectively as quantum state and process tomography [1], are of increasing importance in physics and especially in quantum information science. Tomography has been used to characterize the quantum state of trapped ions [2] and an optical entangling gate [3], among many other implementations. But a fundamental difficulty in performing tomography on many-body systems is the exponential growth of the state-space dimension. For example, to get a maximum-likelihood estimate of a quantum state of 8 ions, Ref. [2] required hundreds of thousands of measurements and weeks of post-processing.

Still, one might hope to overcome this obstacle, because the vast majority of quantum states are not of physical interest. Rather, one is often interested in states with special properties: pure states, states with particular symmetries, ground states of local Hamiltonians, etc., and tomography might be more efficient in such special cases [4].

In particular, consider pure or nearly pure quantum states, i.e., states with low entropy.
More precisely, consider a quantum state that is essentially supported on an r-dimensional space, meaning the density matrix is close (in a given norm) to a matrix of rank r, where r is small. Such states arise in very common physical settings, e.g. a pure state subject to a local noise process [20].

A standard implementation of tomography [5,6] would use d^2 or more measurement settings, where d = 2^n for an n-qubit system. But a simple parameter-counting argument suggests that O(rd) settings could possibly suffice, a significant improvement. However, it is not clear how to achieve this performance in practice, i.e., how to choose these measurements, or how to efficiently reconstruct the density matrix. For instance, the problem of finding a minimum-rank matrix subject to linear constraints is NP-hard in general [7].

In addition to a reduction in experimental complexity, one might hope that a post-processing algorithm which takes as input only O(rd) ≪ d^2 numbers could be tuned to run considerably faster than standard methods. Since the output of the procedure is a low...
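To make the notion of a "measurement setting" concrete, here is a minimal numpy sketch (our own illustration, not taken from the paper): each setting is a tensor product of single-qubit Pauli operators, and the ideal data are the expectation values Tr(ρP) for a randomly chosen subset of settings.

```python
import itertools
import numpy as np

# Single-qubit Paulis: the building blocks of each measurement setting.
PAULIS = {
    "I": np.eye(2, dtype=complex),
    "X": np.array([[0, 1], [1, 0]], dtype=complex),
    "Y": np.array([[0, -1j], [1j, 0]], dtype=complex),
    "Z": np.array([[1, 0], [0, -1]], dtype=complex),
}

def pauli_observable(label):
    """Tensor product of single-qubit Paulis, e.g. 'XZ' -> X (x) Z."""
    op = np.array([[1.0 + 0j]])
    for ch in label:
        op = np.kron(op, PAULIS[ch])
    return op

def expectation(rho, label):
    """Ideal (noiseless) expectation value Tr(rho P) for Pauli string `label`."""
    return np.real(np.trace(rho @ pauli_observable(label)))

# Example: a pure (rank-1) 2-qubit Bell state, so d = 4 and r = 1.
psi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
rho = np.outer(psi, psi.conj())

rng = np.random.default_rng(0)
all_labels = ["".join(s) for s in itertools.product("IXYZ", repeat=2)]
# Sample a random subset of the 16 settings, as compressed tomography does.
settings = rng.choice(all_labels, size=6, replace=False)
data = {s: expectation(rho, s) for s in settings}
print(data)
```

For this Bell state the informative settings are the correlated ones, e.g. ⟨XX⟩ = ⟨ZZ⟩ = 1 and ⟨YY⟩ = -1, while single-qubit settings such as ZI give 0.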
Quantum state tomography, deducing quantum states from measured data, is the gold standard for verification and benchmarking of quantum devices. It has been realized in systems with few components, but for larger systems it becomes infeasible because the number of measurements and the amount of computation required to process them grow exponentially in the system size. Here, we present two tomography schemes that scale much more favourably with system size than direct tomography. One of them requires unitary operations on a constant number of subsystems, whereas the other requires only local measurements together with more elaborate post-processing. Both rely on only a linear number of experimental operations and post-processing that is polynomial in the system size. These schemes can be applied to a wide range of quantum states, in particular those that are well approximated by matrix product states. The accuracy of the reconstructed states can be rigorously certified without any a priori assumptions.
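For readers unfamiliar with the matrix-product-state (MPS) structure these schemes exploit, the following numpy sketch (our illustration, not the paper's reconstruction algorithm) factors a state vector into an MPS by sequential SVDs, keeping at most `chi` singular values at each cut. States like GHZ are described exactly with tiny bond dimension, which is what makes the linear scaling possible.

```python
import numpy as np

def to_mps(psi, n, chi):
    """Decompose an n-qubit state vector into a matrix product state
    by sequential SVDs, keeping at most `chi` singular values per cut."""
    tensors = []
    rest = psi.reshape(1, -1)                 # left bond dimension 1
    for _ in range(n - 1):
        chi_l = rest.shape[0]
        rest = rest.reshape(chi_l * 2, -1)    # split off one physical index
        u, s, vh = np.linalg.svd(rest, full_matrices=False)
        k = min(chi, int(np.sum(s > 1e-12)))  # truncate the bond
        tensors.append(u[:, :k].reshape(chi_l, 2, k))
        rest = s[:k, None] * vh[:k, :]
    tensors.append(rest.reshape(rest.shape[0], 2, 1))
    return tensors

def from_mps(tensors):
    """Contract the MPS back into a dense state vector."""
    out = tensors[0]
    for t in tensors[1:]:
        out = np.tensordot(out, t, axes=([-1], [0]))
    return out.reshape(-1)

# A 4-qubit GHZ state needs bond dimension only 2.
n = 4
ghz = np.zeros(2**n)
ghz[0] = ghz[-1] = 1 / np.sqrt(2)
mps = to_mps(ghz, n, chi=2)
print(np.allclose(from_mps(mps), ghz))   # exact reconstruction
```

The key point: the MPS stores O(n · chi^2) numbers instead of 2^n amplitudes, so both the data and the post-processing can scale polynomially when chi is small.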
We describe a simple method for certifying that an experimental device prepares a desired quantum state ρ. Our method is applicable to any pure state ρ, and it provides an estimate of the fidelity between ρ and the actual (arbitrary) state σ in the lab, up to a constant additive error. The method requires measuring only a constant number of Pauli expectation values, selected at random according to an importance-weighting rule. Our method is faster than full tomography by a factor of d, the dimension of the state space, and extends easily and naturally to quantum channels.

DOI: 10.1103/PhysRevLett.106.230501 PACS numbers: 03.67.Ac, 03.65.Wj

In recent years there has been substantial progress in preparing many-body entangled quantum states in the laboratory [1]. A key step in such experiments is to verify that the state of the system is the desired one. This can be done using quantum state tomography, or techniques such as entanglement witnesses [2]. However, in many cases these solutions are not fully satisfactory. Tomography gives complete information about the state, but it is very resource-intensive and has difficulty scaling to large systems. Entanglement witnesses can be much easier to implement, but are not a generic solution, since known constructions only work for special quantum states.

Here we propose a new method, direct fidelity estimation, that is much faster than tomography, is applicable to a large class of quantum states, and requires minimal experimental resources. Let us first describe the setting of the problem. Consider a system of n qubits, with Hilbert space dimension d = 2^n, and let ρ be the desired state, i.e., the state we hope to accurately prepare. We make two basic assumptions. First, we assume that ρ is pure.
However, we do not assume any additional structure or symmetry, so our method goes beyond previous work [3,4] to encompass nearly all of the states of interest in experimental quantum information science (e.g., the Greenberger-Horne-Zeilinger (GHZ) and W states, stabilizer states, cluster states, matrix product states, projected entangled pair states, etc.) in a unified framework. Second, we assume that we can measure n-qubit Pauli observables, that is, tensor products of single-qubit Pauli operators; we do not need to perform any other operations. Thus our method is applicable to any system that is capable of single-qubit gates and readout, without needing to rely on 2-qubit gates or entangled measurements.

Our method works by measuring a random subset of Pauli observables chosen according to an "importance-weighting" rule. Roughly, we select Pauli operators that are most likely to detect deviations from the desired state ρ. We use the resulting measurement statistics to estimate the fidelity F(ρ, σ), where σ is the actual state in the lab. Surprisingly, although there are 4^n distinct Pauli operators, we only need to sample a constant number of them to estimate F(ρ, σ) up to a constant additive error, for arbitrary σ. That is, for every possible state σ, with high probability over the choice...
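The importance-weighting idea can be made concrete. For a pure state ρ, the fidelity can be written F(ρ, σ) = Σ_k χ_ρ(k) χ_σ(k), where χ(k) = Tr(W_k ·)/√d runs over the Pauli operators W_k; sampling k with probability χ_ρ(k)^2 (these sum to 1 for pure ρ) and averaging χ_σ(k)/χ_ρ(k) gives an unbiased fidelity estimate. The numpy toy below uses idealized, noiseless expectation values; the specific states and sample count are our own choices for illustration.

```python
import itertools
import numpy as np

P1 = {"I": np.eye(2), "X": np.array([[0, 1], [1, 0]]),
      "Y": np.array([[0, -1j], [1j, 0]]), "Z": np.diag([1.0, -1.0])}

def pauli(label):
    op = np.array([[1.0 + 0j]])
    for ch in label:
        op = np.kron(op, P1[ch])
    return op

n, d = 2, 4
labels = ["".join(s) for s in itertools.product("IXYZ", repeat=n)]

# Desired pure state rho (a Bell state) and a noisy lab state sigma.
psi = np.array([1, 0, 0, 1]) / np.sqrt(2)
rho = np.outer(psi, psi)
sigma = 0.9 * rho + 0.1 * np.eye(d) / d          # depolarized version

chi_rho = np.array([np.trace(rho @ pauli(l)).real / np.sqrt(d) for l in labels])
chi_sig = np.array([np.trace(sigma @ pauli(l)).real / np.sqrt(d) for l in labels])

# Importance weighting: sample Pauli k with probability chi_rho(k)^2;
# each sample contributes chi_sigma(k) / chi_rho(k), whose mean is F(rho, sigma).
rng = np.random.default_rng(1)
pr = chi_rho**2
pr = pr / pr.sum()
ks = rng.choice(len(labels), size=500, p=pr)
estimate = np.mean(chi_sig[ks] / chi_rho[ks])
exact = np.trace(rho @ sigma).real
print(estimate, exact)   # estimate close to exact = 0.925
```

Note the constant sample count: 500 samples suffice here regardless of n, matching the abstract's claim of constant additive error from a constant number of sampled Paulis (per-Pauli measurement statistics are idealized away in this sketch).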
Intuitively, if a density operator has small rank, then it should be easier to estimate from experimental data, since in this case only a few eigenvectors need to be learned. We prove two complementary results that confirm this intuition. Firstly, we show that a low-rank density matrix can be estimated using fewer copies of the state, i.e. the sample complexity of tomography decreases with the rank. Secondly, we show that unknown low-rank states can be reconstructed from an incomplete set of measurements, using techniques from compressed sensing and matrix completion. These techniques use simple Pauli measurements, and their output can be certified without making any assumptions about the unknown state. In this paper, we present a new theoretical analysis of compressed tomography, based on the restricted isometry property for low-rank matrices. Using these tools, we obtain near-optimal error bounds for the realistic situation where the data contain noise due to finite statistics, and the density matrix is full-rank with decaying eigenvalues. We also obtain upper bounds on the sample complexity of compressed tomography, and almost-matching lower bounds on the sample complexity of any procedure using adaptive sequences of Pauli measurements. Using numerical simulations, we...

Recently, a new approach to tomography was proposed: compressed quantum tomography, based on techniques from compressed sensing [17,18]. The basic idea is to concentrate on states that are well approximated by density matrices of rank r ≪ d. This approach can be applied to many realistic experimental situations, where the ideal state of the system is pure, and physical constraints (e.g. low temperature or the locality of interactions) ensure that the actual (noisy) state still has low entropy. This approach is convenient because it does not require detailed knowledge about the system.
However, note that when such knowledge is available, one can use alternative formulations of compressed tomography, with different notions of sparsity, to further reduce the dimensionality of the problem [19]. We will compare these methods in section 6.2.

The main challenge in compressed tomography is how to exploit this low-rank structure when one does not know the subspace on which the state is supported. Consider the example of a pure quantum state. Since pure states are specified by only O(d) numbers, it seems plausible that one could be reconstructed after measuring only O(d) observables, compared with O(d^2) for a general mixed state. While this intuition is indeed correct [20-23], it is a challenge to devise a practical tomography scheme that takes advantage of this. In particular, one is restricted to those measurements that can be easily performed in the laboratory; furthermore, one then has to find a pure state consistent with the measured data [24], preferably by some procedure that is computationally efficient (note that, in general, finding minimum-rank solutions is NP-hard, hence computationally intractable [25]).

Compressed tomography provides a soluti...
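As a toy illustration of recovering a low-rank state from incomplete Pauli data, the numpy sketch below uses iterative hard thresholding, a simple non-convex stand-in for the trace-norm convex programs discussed in the text; the example state, the omitted setting, and the step size are our own choices. Measuring every 2-qubit Pauli except ZZ leaves the state underdetermined as a general matrix, yet the rank-1 constraint pins it down.

```python
import itertools
import numpy as np

P1 = {"I": np.eye(2, dtype=complex),
      "X": np.array([[0, 1], [1, 0]], dtype=complex),
      "Y": np.array([[0, -1j], [1j, 0]], dtype=complex),
      "Z": np.diag([1.0 + 0j, -1.0])}

def pauli(label):
    op = np.array([[1.0 + 0j]])
    for ch in label:
        op = np.kron(op, P1[ch])
    return op

d = 4
# Rank-1 target: a 2-qubit Bell state.
psi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
rho = np.outer(psi, psi.conj())

# Incomplete data: every Pauli setting except 'ZZ'.
labels = ["".join(s) for s in itertools.product("IXYZ", repeat=2)]
settings = [l for l in labels if l != "ZZ"]
data = {l: np.trace(rho @ pauli(l)).real for l in settings}

def truncate_rank1(m):
    """Project a Hermitian matrix onto its top eigenpair (hard thresholding)."""
    w, v = np.linalg.eigh(m)
    top = v[:, -1:]
    return w[-1] * (top @ top.conj().T)

# Iterative hard thresholding: gradient step on the data misfit,
# then truncate back to rank 1.
X = np.zeros((d, d), dtype=complex)
for _ in range(60):
    grad = sum((np.trace(X @ pauli(l)).real - data[l]) * pauli(l)
               for l in settings)
    X = truncate_rank1(X - grad / d)

print(np.linalg.norm(X - rho))   # ~0: the missing setting is inferred
```

This is only a proof of concept on one hand-picked instance; the guarantees cited in the text (restricted isometry, certified error bounds) belong to the convex formulations, not to this sketch.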
All comments are subject to release under the Freedom of Information Act (FOIA).

Reports on Computer Systems Technology

The Information Technology Laboratory (ITL) at the National Institute of Standards and Technology (NIST) promotes the U.S. economy and public welfare by providing technical leadership for the Nation's measurement and standards infrastructure. ITL develops tests, test methods, reference data, proof-of-concept implementations, and technical analyses to advance the development and productive use of information technology. ITL's responsibilities include the development of management, administrative, technical, and physical standards and guidelines for the cost-effective security and privacy of other than national security-related information in federal information systems.

Abstract

In recent years, there has been a substantial amount of research on quantum computers: machines that exploit quantum mechanical phenomena to solve mathematical problems that are difficult or intractable for conventional computers. If large-scale quantum computers are ever built, they will be able to break many of the public-key cryptosystems currently in use. This would seriously compromise the confidentiality and integrity of digital communications on the Internet and elsewhere. The goal of post-quantum cryptography (also called quantum-resistant cryptography) is to develop cryptographic systems that are secure against both quantum and classical computers, and can interoperate with existing communications protocols and networks. This Internal Report shares the National Institute of Standards and Technology (NIST)'s current understanding about the status of quantum computing and post-quantum cryptography, and outlines NIST's initial plan to move forward in this space. The report also recognizes the challenge of moving to new cryptographic infrastructures and therefore emphasizes the need for agencies to focus on crypto agility.
From dice to modern electronic circuits, there have been many attempts to build better devices to generate random numbers. Randomness is fundamental to security and cryptographic systems and to safeguarding privacy. A key challenge with random-number generators is that it is hard to ensure that their outputs are unpredictable. For a random-number generator based on a physical process, such as a noisy classical system or an elementary quantum measurement, a detailed model that describes the underlying physics is necessary to assert unpredictability. Imperfections in the model compromise the integrity of the device. However, it is possible to exploit the phenomenon of quantum non-locality with a loophole-free Bell test to build a random-number generator that can produce output that is unpredictable to any adversary that is limited only by general physical principles, such as special relativity. With recent technological developments, it is now possible to carry out such a loophole-free Bell test. Here we present certified randomness obtained from a photonic Bell experiment and extract 1,024 random bits that are uniformly distributed to within 10^-12. These random bits could not have been predicted according to any physical theory that prohibits faster-than-light (superluminal) signalling and that allows independent measurement choices. To certify and quantify the randomness, we describe a protocol that is optimized for devices that are characterized by a low per-trial violation of Bell inequalities. Future random-number generators based on loophole-free Bell tests may have a role in increasing the security and trust of our cryptographic systems and infrastructure.
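The per-trial randomness certified by a Bell test is commonly quantified through the CHSH value S. As an idealized, noiseless sketch (our own illustration, not the paper's protocol), the code below computes S for a maximally entangled polarization pair at the standard measurement angles and applies a min-entropy bound of the form 1 - log2(1 + sqrt(2 - S^2/4)), in the style of Pironio et al.; real devices with low per-trial violation, as in the text, certify far less than 1 bit per trial.

```python
import numpy as np

def correlator(a, b):
    """Polarization correlation E(a, b) = cos(2(a - b)) for a maximally
    entangled photon pair measured at analyzer angles a, b (ideal model)."""
    return np.cos(2 * (a - b))

# Standard CHSH angles: a in {0, pi/4}, b in {pi/8, 3*pi/8}.
a0, a1 = 0.0, np.pi / 4
b0, b1 = np.pi / 8, 3 * np.pi / 8
S = abs(correlator(a0, b0) - correlator(a0, b1)
        + correlator(a1, b0) + correlator(a1, b1))

def min_entropy_rate(S):
    """Certified randomness per trial, 1 - log2(1 + sqrt(2 - S^2/4)),
    for CHSH values 2 < S <= 2*sqrt(2); clipped to avoid tiny negatives."""
    return 1 - np.log2(1 + np.sqrt(max(0.0, 2 - S**2 / 4)))

print(S, min_entropy_rate(S))   # S close to 2*sqrt(2), rate close to 1 bit
```

At the classical boundary S = 2 the bound gives zero certified bits, and at the quantum maximum S = 2√2 it gives one full bit per trial, which is the sanity check the sketch verifies.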
We study the computational complexity of the N-representability problem in quantum chemistry. We show that this problem is quantum Merlin-Arthur (QMA) complete, the quantum generalization of NP-complete. Our proof uses a simple mapping from spin systems to fermionic systems, as well as a convex optimization technique that reduces the problem of finding ground states to N-representability.