Many physical implementations of qubits, including ion traps, optical lattices and linear optics, suffer from loss. A nonzero probability of irretrievably losing a qubit can be a substantial obstacle to fault-tolerant methods of processing quantum information, requiring new techniques to safeguard against loss that introduce an additional overhead that depends upon the loss rate. Here we present a scalable and platform-independent protocol for estimating the average loss rate (averaged over all input states) resulting from an arbitrary Markovian noise process, as well as an independent estimate of detector efficiency. Moreover, we show that our protocol gives an additional constraint on parameters estimated via randomized benchmarking, which improves the reliability of the estimated error rate and provides a new indicator of non-Markovian signatures in the experimental data. We also derive a bound on the state-dependent loss rate in terms of the average loss rate.

PACS numbers: 03.65.Aa, 03.65.Wj, 03.65.Yz, 03.67.Lx, 03.67.Pp

In order to build practical devices for processing and transmitting quantum information, the rate of decoherence and other errors must be below certain fault-tolerance thresholds. In particular, many experimental implementations of qubits, such as ion traps [1,2], optical lattices [3] and linear optics [4], suffer from irretrievable loss; that is, there is a nonzero probability of the qubit vanishing (as opposed to leaking to other energy levels). Such loss of normalization can be a substantial obstacle to many quantum information protocols, requiring different error-correction techniques to achieve fault-tolerance [4-6]. For example, the surface code may not be used directly if there is any probability of losing a qubit, while for topological cluster states, loss rates of less than 1% are required to avoid impractical overheads [6].

However, there are two substantial challenges in characterizing loss.
Firstly, the loss rate may depend on the state of the qubit, such as when a qubit is encoded in a superposition of vacuum and single-photon states. Secondly, the loss due to imperfect operations has to be distinguished from the inefficiency of the detector [7]. Quantum process tomography [8,9] could be used to characterize loss; however, it is inefficient in the number of qubits, is sensitive to state-preparation and measurement (SPAM) errors [10], and so cannot distinguish between loss due to imperfect operations and inefficient detectors.

In this Letter, we present a robust and efficient protocol that characterizes the loss rate due to imperfect operations, averaged over input states. Our protocol is platform-independent, simple to implement and analyze, and assumes only that the noise is Markovian. We begin by defining survival rates, then present our protocol and derive the associated analytical decay curve under the assumption of Markovian noise. We then prove that the average loss rate estimated via our protocol provides a practical bound on the loss rate for any state. Since our ...
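The separation of operational loss from detector inefficiency can be illustrated with a small numerical sketch. The single-exponential model $S_m = \eta\,(1-\gamma)^m$ below is an assumption standing in for the decay curve derived in the text; $\gamma$ plays the role of the average per-gate loss rate and $\eta$ the detector efficiency, and all numerical values are hypothetical.

```python
# Sketch: fitting a survival-probability decay curve to jointly estimate
# a per-gate loss rate (gamma) and a detector efficiency (eta).
# Model S_m = eta * (1 - gamma)^m is an assumed stand-in for the paper's
# analytical decay curve; gamma_true, eta_true are hypothetical values.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
gamma_true, eta_true = 0.02, 0.9       # hypothetical ground truth
lengths = np.arange(1, 101, 5)         # random-sequence lengths m

def survival(m, gamma, eta):
    return eta * (1.0 - gamma) ** m

# Simulate finite-shot estimates of the survival probability at each length.
shots = 2000
data = rng.binomial(shots, survival(lengths, gamma_true, eta_true)) / shots

# Fit both parameters at once: the intercept at m = 0 isolates eta,
# while the decay constant isolates the average loss per gate.
(gamma_fit, eta_fit), _ = curve_fit(survival, lengths, data, p0=[0.05, 0.8])
print(gamma_fit, eta_fit)
```

Because SPAM-type effects (here, $\eta$) rescale the curve while the per-gate loss sets its decay constant, the fit separates the two, which is the sense in which such a protocol is robust to detector inefficiency.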
Leakage errors arise when the quantum state leaks out of some subspace of interest, for example, the two-level subspace of a multi-level system defining a computational 'qubit', the logical code space of a quantum error-correcting code, or a decoherence-free subspace. Leakage errors pose a distinct challenge to quantum control relative to the more well-studied decoherence errors and can be a limiting factor in achieving fault-tolerant quantum computation. Here we present a scalable and robust randomized benchmarking protocol for quickly estimating the leakage rate due to an arbitrary Markovian noise process on a larger system. We illustrate the reliability of the protocol through numerical simulations.

An important error mechanism in many experimental implementations of quantum information is leakage, that is, transitions into and out of the Hilbert space under consideration (e.g., an electron excitation to another energy level). Subsequent transitions back into the Hilbert space introduce a memory effect, making leakage a fundamentally non-Markovian process. Such leakage errors can be a substantial obstacle to fault-tolerant computation [1-3].

There are platform-dependent methods for characterizing leakage in many of the leading experimental approaches to quantum computation, such as ion-trap qubits [4], superconducting qubits [5,6] and quantum dots [7]. However, these approaches all have disadvantages, such as being platform-dependent, scaling exponentially in the number of qubits, being sensitive to state-preparation and measurement (SPAM) errors, or assuming a specific error model.

Randomized benchmarking (RB) [8-10] has been specifically developed to avoid all of these pitfalls, at the cost of obtaining only partial information, namely the average gate fidelity, about the errors in the absence of leakage.
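The transitions into and out of the computational subspace described above can be sketched with a minimal rate-equation model. The update rule $p_{m+1} = (1-L)\,p_m + S\,(1-p_m)$, with a per-gate leakage rate $L$ and a return ("seepage") rate $S$, is an illustrative assumption, not the paper's model; both rate values are hypothetical.

```python
# Sketch: leakage into and out of a computational subspace as a classical
# rate equation, and recovery of the rates from the resulting decay.
# p_m = population inside the subspace after m gates; the closed form is
# p_m = A + B * lam^m with A = S/(L+S) and lam = 1 - L - S.
import numpy as np
from scipy.optimize import curve_fit

L_true, S_true = 0.01, 0.04            # hypothetical leakage / seepage rates
m_max = 200
p = np.empty(m_max + 1)
p[0] = 1.0                             # start fully inside the subspace
for m in range(m_max):
    p[m + 1] = (1 - L_true) * p[m] + S_true * (1 - p[m])

def model(m, A, B, lam):
    return A + B * lam ** m

ms = np.arange(m_max + 1)
(A, B, lam), _ = curve_fit(model, ms, p, p0=[0.5, 0.5, 0.9])

# Invert A = S/(L+S) and lam = 1 - L - S to recover both rates.
L_fit = (1 - lam) * (1 - A)
S_fit = (1 - lam) * A
```

The nonzero asymptote $A$ is the signature that distinguishes leakage from loss: population equilibrates between the subspaces rather than decaying to zero.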
In the presence of leakage, the standard fidelity decay curve in RB breaks down [11], although the RB protocol can be modified to account for leakage errors [12].

We present a protocol that provides an estimate of the average leakage rate for coherent leakage over a given set of quantum gates. We consider computational and leakage spaces of arbitrary dimensions, so that our protocol can be applied to both physical and logical qudit systems. We demonstrate that our protocol produces reliable estimates of leakage rates through numerical simulations for specific, adversarial error models.

Note that after the protocol below first appeared online, an alternative heuristic protocol was presented in [13]. While the heuristic protocol applies to specific experimental scenarios, the current protocol is both fully rigorous and expressed in terms of platform-independent experimental capabilities.

Defining leakage rates

Many experimental implementations of logical $d_1$-level qudits (typically $d_1 = 2$, giving a qubit) are embedded in a $d$-level quantum system by taking only the first $d_1$ levels $|1\rangle, \ldots, |d_1\rangle$. Formally, we can consider a ...
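A leakage rate for a $d_1$-level subspace embedded in a $d$-level system can be sketched numerically: by linearity, averaging $1 - \mathrm{Tr}[P\,\mathcal{E}(|\psi\rangle\langle\psi|)]$ over Haar-random states $|\psi\rangle$ in the subspace reduces to evaluating the channel on the maximally mixed subspace state $P/d_1$. The coherent-leakage unitary and its angle below are hypothetical illustrations, not the paper's error model.

```python
# Sketch: average leakage rate out of the first d1 levels of a d-level
# system, for a channel given by a coherent rotation between the highest
# computational level and the leakage level. The angle theta is a
# hypothetical leakage strength.
import numpy as np

d, d1 = 3, 2
P = np.diag([1.0, 1.0, 0.0]).astype(complex)   # projector onto the qubit subspace

theta = 0.1                                    # coherent leakage |1> <-> |2>
U = np.eye(d, dtype=complex)
U[1, 1] = U[2, 2] = np.cos(theta)
U[1, 2] = -np.sin(theta)
U[2, 1] = np.sin(theta)

def channel(rho):
    return U @ rho @ U.conj().T

# Average over Haar-random subspace states = evaluate on P / d1.
rho_avg = P / d1
leak = 1.0 - np.trace(P @ channel(rho_avg)).real
```

For this rotation the result is $\sin^2\theta / 2$: only one of the two subspace levels couples to the leakage level, so the single-level leakage probability $\sin^2\theta$ is halved by the average over the subspace.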