Noise mechanisms in quantum systems can be broadly characterized as either coherent (i.e., unitary) or incoherent. For a fixed average error rate, coherent noise generally leads to a larger worst-case error than incoherent noise. We show that the coherence of a noise source can be quantified by its unitarity, which we relate to the change in purity averaged over input pure states. We then show that the unitarity can be estimated using a protocol based on randomized benchmarking that is efficient and robust to state-preparation and measurement errors. We also show that the unitarity provides a lower bound on the optimal achievable gate infidelity under a given noisy process.
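For concreteness, the unitarity is conventionally defined in the randomized-benchmarking literature as the output purity averaged over pure input states, with the identity component subtracted; the normalization below follows that convention and should be checked against the paper itself:

```latex
% Unitarity of a channel E acting on a d-dimensional system:
u(\mathcal{E}) \;=\; \frac{d}{d-1}\int d\psi\;
  \operatorname{Tr}\!\Big[\,\mathcal{E}\big(|\psi\rangle\langle\psi| - I/d\big)^{\dagger}\,
                           \mathcal{E}\big(|\psi\rangle\langle\psi| - I/d\big)\Big]
% where the integral is over the uniform (Haar) measure on pure states.
% Sanity check: for a depolarizing channel E(rho) = p*rho + (1-p)I/d,
% E(|psi><psi| - I/d) = p(|psi><psi| - I/d), giving u = p^2;
% u = 1 exactly when E is unitary.
```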
The performance requirements for fault-tolerant quantum computing are very stringent. Qubits must be manipulated, coupled, and measured with error rates well below 1% [1,2]. For semiconductor implementations, silicon quantum dot spin qubits have demonstrated average single-qubit Clifford gate error rates that approach this threshold [3-6], notably with error rates of 0.14% in isotopically enriched 28Si/SiGe devices [7]. This gate performance, together with high-fidelity two-qubit gates and measurements, is only known to meet the threshold for fault-tolerant quantum computing in some architectures when the noise is assumed to be incoherent, and still lower error rates are needed to reduce overhead. Here we show experimentally that pulse engineering techniques, widely used in magnetic resonance [8], improve average Clifford gate error rates for silicon quantum dot spin qubits to 0.043%, a factor-of-3 improvement on the previous best results for silicon quantum dot devices [7]. By including tomographically complete measurements in randomised benchmarking, we infer a higher-order feature of the noise called the unitarity, which measures the coherence of the noise. This in turn allows us to predict theoretically that average gate error rates as low as 0.026% may be achievable with further pulse improvements. These fidelities are ultimately limited by Markovian noise, which we attribute to charge noise emanating from the silicon device structure itself or its environment.

Randomised benchmarking [9-12] is the gold standard for quantifying the performance of quantum gates: it efficiently yields accurate estimates of the average gate fidelity in the high-accuracy regime, independent of state preparation and measurement (SPAM) errors. The standard method, however, is designed to provide only the average gate fidelity and no further details about the noise. To improve quantum gates further, one would also like diagnostic information about the character of the noise processes, e.g., its frequency spectrum, or whether it is primarily due to environmental couplings or control errors. Quantum tomography methods can provide such information but are in general inefficient and highly sensitive to SPAM errors. For these reasons, variants of randomised benchmarking have been developed that quantify higher-order noise features as well as the average gate fidelity [13-15].

As an early example of this approach, the randomised benchmarking data of Ref. [3], demonstrating average Clifford gate fidelities of 99.59% in SiMOS qubits, exhibited non-exponential decay features, which were subsequently attributed to low-frequency detuning noise in the system [16]. That is, randomised benchmarking of this device not only demonstrated its high performance but also provided details of the noise characteristics. These details in turn suggest a method to further reduce the infidelity: low-frequency noise is particularly amenable to pulse engineering techniques, which exploit the quasistatic nature of ...
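As context for how these headline numbers are related: standard randomised benchmarking fits the average survival probability against sequence length m to an exponential decay F(m) = A p^m + B, with average Clifford error r = (1 - p)(d - 1)/d, while purity-based benchmarking fits the sequence purity, which decays with the unitarity u, and the bound r >= (1 - sqrt(u))(d - 1)/d gives the coherence-limited error floor. Below is a minimal fitting sketch under those conventional single-qubit formulas; the synthetic data are chosen only to mirror the figures quoted above and are not the paper's data.

```python
# Hedged sketch: extracting the average error and the unitarity from
# RB-style decays. All numbers are synthetic and illustrative only.
import numpy as np
from scipy.optimize import curve_fit

d = 2  # Hilbert-space dimension of a single qubit

def decay(m, A, f, B):
    """Generic RB decay model: A * f**m + B versus sequence length m."""
    return A * f**m + B

lengths = np.arange(1, 1001, 50)

# Depolarizing parameter p chosen so r = (1 - p)(d - 1)/d = 0.043%.
p_true = 1 - 2 * 0.00043
survival = decay(lengths, 0.5, p_true, 0.5)

# Unitarity u chosen so the floor (1 - sqrt(u))(d - 1)/d is ~0.026%.
u_true = (1 - 2 * 0.00026) ** 2
purity = decay(lengths, 0.5, u_true, 0.5)

(A, p, B), _ = curve_fit(decay, lengths, survival, p0=[0.5, 0.999, 0.5])
(Au, u, Bu), _ = curve_fit(decay, lengths, purity, p0=[0.5, 0.998, 0.5])

r = (1 - p) * (d - 1) / d                 # average gate error from fidelity RB
r_floor = (1 - np.sqrt(u)) * (d - 1) / d  # lower bound on error given u
print(f"average Clifford error r      = {r:.4%}")        # ~0.043%
print(f"coherence-limited error floor = {r_floor:.4%}")  # ~0.026%
```

A gap between r and r_floor signals coherent noise that, in principle, further pulse engineering or unitary corrections could remove; when the noise is fully incoherent, u = p^2 and the two coincide.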
Genetic programming (GP) is not a field noted for the rigor of its benchmarking. Some of its benchmark problems are popular purely through historical contingency, and they can be criticized as too easy or as providing misleading information concerning real-world performance, but they persist largely because of inertia and the lack of good alternatives. Even where the problems themselves are impeccable, comparisons between studies are made more difficult by the lack of standardization. We argue that the definition of standard benchmarks is an essential step in the maturation of the field. We make several contributions towards this goal. We motivate the development of a benchmark suite and define its goals; we survey existing practice; we enumerate many candidate benchmarks; we report progress on reference implementations; and we set out a concrete plan for gathering feedback from the GP community that would, if adopted, lead to a standard set of benchmarks.