Surface codes reach high error thresholds when decoded with known algorithms, but the decoding time will likely exceed the available time budget, especially for near-term implementations. To decrease the decoding time, we reduce the decoding problem to a classification problem that a feedforward neural network can solve. We investigate quantum error correction and fault tolerance at small code distances using neural-network-based decoders, demonstrating that these networks can generalize to inputs not provided during training and can reach decoding performance similar to or better than that of previous algorithms. We conclude by discussing the time required by a feedforward neural network decoder in hardware.

I. INTRODUCTION

Quantum computing has emerged as a means to accelerate calculations using systems governed by quantum mechanics; such calculations are believed to take exponential time on classical computers. Initial applications where quantum computing will be useful are the simulation of quantum physics [1], cryptanalysis [2, 3], and unstructured search [4], and there is a growing set of other quantum algorithms [5]. Simple quantum algorithms have been shown to scale better than classical algorithms [6-8] for small test cases, though larger computers are required to solve real-world problems. The main obstacle to scalability is that the required quantum operations (state preparations, single- and two-qubit unitary gates, and measurements) are subject to external noise, so quantum algorithms cannot run with perfect fidelity. Quantum computers must therefore use active error correction [9] to achieve scalability, which in turn requires a classical co-processor to infer which corrections to make, given a stream of measurement results as input.
If this co-processor is slow, the performance of the quantum computer may be degraded (though recent results [10] suggest that this may be mitigated). The remainder of this paper is organized as follows. In Section II, we outline the relevant aspects of quantum error correction and fault tolerance. We discuss the need for a fast classical co-processor in Section III. In Section IV, we give a brief summary of existing techniques to perform decoding quickly, and follow this in Section V with the introduction of a new technique based on feedforward neural networks. We examine the accuracy of the proposed decoder in Section VI, and conclude by discussing its speed in Section VII.

II. QUANTUM ERROR CORRECTION

While it is often possible to decrease the amount of noise affecting a quantum operation using advanced control techniques [11, 12], their analog nature suggests that some imperfection will always remain. This has driven the development of algorithmic techniques to protect quantum states and computations from noise, called quantum error correction and fault tolerance, respectively.
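As a toy illustration of reducing decoding to classification (not the decoder studied in the paper above), a feedforward network can learn the syndrome-to-correction lookup table of the distance-3 bit-flip repetition code. Everything here — the choice of code, network size, and training schedule — is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(0)

# Syndrome -> correction lookup for the distance-3 bit-flip repetition code.
# Stabilizers Z0Z1 and Z1Z2 give a 2-bit syndrome; each syndrome points to one
# correction class (0 = no error, 1/2/3 = X on qubit 0/1/2).
syndromes = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
labels = np.array([0, 1, 2, 3])

# Tiny MLP, 2 -> 16 -> 4: tanh hidden layer, softmax output.
W1 = rng.normal(0.0, 0.5, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, (16, 4)); b2 = np.zeros(4)

def forward(x):
    h = np.tanh(x @ W1 + b1)
    z = h @ W2 + b2
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return h, e / e.sum(axis=1, keepdims=True)

onehot = np.eye(4)[labels]
lr = 0.5
for _ in range(5000):                    # full-batch gradient descent
    h, p = forward(syndromes)
    dz = (p - onehot) / len(labels)      # softmax cross-entropy gradient
    dW2, db2 = h.T @ dz, dz.sum(axis=0)
    dh = (dz @ W2.T) * (1.0 - h**2)
    dW1, db1 = syndromes.T @ dh, dh.sum(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

_, p = forward(syndromes)
print(p.argmax(axis=1))                  # the learned lookup table
```

For this trivially small code the network only memorizes a four-entry table; the interesting regime, as the paper argues, is larger distances where the table is too big to store but the network can still generalize from a training subset.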
In previous work, we proposed a method for leveraging efficient classical simulation algorithms to aid the analysis of large-scale fault-tolerant circuits implemented on hypothetical quantum information processors. Here, we extend those results by numerically studying the efficacy of this proposal as a tool for understanding the performance of an error-correction gadget implemented with fault models derived from physical simulations. Our approach is to approximate the arbitrary error maps that arise from realistic physical models with errors that are amenable to a particular classical simulation algorithm in an "honest" way; that is, such that we do not underestimate the faults introduced by our physical models. In all cases, our approximations provide an "honest representation" of the performance of the circuit composed of the original errors. This numerical evidence supports the use of our method as a way to understand the feasibility of an implementation of quantum information processing, given a characterization of the underlying physical processes, in experimentally accessible examples.
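One common way to make arbitrary error maps compatible with efficient classical simulation is the Pauli twirl, which keeps only the diagonal of the channel's χ matrix in the Pauli basis. The sketch below applies the plain twirl to an amplitude-damping channel; this is the standard twirl shown for context, not the "honest" approximation of the paper above, which additionally constrains the approximation never to underestimate the faults. The damping strength γ is an arbitrary illustrative value:

```python
import numpy as np

I = np.eye(2); X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]]); Z = np.diag([1.0, -1.0])
paulis = [I, X, Y, Z]

def pauli_twirl_probs(kraus):
    """Diagonal of the chi matrix: p_P = sum_a |tr(P K_a)/2|^2 for each Pauli P."""
    return [sum(abs(np.trace(P @ K) / 2) ** 2 for K in kraus) for P in paulis]

gamma = 0.1                                  # amplitude-damping strength (illustrative)
K0 = np.diag([1.0, np.sqrt(1 - gamma)])      # Kraus operators of amplitude damping
K1 = np.array([[0, np.sqrt(gamma)], [0, 0]])

pI, pX, pY, pZ = pauli_twirl_probs([K0, K1])
print(f"p_I={pI:.4f} p_X={pX:.4f} p_Y={pY:.4f} p_Z={pZ:.4f}")
```

For amplitude damping the twirl works out analytically to p_X = p_Y = γ/4 and p_Z = (1 − √(1−γ))²/4, so the resulting Pauli channel can be fed to a stabilizer-based simulator.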
The large-scale execution of quantum algorithms requires basic quantum operations to be implemented fault-tolerantly. The most popular technique for accomplishing this, using the devices that can be realized in the near term, uses stabilizer codes which can be embedded in a planar layout. The set of fault-tolerant operations which can be executed in these systems using unitary gates is typically very limited. This has driven the development of measurement-based schemes for performing logical operations in these codes, known as lattice surgery and code deformation. In parallel, gauge fixing has emerged as a measurement-based method for performing universal gate sets in subsystem stabilizer codes. In this work, we show that lattice surgery and code deformation can be expressed as special cases of gauge fixing, permitting a simple and rigorous test for fault tolerance together with simple guiding principles for the implementation of these operations. We demonstrate the accuracy of this method numerically with examples based on the surface code, some of which are novel.

Several such families of quantum error-correcting codes have been developed, including concatenated codes [5, 6], subsystem codes such as Bacon-Shor codes [7], and 2D topological codes. The most prominent 2D topological codes are surface codes [8] derived from Kitaev's toric code [9], which we will focus on in the remainder of this manuscript. 2D topological codes can be implemented using entangling gates which are local in two dimensions, allowing fault tolerance in near-term devices which have limited connectivity. In addition, 2D topological codes generally have high fault-tolerant memory thresholds, with the surface code having the highest, at ∼1% [10]. These advantages come at a cost, however. While other 2D topological codes permit logical single-qubit Clifford operations to be implemented transversally, the surface code does not.
In addition, the constraint that computation be carried out in a single plane does not permit two-qubit physical gates between physical qubits in different code blocks, precluding the two-qubit logical gates which, in principle, can be carried out transversally. These two restrictions have led to the design of measurement-based protocols for performing single- and two-qubit logical gates by making gradual changes to the underlying stabilizer code. Measurement-based protocols that implement single-qubit gates are typically called code deformation [11], and protocols that involve multiple logical qubits are usually called lattice surgery [12]. A separate measurement-based technique, called gauge fixing [13], can be applied to subsystem codes, which have operators that can be added to or removed from the stabilizer group as desired, the so-called gauge operators. During gauge fixing, the stabilizer generators of the subsystem code remain unchanged and can be used to detect and correct errors, so decoding is unaffected by gauge fixing. This is in contrast to code deformation and lattice surgery, where it is not a priori clear that decoding can proceed in this way.
Fault tolerance is a prerequisite for scalable quantum computing. Architectures based on 2D topological codes are effective for near-term implementations of fault tolerance. To obtain high performance with these architectures, we require a decoder which can adapt to the wide variety of error models present in experiments. The typical approach to decoding the surface code is to reduce it to minimum-weight perfect matching in a way that yields a suboptimal threshold error rate and is specialized to a specific error model. Recently, optimal threshold error rates for a variety of error models have been obtained by methods which do not use minimum-weight perfect matching, showing that such thresholds can be achieved in polynomial time. It is an open question whether these results can also be achieved by minimum-weight perfect matching. In this work, we use belief propagation and a novel algorithm for producing edge weights to increase the utility of minimum-weight perfect matching for decoding surface codes. This allows us to correct depolarizing errors using the rotated surface code, obtaining a threshold of 17.76 ± 0.02%. This is larger than the threshold achieved by previous matching-based decoders (14.88 ± 0.02%), though still below the known upper bound of ∼18.9%.

A surface code [9-11] is supported on a square array of qubits, with stabilizers defined on individual square tiles; see Figure 1. Specifically, we focus on the rotated surface code, with the boundary conditions shown in Figure 1. Earlier constructions involve tiling a surface with nontrivial topology, such as a torus [8], or using alternate open boundary conditions with smooth and rough boundaries [10]. We choose to study the rotated code since it requires the fewest physical qubits per logical qubit.
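The matching step itself can be sketched on the simplest case, a 1D repetition code, using networkx's `max_weight_matching` (negating edge weights turns it into minimum-weight matching). Real surface-code decoders also add boundary nodes and work on 2D/3D syndrome graphs, which this sketch omits:

```python
import itertools
import networkx as nx

N = 5  # distance-5 bit-flip repetition code: qubits 0..4, checks Z_i Z_{i+1}

def syndrome(errors):
    """X on qubit q flips checks q-1 and q (those that exist)."""
    flips = [0] * (N - 1)
    for q in errors:
        if q - 1 >= 0:
            flips[q - 1] ^= 1
        if q <= N - 2:
            flips[q] ^= 1
    return flips

def mwpm_decode(flips):
    """Pair up defects (flipped checks) via minimum-weight perfect matching."""
    defects = [i for i, f in enumerate(flips) if f]
    G = nx.Graph()
    for a, b in itertools.combinations(defects, 2):
        G.add_edge(a, b, weight=-abs(a - b))   # negated distance = min-weight
    pairs = nx.max_weight_matching(G, maxcardinality=True)
    correction = set()
    for a, b in pairs:
        lo, hi = sorted((a, b))
        correction.update(range(lo + 1, hi + 1))  # flip qubits between checks
    return correction

print(sorted(mwpm_decode(syndrome({1, 3}))))  # → [1, 3]
```

The belief-propagation step described in the abstract above would replace the fixed `-abs(a - b)` weights with edge weights informed by the error model, which is precisely where the reported threshold improvement comes from.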
A quantum computer needs the assistance of a classical algorithm to detect and identify errors that affect encoded quantum information. At this interface of classical and quantum computing, machine learning has appeared as a way to tailor such an algorithm to the specific error processes of an experiment, without the need for a priori knowledge of the error model. Here, we apply this technique to topological color codes. We demonstrate that a recurrent neural network with long short-term memory cells can be trained to reduce the error rate ε_L of the encoded logical qubit to values well below the error rate ε_phys of the physical qubits, fitting the expected power-law scaling ε_L ∝ ε_phys^((d+1)/2), with d the code distance. The neural network incorporates the information from 'flag qubits' to avoid the reduction in effective code distance caused by the circuit. As a test, we apply the neural network decoder to a density-matrix-based simulation of a superconducting quantum computer, demonstrating that the logical qubit has a longer lifetime than the constituent physical qubits with near-term experimental parameters.
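The power law above implies that, below threshold, every two steps of code distance suppress the logical error rate by another factor of ε_phys/p_th. A quick numerical illustration (the threshold value here is a placeholder, not a measured number):

```python
# eps_L ∝ eps_phys^((d+1)/2), normalized by an assumed threshold p_th.
p_th = 0.01                      # hypothetical threshold, for illustration only
for d in (3, 5, 7):
    for p_phys in (1e-3, 3e-3):
        p_L = (p_phys / p_th) ** ((d + 1) / 2)
        print(f"d={d}, eps_phys={p_phys:.0e}: eps_L ~ {p_L:.2e}")
```

At ε_phys = 10⁻³ and p_th = 10⁻², each distance step from d = 3 to 5 to 7 gains another factor of ten in logical error suppression.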
Quantum information processors have the potential to drastically change the way we communicate and process information. Nuclear magnetic resonance (NMR) was one of the first experimental implementations of quantum information processing (QIP) and continues to be an excellent testbed for developing new QIP techniques. We review recent progress in NMR QIP, focusing on decoupling, pulse engineering, and indirect nuclear control. These advances have enhanced the capabilities of NMR QIP and have useful applications in both traditional NMR and other QIP architectures.
While the on-chip processing power in circuit QED devices is growing rapidly, an open challenge is to establish high-fidelity quantum links between qubits on different chips. Here, we show entanglement between transmon qubits on different cQED chips with 49% concurrence and 73% Bell-state fidelity. We engineer a half-parity measurement by successively reflecting a coherent microwave field off two nearly identical transmon-resonator systems. By ensuring the measured output field does not distinguish |01⟩ from |10⟩, unentangled superposition states are probabilistically projected onto entangled states in the odd-parity subspace. We use in-situ tunability and an additional weakly coupled driving field on the second resonator to overcome imperfect matching due to fabrication variations. To demonstrate the flexibility of this approach, we also produce an even-parity entangled state of similar quality by engineering the matching of outputs for the |00⟩ and |11⟩ states. The protocol is characterized over a range of measurement strengths using quantum state tomography, showing good agreement with a comprehensive theoretical model. (arXiv:1712.06141v1 [quant-ph])
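The reported concurrence is computed from the tomographically reconstructed density matrix via the Wootters formula. A minimal sketch, applied to an illustrative Werner-type mixture of the odd-parity Bell state rather than the measured state:

```python
import numpy as np

sy = np.array([[0, -1j], [1j, 0]])
YY = np.kron(sy, sy)

def concurrence(rho):
    """Wootters: C = max(0, l1 - l2 - l3 - l4), with l_i the decreasingly
    sorted square roots of the eigenvalues of rho (Y⊗Y) rho* (Y⊗Y)."""
    R = rho @ YY @ rho.conj() @ YY
    lam = np.sort(np.sqrt(np.abs(np.linalg.eigvals(R))))[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

# Odd-parity Bell state mixed with white noise (parameters illustrative,
# not the reconstructed state from the experiment).
psi = np.array([0, 1, 1, 0]) / np.sqrt(2)        # (|01> + |10>)/sqrt(2)
rho = 0.8 * np.outer(psi, psi) + 0.2 * np.eye(4) / 4
print(f"C = {concurrence(rho):.3f}")             # → C = 0.700
```

For this Bell-diagonal mixture the formula reduces analytically to C = (3p − 1)/2 with p the Bell-state weight, which is a handy sanity check on the numerics.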
We analyze the properties of a 2D topological code derived by concatenating the [[4,2,2]] code with the toric/surface code, or alternatively by removing check operators from the 2D square-octagon (4.8.8) color code. We show that the resulting code has a circuit-based noise threshold of ∼0.41% (compared to ∼0.6% for the toric code in a similar scenario), which is higher than that of any known 2D color code. We believe that the construction may be of interest for hardware in which one wants to use both long-range two-qubit gates and short-range gates between small clusters of qubits.
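The [[4,2,2]] code's structure can be checked directly in the binary symplectic representation, where a Pauli on n qubits is a vector (x|z) over F₂ and two Paulis commute iff x₁·z₂ + x₂·z₁ = 0 (mod 2). The logical operator choices below are one standard convention, assumed here for illustration:

```python
import numpy as np

n = 4

def commutes(p, q):
    """Symplectic commutation test for Paulis given as (x|z) bit vectors."""
    x1, z1 = p[:n], p[n:]
    x2, z2 = q[:n], q[n:]
    return (np.dot(x1, z2) + np.dot(x2, z1)) % 2 == 0

XXXX = np.array([1, 1, 1, 1, 0, 0, 0, 0])   # stabilizer X1 X2 X3 X4
ZZZZ = np.array([0, 0, 0, 0, 1, 1, 1, 1])   # stabilizer Z1 Z2 Z3 Z4
XL1  = np.array([1, 1, 0, 0, 0, 0, 0, 0])   # logical X on the first encoded qubit
ZL1  = np.array([0, 0, 0, 0, 1, 0, 1, 0])   # logical Z on the first encoded qubit

print(commutes(XXXX, ZZZZ))                  # → True: stabilizers commute
print(commutes(XL1, ZZZZ), commutes(ZL1, XXXX))  # → True True
print(commutes(XL1, ZL1))                    # → False: conjugate logical pair
```

The weight-2 logicals make explicit why the code has distance 2: it can detect, but not correct, any single-qubit error, which is exactly the property the concatenation with the surface code exploits.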