Surface codes reach high error thresholds when decoded with known algorithms, but the decoding time will likely exceed the available time budget, especially for near-term implementations. To decrease the decoding time, we reduce the decoding problem to a classification problem that a feedforward neural network can solve. We investigate quantum error correction and fault tolerance at small code distances using neural network-based decoders, demonstrating that neural networks can generalize to inputs not provided during training and can reach decoding performance similar to or better than that of previous algorithms. We conclude by discussing the time required by a feedforward neural network decoder in hardware.

I. INTRODUCTION

Quantum computing has emerged as a means to accelerate various calculations using systems governed by quantum mechanics; such calculations are believed to take exponential time on classical computers. Initial applications where quantum computing will be useful are the simulation of quantum physics [1], cryptanalysis [2, 3], and unstructured search [4], and there is a growing set of other quantum algorithms [5]. Simple quantum algorithms have been shown to scale better than classical algorithms [6-8] for small test cases, though larger computers are required to solve real-world problems. The main obstacle to scalability is that the required quantum operations (state preparation, single- and two-qubit unitary gates, and measurement) are subject to external noise, so quantum algorithms cannot run with perfect fidelity. Quantum computers must therefore use active error correction [9] to achieve scalability, which in turn requires a classical co-processor to infer which corrections to make, given a stream of measurement results as input.
If this co-processor is slow, performance of the quantum computer may be degraded (though recent results [10] suggest that this effect may be mitigated). The remainder of this paper is organized as follows. In Section II, we outline the relevant aspects of quantum error correction and fault tolerance. We discuss the need for a fast classical co-processor in Section III. In Section IV, we give a brief summary of existing techniques for fast decoding, and follow this in Section V with the introduction of a new technique based on feedforward neural networks. We examine the accuracy of the proposed decoder in Section VI, and conclude by discussing its speed in Section VII.

II. QUANTUM ERROR CORRECTION

While it is often possible to decrease the amount of noise affecting a quantum operation using advanced control techniques [11, 12], their analog nature suggests that some imperfection will always remain. This has driven the development of algorithmic techniques that protect quantum states and computations from noise, called quantum error correction and fault tolerance, respectively.
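The reduction of decoding to classification mentioned above can be sketched with a toy model. The example below is an illustration, not the decoder studied in this work: it trains a small feedforward network (the distance-3 repetition code, the network sizes, and the training hyperparameters are all assumptions made for the sketch) to map measured syndromes to correction classes.

```python
import numpy as np

# Distance-3 bit-flip repetition code: stabilizers Z1 Z2 and Z2 Z3.
# A 2-bit syndrome determines which single qubit (if any) was flipped,
# so decoding is a 4-class classification problem.
syndromes = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
labels = np.array([0, 1, 2, 3])          # 0 = no error, k = X on qubit k
onehot = np.eye(4)[labels]

rng = np.random.default_rng(0)
W1 = rng.normal(0.0, 1.0, (2, 8)); b1 = np.zeros(8)   # hidden layer
W2 = rng.normal(0.0, 1.0, (8, 4)); b2 = np.zeros(4)   # output layer

for _ in range(2000):                    # full-batch gradient descent
    h = np.tanh(syndromes @ W1 + b1)
    logits = h @ W2 + b2
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)    # softmax over the 4 classes
    g = (p - onehot) / len(labels)       # cross-entropy gradient
    gh = (g @ W2.T) * (1 - h ** 2)       # backprop through tanh
    W2 -= 0.5 * h.T @ g;        b2 -= 0.5 * g.sum(axis=0)
    W1 -= 0.5 * syndromes.T @ gh; b1 -= 0.5 * gh.sum(axis=0)

def decode(s):
    """Return the most likely correction class for syndrome s."""
    h = np.tanh(np.array(s, dtype=float) @ W1 + b1)
    return int(np.argmax(h @ W2 + b2))
```

For a surface code the input would be a much longer syndrome vector and the output a coarser classification (e.g., the logical equivalence class of the correction), but the feedforward structure is the same, which is what makes a fixed-latency hardware implementation plausible.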
In previous work, we proposed a method for leveraging efficient classical simulation algorithms to aid in the analysis of large-scale fault-tolerant circuits implemented on hypothetical quantum information processors. Here, we extend those results by numerically studying the efficacy of this proposal as a tool for understanding the performance of an error-correction gadget implemented with fault models derived from physical simulations. Our approach is to approximate the arbitrary error maps that arise from realistic physical models with errors that are amenable to a particular classical simulation algorithm in an "honest" way; that is, such that we do not underestimate the faults introduced by our physical models. In all cases, our approximations provide an "honest representation" of the performance of the circuit under the original errors. This numerical evidence supports the use of our method as a way to assess the feasibility of an implementation of quantum information processing, given a characterization of the underlying physical processes in experimentally accessible examples.
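The idea of replacing an arbitrary error map with one amenable to classical simulation can be illustrated with a Pauli twirl, a standard approximation technique; note this is only a sketch of the general idea, not the "honest" construction above, which additionally requires that the approximation never underestimate the error. For a single-qubit channel with Kraus operators K_k, the twirled Pauli-channel probabilities are p_P = Σ_k |tr(P K_k)|² / 4.

```python
import numpy as np

# Single-qubit Pauli basis.
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)

def pauli_twirl(kraus):
    """Pauli-channel probabilities (pI, pX, pY, pZ) of the twirled channel."""
    return [sum(abs(np.trace(P @ K)) ** 2 for K in kraus) / 4
            for P in (I, X, Y, Z)]

# Amplitude damping with decay probability gamma: a non-Pauli physical error
# map of the kind that arises from realistic device models.
gamma = 0.1
kraus = [np.diag([1.0, np.sqrt(1 - gamma)]),
         np.array([[0, np.sqrt(gamma)], [0, 0]])]
pI, pX, pY, pZ = pauli_twirl(kraus)
```

The resulting Pauli channel (here pX = pY = γ/4, pZ = (1 − √(1 − γ))²/4) can be fed directly to a stabilizer simulator; the honest-approximation machinery then tightens this replacement so that the simulated circuit is never more reliable than the physical one.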
The large-scale execution of quantum algorithms requires basic quantum operations to be implemented fault-tolerantly. The most popular technique for accomplishing this with devices realizable in the near term uses stabilizer codes that can be embedded in a planar layout. The set of fault-tolerant operations that can be executed in these systems using unitary gates is typically very limited. This has driven the development of measurement-based schemes for performing logical operations in these codes, known as lattice surgery and code deformation. In parallel, gauge fixing has emerged as a measurement-based method for performing universal gate sets in subsystem stabilizer codes. In this work, we show that lattice surgery and code deformation can be expressed as special cases of gauge fixing, permitting a simple and rigorous test for fault tolerance together with simple guiding principles for the implementation of these operations. We demonstrate the accuracy of this method numerically with examples based on the surface code, some of which are novel.

Several families of quantum error-correcting codes have been developed, including concatenated codes [5, 6], subsystem codes such as Bacon-Shor codes [7], and 2D topological codes. The most prominent 2D topological codes are surface codes [8], derived from Kitaev's toric code [9], which we will focus on in the remainder of this manuscript. 2D topological codes can be implemented using entangling gates that are local in two dimensions, allowing fault tolerance in near-term devices with limited connectivity. In addition, 2D topological codes generally have high fault-tolerant memory thresholds, the surface code having the highest at ∼1% [10]. These advantages come at a cost, however. While other 2D topological codes permit logical single-qubit Clifford operations to be implemented transversally, the surface code does not.
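The surface-code structure just described can be made concrete with a small check. The sketch below writes down stabilizers for a rotated distance-3 surface code on a 3×3 grid of data qubits; the specific plaquette assignment is one common convention chosen for illustration, not a layout taken from this text.

```python
import numpy as np

# Rotated distance-3 surface code: data qubits 0..8 on a 3x3 grid,
#   0 1 2
#   3 4 5
#   6 7 8
# Each tuple lists the qubits in one stabilizer (weight-4 bulk plaquettes
# plus weight-2 boundary checks).
n = 9
x_checks = [(0, 1, 3, 4), (4, 5, 7, 8), (1, 2), (6, 7)]
z_checks = [(1, 2, 4, 5), (3, 4, 6, 7), (0, 3), (5, 8)]

def rows(checks):
    """Binary parity-check rows: entry 1 iff the qubit is in the stabilizer."""
    H = np.zeros((len(checks), n), dtype=int)
    for r, qs in enumerate(checks):
        H[r, list(qs)] = 1
    return H

Hx, Hz = rows(x_checks), rows(z_checks)

# CSS condition: every X stabilizer overlaps every Z stabilizer on an even
# number of qubits, so all stabilizers commute.
assert not (Hx @ Hz.T % 2).any()

# One encoded qubit: a horizontal Z string and a vertical X string commute
# with all stabilizers but anticommute with each other (odd overlap).
zL = rows([(0, 1, 2)])[0]
xL = rows([(2, 5, 8)])[0]
assert not (Hx @ zL % 2).any() and not (Hz @ xL % 2).any()
assert (zL @ xL) % 2 == 1
```

Every entangling gate needed to measure these checks acts between a data qubit and an adjacent ancilla, which is the 2D locality property highlighted above.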
In addition, the constraint that computation be carried out in a single plane does not permit two-qubit physical gates between physical qubits in different code blocks, precluding the two-qubit logical gates which, in principle, can be carried out transversally. These two restrictions have led to the design of measurement-based protocols for performing single- and two-qubit logical gates by making gradual changes to the underlying stabilizer code. Measurement-based protocols that implement single-qubit gates are typically called code deformation [11], and protocols that involve multiple logical qubits are usually called lattice surgery [12]. A separate measurement-based technique, called gauge fixing [13], can be applied to subsystem codes, which have operators that can be added to or removed from the stabilizer group as desired, the so-called gauge operators. During gauge fixing, the stabilizer generators of the subsystem code remain unchanged and can be used to detect and correct errors, so decoding is unaffected by gauge fixing. This is in contrast to code deformation and lattice surgery, where it is not a priori clear that decoding is unaffected.
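The gauge-fixing picture can be sketched on the 3×3 Bacon-Shor subsystem code mentioned earlier. The conventions below (XX gauge operators on horizontal pairs, ZZ on vertical pairs, double-column/double-row stabilizers) are one standard choice assumed for illustration; the point is only the commutation structure.

```python
import numpy as np

# Paulis on 9 qubits (3x3 grid) in binary symplectic form: a length-18
# vector (x-part | z-part). Two Paulis commute iff their symplectic inner
# product is 0 mod 2.
n = 9
q = lambda r, c: 3 * r + c   # grid coordinate -> qubit index

def pauli(xs=(), zs=()):
    v = np.zeros(2 * n, dtype=int)
    if xs:
        v[list(xs)] = 1
    if zs:
        v[[n + i for i in zs]] = 1
    return v

def commute(a, b):
    return (a[:n] @ b[n:] + a[n:] @ b[:n]) % 2 == 0

# Gauge generators: XX on horizontal pairs, ZZ on vertical pairs.
x_gauge = [pauli(xs=(q(r, c), q(r, c + 1))) for r in range(3) for c in range(2)]
z_gauge = [pauli(zs=(q(r, c), q(r + 1, c))) for r in range(2) for c in range(3)]

# Stabilizers: X on two adjacent columns, Z on two adjacent rows.
stabs = [pauli(xs=[q(r, c) for r in range(3) for c in (j, j + 1)])
         for j in range(2)]
stabs += [pauli(zs=[q(r, c) for c in range(3) for r in (i, i + 1)])
          for i in range(2)]

# The stabilizers commute with every gauge operator, so they keep detecting
# errors no matter which gauge is fixed -- decoding is unaffected.
assert all(commute(s, g) for s in stabs for g in x_gauge + z_gauge)

# Fixing the X gauge (promoting the XX pairs to stabilizers) forces the
# Z gauge operators out: some anticommute with the newly fixed checks.
assert any(not commute(g, h) for g in x_gauge for h in z_gauge)
```

Expressing lattice surgery and code deformation in this same symplectic language is what allows them to be checked for fault tolerance as instances of gauge fixing.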