Quantum computers can exploit a Hilbert space whose dimension increases exponentially with the number of qubits. Experimentally, quantum supremacy has recently been achieved by the Google team using a noisy intermediate-scale quantum (NISQ) device with over 50 qubits. However, the question of what can be implemented on NISQ devices is still not fully explored, and discovering useful tasks for such devices is a topic of considerable interest. Hybrid quantum-classical algorithms, which combine quantum and classical computers, are regarded as well suited for execution on NISQ devices and are expected to be among the first useful applications of quantum computing. Meanwhile, mitigation of errors on quantum processors is also crucial for obtaining reliable results. In this article, we review the basic results for hybrid quantum-classical algorithms and quantum error-mitigation techniques. Since quantum computing with NISQ devices is an actively developing field, we expect this review to be a useful basis for future studies.
As advances in quantum hardware bring us into the noisy intermediate-scale quantum (NISQ) era, one task we can perform without quantum error correction on NISQ machines is the variational quantum eigensolver (VQE), owing to its shallow circuit depth. A specific problem we can tackle is the strongly interacting Fermi-Hubbard model, which is classically intractable and has practical implications in areas such as superconductivity. In this paper, we outline the details of the gate sequence, the measurement scheme, and the relevant error-mitigation techniques for implementing the Hubbard VQE on a NISQ platform. We perform resource estimation for a 50-qubit simulation, which cannot be solved exactly by classical means, for both silicon spin qubits and superconducting qubits, and find similar results. The number of two-qubit gates required is on the order of 20 000. Hence, to suppress the mean circuit error count to a level at which we can obtain meaningful results with the aid of error mitigation, we need to achieve a two-qubit gate error rate of approximately 10^-4. When searching for the ground state, one gradient-descent iteration takes a few days, which is impractical. This can be reduced to around 10 min if we distribute the task among hundreds of quantum-processing units. Hence, implementing a 50-qubit Hubbard-model VQE on a NISQ machine may be on the brink of feasibility in the near term, but further optimization of the simulation scheme, improvements in gate fidelity, improvements in the optimization scheme, and advances in error-mitigation techniques are needed to overcome the remaining obstacles. The scalability of the hardware platform is also essential for overcoming the runtime issue via parallelization, which can be done on a single multicore silicon processor or across multiple superconducting processors.
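The VQE loop summarized above, a parametrised circuit whose energy is minimised by gradient descent with gradients evaluated on hardware via the parameter-shift rule, can be sketched on a toy single-qubit Hamiltonian. This is an illustrative stand-in only; the Hamiltonian H = Z and the one-parameter Ry ansatz here are hypothetical and far simpler than the Hubbard ansatz the paper analyses:

```python
import numpy as np

# Toy VQE: minimise <psi(theta)|H|psi(theta)> for H = Z with the ansatz
# |psi(theta)> = Ry(theta)|0>, so E(theta) = cos(theta); ground energy is -1.
Z = np.diag([1.0, -1.0])

def state(theta):
    # Ry(theta)|0> = cos(theta/2)|0> + sin(theta/2)|1>
    return np.array([np.cos(theta / 2), np.sin(theta / 2)])

def energy(theta):
    psi = state(theta)
    return psi @ Z @ psi

def grad(theta):
    # Parameter-shift rule, as evaluated on real hardware:
    # dE/dtheta = (E(theta + pi/2) - E(theta - pi/2)) / 2
    return (energy(theta + np.pi / 2) - energy(theta - np.pi / 2)) / 2

theta, lr = 0.5, 0.4
for _ in range(100):
    theta -= lr * grad(theta)
# energy(theta) now approaches the ground energy -1
```

On a real device each `energy` call is itself a batch of measured circuits, which is why the paper's per-iteration runtime estimate is dominated by sampling.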
A design for a large-scale surface code quantum processor based on a node/network approach is introduced for semiconductor quantum dot spin qubits. The minimal node contains only seven quantum dots, and nodes are separated on the micron scale, creating useful space for wiring interconnects and integration of conventional transistor circuits. Entanglement is distributed between neighbouring nodes by loading spin singlets locally and then shuttling one member of the pair through a linear array of empty dots. A node contains one data qubit, two ancilla qubits, and additional dots to facilitate electron shuttling and measurement of the ancillas. A four-node GHZ state is realized by sharing three internode singlets followed by local gate operations and ancilla measurements. Further local operations produce an X or Z stabilizer on the four data qubits, which is the fundamental operation of the surface code. Electron shuttling is simulated in the single-valley case using a simple gate-electrode geometry without explicit barrier gates, and the simulations demonstrate that adiabatic transport is possible on timescales that do not present a speed bottleneck for the processor. An important shuttling error in a clean system is uncontrolled phase rotation of the spin due to modulation of the electronic g-factor during transport, owing to the Stark effect. This error can be reduced by appropriate electrostatic tuning of the stationary electron's g-factor. arXiv:1807.09941v2 [quant-ph]
Noise in quantum hardware remains the biggest roadblock for the implementation of quantum computers. To fight noise in the practical application of near-term quantum computers, instead of relying on quantum error correction, which requires a large qubit overhead, we turn to quantum error mitigation, in which we make use of extra measurements. Error extrapolation is an error-mitigation technique that has been successfully implemented experimentally. Numerical simulation and heuristic arguments have indicated that exponential curves are effective for extrapolation in the large-circuit limit, with an expected circuit error count around unity. In this Article, we extend this to multi-exponential error extrapolation and provide a more rigorous proof of its effectiveness under Pauli noise. This is further validated by our numerical simulations, which show orders-of-magnitude improvements in estimation accuracy over single-exponential extrapolation. Moreover, we develop methods to combine error extrapolation with two other error-mitigation techniques, quasi-probability and symmetry verification, by exploiting features of the individual techniques. As shown in our simulations, the combined method can achieve a low estimation bias with a sampling cost several times smaller than quasi-probability, without requiring the ability to adjust the hardware error rate as canonical error extrapolation does.
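The core of error extrapolation can be sketched numerically. The snippet below fits a single exponential decay to expectation values measured at boosted noise levels and extrapolates to zero noise; the article's multi-exponential variant adds further decay terms to the model. The boost factors and measured values here are invented for illustration, not taken from the paper:

```python
import numpy as np
from scipy.optimize import curve_fit

# Model the noisy expectation value as E(lam) = A * exp(-b * lam) + C,
# where lam is the noise-boost factor (lam = 1 is the bare circuit).
def decay(lam, A, b, C):
    return A * np.exp(-b * lam) + C

# Hypothetical expectation values measured at boost factors 1, 2, 3
# (obtained in practice by, e.g., stretching pulses or inserting gate pairs).
lams = np.array([1.0, 2.0, 3.0])
meas = np.array([0.61, 0.42, 0.30])

popt, _ = curve_fit(decay, lams, meas, p0=[1.0, 0.5, 0.0])
zero_noise_estimate = decay(0.0, *popt)  # extrapolate to lam = 0
```

Note that the extrapolated value lies above every measured point, as expected when noise damps the observable toward its fixed point.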
Twirling is a technique widely used for converting arbitrary noise channels into Pauli channels in error-threshold estimations of quantum error correction codes. It is vital both in real experiments and in classical simulations of quantum circuits. Minimising the size of the twirling gate set increases the efficiency of simulations, and in experiments it may reduce both the number of runs required and the circuit depth (and hence the error burden). Conventional twirling uses the full set of Pauli gates as the set of twirling gates. This article provides a theoretical background for Pauli twirling and a way to construct a twirling gate set with a number of members comparable to the size of the Pauli basis of the given error channel, which is usually much smaller than the full set of Pauli gates. We also show that twirling is equivalent to stabiliser measurements with discarded measurement results, which enables us to further reduce the size of the twirling gate set.
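A single-qubit example makes the construction concrete. Below, a coherent Z-rotation channel is twirled over the full Pauli set and then over a reduced set {I, X}; both yield the same Pauli dephasing channel, illustrating why a gate set matched to the error's Pauli basis can suffice. The rotation angle and input state are illustrative choices, not from the article:

```python
import numpy as np

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)

theta = 0.3
U = np.cos(theta / 2) * I - 1j * np.sin(theta / 2) * Z  # coherent Z rotation

def channel(rho):
    return U @ rho @ U.conj().T

def twirl(rho, gates):
    # E_tw(rho) = (1/|G|) * sum_P  P E(P rho P) P   (Paulis are Hermitian)
    return sum(P @ channel(P @ rho @ P) @ P for P in gates) / len(gates)

rho = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)  # |+><+|

out_full = twirl(rho, [I, X, Y, Z])   # conventional full-Pauli twirl
out_reduced = twirl(rho, [I, X])      # reduced set matched to Z-type noise

# Target Pauli channel: dephasing with p = sin^2(theta/2)
p = np.sin(theta / 2) ** 2
expected = (1 - p) * rho + p * Z @ rho @ Z
```

Both twirls agree with the dephasing channel, so for this error the reduced two-element set does the work of all four Paulis.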
Coherent noise can be much more damaging than incoherent (probabilistic) noise in the context of quantum error correction. One solution is to use twirling to turn coherent noise into incoherent Pauli channels. In this article, we argue that if twirling can improve the logical fidelity under a given noise model, we can always achieve an even higher logical fidelity by simply sandwiching the noise between a chosen pair of Pauli gates, which we call Pauli conjugation. We devise a way to search for the optimal Pauli conjugation scheme and apply it to the Steane code, the 9-qubit Shor code, and the distance-3 surface code under global coherent Z noise. The optimal conjugation schemes show improvement in logical fidelity over twirling, while the weights of the conjugation gates we need to apply are lower than the average weight of the twirling gates. For our example noise and codes, the concatenated threshold obtained using conjugation is consistently higher than the twirling threshold and can be up to 1.5 times higher than the original threshold with no mitigation applied. Our simulations show that Pauli conjugation can be robust against gate errors, and its advantages over twirling persist as we go to multiple rounds of quantum error correction. Pauli conjugation can be viewed as dynamical decoupling applied in the context of quantum error correction, in which the objective changes from maximising the physical fidelity to maximising the logical fidelity. The approach may be helpful in adapting other noise-tailoring techniques from quantum control theory to quantum error correction. arXiv:1906.06270v1 [quant-ph]
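The dynamical-decoupling flavour of Pauli conjugation can be seen in a minimal toy model (illustrative only, not the article's optimisation procedure): conjugating a coherent Z over-rotation U by X yields U-dagger, so alternating conjugation across two noise rounds echoes the coherent error away entirely:

```python
import numpy as np

Z = np.diag([1.0, -1.0]).astype(complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)

a = 0.2  # hypothetical over-rotation angle per round
U = np.cos(a / 2) * np.eye(2) - 1j * np.sin(a / 2) * Z  # coherent Z noise

plain = U @ U                # unmitigated: coherent errors add up to angle 2a
conj = (X @ U @ X) @ U       # second round conjugated by X: X U X = U^dagger
# conj is the identity: the coherent noise cancels exactly across the rounds
```

In the article the conjugating Paulis are instead chosen to maximise the logical fidelity of a specific code, but the mechanism of sign-flipping coherent rotations is the same.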
Even with the recent rapid developments in quantum hardware, noise remains the biggest challenge for the practical application of any near-term quantum device. Full quantum error correction cannot be implemented on these devices due to their limited scale. Therefore, instead of relying on engineered code symmetry, symmetry verification was developed, which uses the symmetry inherent in the physical problem we are trying to solve. In this article, we develop a general framework named symmetry expansion, which provides a wide spectrum of symmetry-based error-mitigation schemes beyond symmetry verification, enabling us to achieve different balances between the estimation bias and the sampling cost of the scheme. We show that certain symmetry expansion schemes can achieve a smaller estimation bias than symmetry verification through cancellation between the biases due to the detectable and undetectable noise components. A practical way to search for such a small-bias scheme is introduced. In numerical simulations of energy estimation for the Fermi-Hubbard model, the small-bias symmetry expansion we found achieves an estimation bias 6 to 9 times below what is achievable by symmetry verification when the average number of circuit errors is between 1 and 2. The corresponding sampling cost for random shot-noise reduction is just 2 to 6 times higher than symmetry verification. Beyond symmetries inherent to the physical problem, our formalism is also applicable to engineered symmetries. For example, the recent scheme for exponential error suppression using multiple noisy copies of the quantum device is just a special case of symmetry expansion using the permutation symmetry among the copies.
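The baseline that symmetry expansion generalises, plain symmetry verification, can be sketched with density matrices: errors that leave the symmetry sector are projected out and the state is renormalised by the post-selection probability. The Bell-pair state, the Z-parity symmetry, and the bit-flip error below are illustrative assumptions, not the paper's Hubbard setting:

```python
import numpy as np

Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
I2 = np.eye(2)

# Symmetry operator S = Z (x) Z; the ideal state lives in its +1 sector.
S = np.kron(Z, Z)
P_even = (np.eye(4) + S) / 2  # projector onto the even-parity sector

# Ideal state: a Bell pair (even parity under S).
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
rho_ideal = np.outer(bell, bell)

# Noise: a bit flip on qubit 0 with probability 0.2, which moves the state
# into the odd-parity sector and is therefore detectable.
Xq0 = np.kron(X, I2)
rho_noisy = 0.8 * rho_ideal + 0.2 * Xq0 @ rho_ideal @ Xq0

# Symmetry verification: project, then renormalise (post-selection).
rho_ver = P_even @ rho_noisy @ P_even
rho_ver /= np.trace(rho_ver)
```

Here the only error is fully detectable, so verification recovers the ideal state exactly; undetectable errors (those commuting with S) are what leave the residual bias that symmetry expansion trades against sampling cost.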
Thin-film magnetic heterostructures with competing interfacial coupling and Zeeman energy provide fertile ground to study phase transitions between different equilibrium states as a function of external magnetic field and temperature. A rare-earth (RE)/transition-metal (TM) ferromagnetic multilayer is a classic example, where the magnetic state is determined by a competition between the Zeeman energy and the antiferromagnetic interfacial exchange-coupling energy. Technologically, such structures offer the possibility to engineer the macroscopic magnetic response by tuning the microscopic interactions between the layers. We have performed an exhaustive study of nickel/gadolinium as a model system for understanding RE/TM multilayers using the element-specific measurement technique of x-ray magnetic circular dichroism, and determined the full magnetic state diagrams as a function of temperature and magnetic layer thickness. We compare our results to a modified Stoner-Wohlfarth-based model and provide evidence of a thickness-dependent transition to a magnetic fan state, which is critical for understanding magnetoresistance effects in RE/TM systems. The results provide important insight for spintronics and superconducting spintronics, where engineering tunable magnetic inhomogeneity is key for certain applications.