Inflammation of the renal interstitium and of uveal tissue constitutes the two components of tubulointerstitial nephritis and uveitis (TINU) syndrome. Although it is believed to occur more frequently in young females, a broad spectrum of patients can be affected. Both the renal and the ocular disease can be asymptomatic, may not manifest simultaneously, and can progress independently. Renal disease manifests as acute kidney injury and may cause permanent renal impairment. Ocular inflammation can take different anatomical forms, most commonly bilateral anterior uveitis, and may follow a chronic course. TINU syndrome accounts for approximately 1%–2% of uveitis cases in tertiary referral centres. A literature review covering clinical features, pathogenesis, diagnosis and treatment is presented.
Current gate-based quantum computers have the potential to provide a computational advantage if algorithms use quantum hardware efficiently. To make combinatorial optimization more efficient, we introduce the Filtering Variational Quantum Eigensolver (F-VQE), which utilizes filtering operators to achieve faster and more reliable convergence to the optimal solution. Additionally, we explore the use of causal cones to reduce the number of qubits required on a quantum computer. Using random weighted MaxCut problems, we numerically analyze our methods and show that they perform better than the original VQE algorithm and the Quantum Approximate Optimization Algorithm (QAOA). We also demonstrate the experimental feasibility of our algorithms on a Honeywell trapped-ion quantum processor.
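The effect of a filtering operator can be illustrated classically. The sketch below is a toy simulation, not the paper's quantum-circuit algorithm: the graph weights, the choice f(H) = exp(−τH), and the value of τ are all illustrative assumptions. It applies the filter repeatedly to the amplitudes of a small weighted MaxCut instance and shows the state concentrating on an optimal cut.

```python
import numpy as np

# Hypothetical 4-node weighted graph: (i, j, weight) -- illustrative values.
edges = [(0, 1, 1.0), (1, 2, 0.5), (2, 3, 1.0), (0, 3, 0.8), (0, 2, 0.3)]
n = 4

# MaxCut cost written as an energy to minimize:
# E(z) = -sum_{(i,j)} w_ij * [bit_i(z) != bit_j(z)]
def energy(z):
    return -sum(w for i, j, w in edges if (z >> i) & 1 != (z >> j) & 1)

E = np.array([energy(z) for z in range(2 ** n)])

# Start from the uniform superposition and repeatedly apply the filtering
# operator f(H) = exp(-tau * H) to the amplitudes, then renormalize.
psi = np.ones(2 ** n) / np.sqrt(2 ** n)
tau = 0.5
for _ in range(20):
    psi = np.exp(-tau * E) * psi
    psi /= np.linalg.norm(psi)

# Amplitude concentrates on a minimum-energy (maximum-cut) bitstring.
best = int(np.argmax(np.abs(psi)))
assert E[best] == E.min()
```

In F-VQE proper, the filtered state is approximated by re-optimizing a parameterized circuit at each step rather than by direct amplitude manipulation; the sketch only shows why filtering steers the distribution toward the optimum.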
Computing localizable entanglement for noisy many-particle quantum states is difficult because it requires an optimization over all possible sets of local projection measurements. It is therefore important to develop lower bounds that provide useful information about the behaviour of localizable entanglement and that can be determined by measuring a limited number of operators, or by performing as few measurements on the state as possible, preferably without full state tomography. In this paper, we adopt two different yet related approaches to obtain a witness-based and a measurement-based lower bound for localizable entanglement. The former is determined by the minimal amount of entanglement that can be present in a subsystem of the multipartite quantum state consistent with the expectation value of an entanglement witness. Determining this bound requires no information about the state beyond the expectation value of the witness operator, which makes the approach highly practical in experiments. The latter bound is computed by restricting the local projection measurements on the qubits outside the subsystem of interest to a suitably chosen basis. We discuss the behaviour of both lower bounds under local physical noise on the qubits, including their dependence on noise strength and system size. We also analytically determine the measurement-based lower bound for graph states under local uncorrelated Pauli noise.
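The measurement-based lower bound can be illustrated on a toy state: fixing a single measurement basis for the qubit outside the subsystem of interest (X below, a hypothetical choice) and averaging the entanglement left on the remaining pair gives a lower bound on their localizable entanglement, since the true quantity optimizes over all bases. A minimal sketch for a 3-qubit GHZ state, using negativity as the entanglement measure:

```python
import numpy as np

# |GHZ> = (|000> + |111>) / sqrt(2); qubit 3 is the least-significant bit.
ghz = np.zeros(8, dtype=complex)
ghz[0] = ghz[7] = 1 / np.sqrt(2)

def negativity(rho):
    # Partial transpose on the second qubit of a 2-qubit density matrix,
    # then sum the absolute values of the negative eigenvalues.
    pt = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)
    return sum(abs(l) for l in np.linalg.eigvalsh(pt) if l < 0)

plus = np.array([1, 1]) / np.sqrt(2)    # X-basis outcomes for qubit 3
minus = np.array([1, -1]) / np.sqrt(2)

bound = 0.0
for outcome in (plus, minus):
    # Project qubit 3 onto the outcome; keep the (unnormalized) state of 1-2.
    psi = ghz.reshape(4, 2) @ outcome.conj()
    p = np.vdot(psi, psi).real
    rho = np.outer(psi, psi.conj()) / p
    bound += p * negativity(rho)        # average entanglement over outcomes

# X measurements on a GHZ state localize a full Bell pair on qubits 1-2.
assert abs(bound - 0.5) < 1e-10
```

Here the fixed X basis happens to be optimal, so the bound is tight; for general noisy stabilizer states the restricted basis gives a genuine lower bound.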
In this Letter, we establish and explore a new connection between quantum information theory and classical statistical mechanics by studying the problem of qubit losses in 2D topological color codes. We introduce a protocol to cope with qubit losses, based on the identification and removal of a twin qubit from the code, which guarantees the recovery of a valid three-colorable and trivalent reconstructed color code. Moreover, we show that determining the corresponding qubit loss error threshold is equivalent to a new generalized classical percolation problem. We numerically compute the associated qubit loss thresholds for two families of 2D color codes and find that, at p = 0.461 ± 0.005, these are close to the fundamental limit of 50% imposed by the no-cloning theorem. Our findings reveal a new connection between topological color codes and percolation theory, show the high robustness of color codes against qubit loss, and are directly relevant for implementations of topological quantum error correction in various physical platforms.
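For intuition about the percolation mapping, the sketch below runs a standard Monte Carlo estimate of the spanning probability for ordinary site percolation on a square lattice. The generalized percolation problem induced by the qubit-loss protocol differs in its details (lattice, erasure rule), so this illustrates only the generic numerical method used for such threshold estimates.

```python
import random

def percolates(L, p, seed=None):
    """Site percolation on an L x L square lattice: is there an open
    path connecting the top row to the bottom row?"""
    rng = random.Random(seed)
    open_site = [[rng.random() < p for _ in range(L)] for _ in range(L)]
    stack = [(0, c) for c in range(L) if open_site[0][c]]  # seed: open top row
    seen = set(stack)
    while stack:
        r, c = stack.pop()
        if r == L - 1:                      # reached the bottom row
            return True
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < L and 0 <= nc < L
                    and open_site[nr][nc] and (nr, nc) not in seen):
                seen.add((nr, nc))
                stack.append((nr, nc))
    return False

def spanning_probability(L, p, trials=200, seed=0):
    return sum(percolates(L, p, seed + t) for t in range(trials)) / trials

# Far below/above the square-lattice site threshold (~0.593),
# spanning is rare / near-certain:
low = spanning_probability(24, 0.3)
high = spanning_probability(24, 0.8)
assert low < 0.2 < 0.8 < high
```

Scanning p and locating where the spanning probability jumps (sharpening with system size L) is the standard way such thresholds are extracted numerically.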
Quantum information theory has shown strong connections with classical statistical physics. For example, quantum error correcting codes like the surface and the color code exhibit a tolerance to qubit loss that is related to the classical percolation threshold of the lattices on which the codes are defined. Here we explore this connection to study analytically the tolerance of the color code when the protocol introduced in [Phys. Rev. Lett. 121, 060501 (2018)] to correct qubit losses is applied. This protocol is based on the removal of the lost qubit from the code, a neighboring qubit, and the lattice edges on which these two qubits reside. We first obtain analytically the average fraction of edges r(p) that the protocol erases from the lattice to correct a fraction p of qubit losses. The threshold pc below which the logical information is protected then corresponds to the value of p at which r(p) equals the bond-percolation threshold of the lattice. Moreover, we prove that the logical information is protected if and only if the set of lost qubits does not include the entire support of any logical operator. The results presented here open a route to an analytical understanding of the effects of qubit losses in topological quantum error correcting codes.
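Given r(p), the threshold condition r(p_c) = p_bond can be solved numerically. The sketch below uses bisection with a hypothetical monotone r(p) = 2p − p² (an illustrative stand-in, not the analytically derived function from the paper) and the known hexagonal-lattice bond threshold p_bond ≈ 0.6527.

```python
def solve_threshold(r, p_bond, lo=0.0, hi=1.0, tol=1e-10):
    """Bisection for the root of r(p) = p_bond, assuming r is
    increasing in p on [lo, hi]."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if r(mid) < p_bond:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Toy erased-edge fraction: each loss removes ~2 edges, minus overlap.
r_toy = lambda p: 2 * p - p * p
p_bond_hex = 0.6527                       # hexagonal-lattice bond threshold
pc = solve_threshold(r_toy, p_bond_hex)
assert abs(r_toy(pc) - p_bond_hex) < 1e-6
assert 0 < pc < 1
```

With the paper's actual r(p) in place of the toy function, the same one-liner yields the loss threshold of the corresponding color-code lattice.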
Combinatorial optimization models a vast range of industrial processes with the aim of improving their efficiency. In general, solving this type of problem exactly is computationally intractable, so practitioners rely on heuristic solution approaches. Variational quantum algorithms are optimization heuristics that can be demonstrated on available quantum hardware. In this case study, we apply four variational quantum heuristics running on IBM’s superconducting quantum processors to the job shop scheduling problem. Our problem instance models the optimization of a steel manufacturing process. A comparison on 5 qubits shows that the recent filtering variational quantum eigensolver (F-VQE) converges faster and samples the global optimum more frequently than the quantum approximate optimization algorithm (QAOA), the standard variational quantum eigensolver (VQE), and variational quantum imaginary time evolution (VarQITE). Furthermore, F-VQE readily solves problem sizes of up to 23 qubits on hardware without error-mitigation post-processing.
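Variational heuristics of this kind typically operate on a QUBO (quadratic unconstrained binary optimization) encoding of the scheduling problem. The toy sketch below is a hypothetical 2-job, 2-slot, single-machine instance with penalty weight A — the study's steel-manufacturing encoding is more involved — but it shows the usual pattern of one-hot assignment variables plus penalty terms, verified here by brute force.

```python
import itertools
import numpy as np

# Binary variable x[j, t] = 1 iff job j is scheduled in time slot t.
A = 10.0                      # penalty weight (must dominate the objective)
n_jobs, n_slots = 2, 2

def qubo_energy(x):
    x = np.asarray(x, dtype=float).reshape(n_jobs, n_slots)
    e = 0.0
    for j in range(n_jobs):                 # each job in exactly one slot
        e += A * (x[j].sum() - 1.0) ** 2
    for t in range(n_slots):                # each slot used by exactly one job
        e += A * (x[:, t].sum() - 1.0) ** 2
    for j in range(n_jobs):                 # objective: prefer early slots
        for t in range(n_slots):
            e += t * x[j, t]
    return e

# Brute-force minimum: a feasible schedule with one job per slot,
# paying objective cost 0 + 1 = 1 and zero penalty.
best = min(itertools.product([0, 1], repeat=n_jobs * n_slots), key=qubo_energy)
assert qubo_energy(best) == 1.0
```

On hardware, the same energy function is promoted to a diagonal Ising Hamiltonian whose expectation the variational circuit minimizes; brute force is only feasible here because the instance is tiny.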
Entanglement is a central concept in quantum information and a key resource for many quantum protocols. In this work we propose and analyze a class of entanglement witnesses that detect the presence of entanglement in subsystems of experimental multi-qubit stabilizer states. The witnesses we propose can be decomposed into sums of Pauli operators and can be efficiently evaluated using either only two measurement settings or at most a number of measurements that depends only on the size of the subsystem of interest. We provide two constructive methods to design the local witness operators: the first based on the local unitary equivalence between graph and stabilizer states, and the second based on necessary and sufficient conditions that the respective set of constituent Pauli operators must fulfill. We theoretically establish the noise tolerance of the proposed witnesses and benchmark their practical performance by analyzing the local entanglement structure of an experimental seven-qubit quantum error correction code.
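A witness decomposed into Pauli terms can be evaluated from measured correlators alone. As a minimal illustration — the two-qubit Bell witness below is a textbook example, not one of the subsystem witnesses constructed in the work — Tr(Wρ) < 0 certifies entanglement, and mixing in white noise exhibits the finite noise tolerance:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Bell state |Phi+> and the standard witness
# W = I/2 - |Phi+><Phi+| = (II - XX + YY - ZZ) / 4,
# a short sum of Pauli operators evaluated from measured correlators.
phi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
rho_bell = np.outer(phi, phi.conj())

def witness_value(rho):
    terms = [(+1, np.kron(I2, I2)), (-1, np.kron(X, X)),
             (+1, np.kron(Y, Y)), (-1, np.kron(Z, Z))]
    return sum(c * np.trace(P @ rho).real for c, P in terms) / 4

# Noise tolerance: mix with white noise, rho(p) = (1-p)|Phi+><Phi+| + p I/4.
rho_noisy = lambda p: (1 - p) * rho_bell + p * np.eye(4) / 4
assert witness_value(rho_bell) < 0          # entanglement detected
assert witness_value(rho_noisy(0.9)) > 0    # too noisy: no detection
```

For this witness the detection threshold sits at noise strength p = 2/3; the witnesses in the work are analyzed analogously for their own noise tolerance.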
Topological quantum error correcting codes have emerged as leading candidates towards the goal of achieving large-scale fault-tolerant quantum computers. However, quantifying entanglement in such large systems in the presence of noise is a challenging task. In this paper, we provide two different prescriptions to characterize noisy stabilizer states, including the surface and the color codes, in terms of localizable entanglement over a subset of qubits. In one approach, we exploit appropriately constructed entanglement witness operators to estimate a witness-based lower bound of localizable entanglement, which is directly accessible in experiments. In the other recipe, we use graph states that are local unitary equivalent to the stabilizer state to determine a computable measurement-based lower bound of localizable entanglement. Experimentally, this translates to a lower bound of localizable entanglement obtained from single-qubit measurements in specific bases performed on the qubits outside the subsystem of interest. Towards computing these lower bounds, we discuss in detail the methodology of obtaining a local unitary equivalent graph state from a stabilizer state, which includes a new and scalable geometric recipe as well as an algebraic method that applies to general stabilizer states of arbitrary size. Moreover, as a crucial step of the latter recipe, we develop a scalable graph-transformation algorithm that creates a link between two specific nodes in a graph using a sequence of local complementation operations. We develop open-source Python packages for these transformations, illustrate the methodology by applying it to a noisy topological color code, and study how the witness- and measurement-based lower bounds of localizable entanglement vary with the distance between the chosen qubits.
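A single local complementation step — the primitive the graph-transformation algorithm chains together — complements the subgraph induced by the neighbours of the chosen vertex while leaving all other edges unchanged. A minimal standalone sketch (not the paper's open-source packages):

```python
def local_complement(edges, v):
    """Local complementation at vertex v: toggle every edge between
    two neighbours of v; all other edges are unchanged."""
    edge_set = {frozenset(e) for e in edges}
    nbrs = sorted({u for e in edge_set if v in e for u in e if u != v})
    for i, a in enumerate(nbrs):
        for b in nbrs[i + 1:]:
            edge_set ^= {frozenset((a, b))}   # symmetric difference = toggle
    return {tuple(sorted(e)) for e in edge_set}

# Path graph 0-1-2: local complementation at the middle vertex toggles
# the (absent) edge between its neighbours 0 and 2, yielding a triangle.
g = {(0, 1), (1, 2)}
assert local_complement(g, 1) == {(0, 1), (1, 2), (0, 2)}
# The operation is an involution: applying it twice restores the graph.
assert local_complement(local_complement(g, 1), 1) == g
```

On graph states, each such step corresponds to single-qubit Clifford operations, which is why a sequence of local complementations can create a link between two nodes without changing the entanglement accessible by local means.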