Although universal quantum computers ideally solve problems such as factoring integers exponentially more efficiently than classical machines, the formidable challenges in building such devices motivate the demonstration of simpler, problem-specific algorithms that still promise a quantum speedup. We constructed a quantum boson-sampling machine (QBSM) to sample the output distribution resulting from the nonclassical interference of photons in an integrated photonic circuit, a problem thought to be exponentially hard to solve classically. Unlike universal quantum computation, boson sampling merely requires indistinguishable photons, linear state evolution, and detectors. We benchmarked our QBSM with three and four photons and analyzed sources of sampling inaccuracy. Scaling up to larger devices could offer the first definitive quantum-enhanced computation.
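The sampling problem above reduces to matrix permanents: in standard boson-sampling theory, the probability of a collision-free detection pattern is the squared modulus of the permanent of a submatrix of the circuit's unitary, which is why classical simulation is believed to be hard. A minimal numerical sketch of that relation (the 5-mode interferometer, input modes, and photon number here are illustrative assumptions, not the experiment's parameters):

```python
import numpy as np
from itertools import combinations

def permanent(a):
    """Permanent of a square matrix via Ryser's formula, O(2^n * n)."""
    n = a.shape[0]
    total = 0.0
    for k in range(1, 2**n):
        cols = [j for j in range(n) if (k >> j) & 1]
        total += (-1)**len(cols) * np.prod(a[:, cols].sum(axis=1))
    return (-1)**n * total

# hypothetical 5-mode interferometer with 3 photons injected into modes 0,1,2
rng = np.random.default_rng(0)
m, n = 5, 3
q, _ = np.linalg.qr(rng.normal(size=(m, m)) + 1j * rng.normal(size=(m, m)))

# probability of detecting exactly one photon in each output mode of S
probs = {S: abs(permanent(q[np.ix_(list(S), range(n))]))**2
         for S in combinations(range(m), n)}
# collision-free outcomes alone sum to less than 1:
# bunched (multi-photon) outcomes carry the remaining probability
assert 0 < sum(probs.values()) < 1
```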
The fundamental problem faced in quantum chemistry is the calculation of molecular properties, which are of practical importance in fields ranging from materials science to biochemistry. Within chemical precision, the total energy of a molecule, as well as most other properties, can be calculated by solving the Schrödinger equation. However, the computational resources required to obtain exact solutions on a conventional computer generally increase exponentially with the number of atoms involved 1,2. This renders such calculations intractable for all but the smallest of systems. Recently, an efficient algorithm has been proposed that enables a quantum computer to overcome this problem by achieving only polynomial resource scaling with system size 2,3,4. A quantum computer would therefore provide an extremely powerful tool for new science and technology. Here we present a photonic implementation for the smallest problem: obtaining the energies of H2, the hydrogen molecule, in a minimal basis. We perform a key algorithmic step, the iterative phase estimation algorithm 5,6,7,8, in full, achieving a high level of precision and robustness to error. We implement the other algorithmic steps with assistance from a classical computer and explain how this non-scalable approach could be avoided. Finally, we provide new theoretical results which lay the foundations for the next generation of simulation experiments using quantum computers. We have made early experimental progress towards the long-term goal of exploiting quantum information to speed up quantum chemistry calculations.

Experimentalists are just beginning to command the level of control over quantum systems required to explore their information processing capabilities. An important long-term application is to simulate and calculate properties of other many-body quantum systems.
Pioneering experiments were first performed using nuclear-magnetic-resonance-based systems to simulate quantum oscillators 9, leading up to recent simulations of a pairing Hamiltonian 7,10. Very recently, the phase transitions of a two-spin quantum magnet were simulated 11 using an ion-trap system. Here we simulate a quantum chemical system and calculate its energy spectrum, using a photonic system. Molecular energies are represented as the eigenvalues of an associated time-independent Hamiltonian Ĥ and can be efficiently obtained to fixed accuracy, using a quantum algorithm with three distinct steps 6: encoding a molecular wavefunction into qubits; simulating its time evolution using quantum logic gates; and extracting the approximate energy using the phase estimation algorithm 3,12. The latter is a general-purpose quantum algorithm for evaluating the eigenvalues of arbitrary Hermitian or unitary operators. The algorithm estimates the phase, φ, accumulated by a molecular eigenstate, |Ψ⟩, under the action of the time-evolution operator, Û = e^(−iĤt/ℏ), i.e., Û|Ψ⟩ = e^(−iEt/ℏ)|Ψ⟩ ≡ e^(i2πφ)|Ψ⟩, where E is the energy eigenvalue of |Ψ⟩. Therefore, estimating the phase for each eigenstate amounts to estimating the eigenvalues of the Hamiltonian...
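The eigenvalue-phase correspondence above can be checked in a toy numerical sketch (the 2×2 Hamiltonian is an arbitrary stand-in, not the H2 Hamiltonian; units with ℏ = 1 are assumed):

```python
import numpy as np

# hypothetical 2x2 Hamiltonian standing in for the molecular problem (hbar = 1)
H = np.array([[1.0, 0.5],
              [0.5, -0.3]])
E, V = np.linalg.eigh(H)
psi = V[:, 0]                        # an eigenstate of H, eigenvalue E[0]

# time-evolution operator U = exp(-i H t), built from the eigendecomposition
t = 0.1
U = V @ np.diag(np.exp(-1j * E * t)) @ V.conj().T

# the accumulated phase of the eigenstate encodes its energy: <psi|U|psi> = e^{-iEt}
phase = np.angle(psi.conj() @ U @ psi)
E_est = -phase / t

assert abs(E_est - E[0]) < 1e-9      # energy recovered from the phase
```

In the experiment the phase is read out bit by bit with the iterative phase estimation algorithm rather than computed from the full operator, but the quantity being estimated is the same.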
Entanglement is widely believed to lie at the heart of the advantages offered by a quantum computer. This belief is supported by the discovery that a noiseless (pure-state) quantum computer must generate a large amount of entanglement in order to offer any speed-up over a classical computer. However, deterministic quantum computation with one pure qubit (DQC1), which employs noisy (mixed) states, is an efficient model that generates at most a marginal amount of entanglement. Although this model cannot implement an arbitrary algorithm, it can efficiently solve a range of problems of significant importance to the scientific community. Here we experimentally implement a first-order case of a key DQC1 algorithm and explicitly characterise the non-classical correlations generated. Our results show that, while there is no entanglement, the algorithm does give rise to other non-classical correlations, which we quantify using the quantum discord, a stronger measure of non-classical correlations that includes entanglement as a subset. Our results suggest that discord could replace entanglement as a necessary resource for a quantum computational speed-up. Furthermore, DQC1 is far less resource-intensive than universal quantum computing, and our implementation in a scalable architecture highlights the model as a practical short-term goal.

In contrast to the highly pure multi-qubit states required for the conventional models of quantum computing [1, 2], DQC1 employs only a single qubit in a pure state, alongside a register of qubits in the fully mixed state [3]. While this model is strictly less powerful than a universal quantum computer (where one can implement any arbitrary algorithm), it can still efficiently solve important problems that are thought to be classically intractable. The application originally identified was the efficient simulation of some quantum systems [3].
Since then, exponential speed-ups have been identified in estimating: the average fidelity decay under quantum maps [4]; quadratically signed weight enumerators [5]; and the Jones polynomial in knot theory [6]. Recently it has been shown that DQC1 also affords efficient parameter estimation at the quantum metrology limit [7]. Furthermore, attempts to find an efficient way of classically simulating DQC1 have failed [8]. These results provide strong evidence that a device capable of implementing scalable DQC1 algorithms would be an extremely useful tool.

Besides the practical applications, DQC1 is also fascinating from a fundamental perspective. For example, it is straightforward to show that a model employing only fully mixed qubits offers no advantage over a classical computer. It is therefore surprising that the addition of only a single pure qubit offers such a dramatic increase in computational power. Furthermore, an important quantum information result is that a pure-state quantum computer can only offer an advantage over a classical approach if it generates an amount of entanglement that grows with the size of the problem being tackled [9,10]. This support...
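The canonical DQC1 task is estimating the normalized trace of a unitary: after a controlled-U, the pure control qubit satisfies ⟨X⟩ + i⟨Y⟩ = Tr(U)/2^n even though the register never leaves the fully mixed state. A direct density-matrix sketch (the 4×4 unitary is a randomly chosen illustration, not the gate used in the experiment):

```python
import numpy as np

def dqc1_trace(u):
    """DQC1 sketch: read Tr(u)/dim off the control qubit's <X> + i<Y>,
    with the register held in the fully mixed state."""
    dim = u.shape[0]
    # control qubit in |+><+| (i.e. after the Hadamard), register fully mixed
    plus = np.full((2, 2), 0.5)
    rho = np.kron(plus, np.eye(dim) / dim)
    # controlled-U on the register (control is the first tensor factor)
    cu = np.block([[np.eye(dim), np.zeros((dim, dim))],
                   [np.zeros((dim, dim)), u]])
    rho = cu @ rho @ cu.conj().T
    # Pauli X and Y measurements on the control qubit
    sx = np.kron(np.array([[0, 1], [1, 0]]), np.eye(dim))
    sy = np.kron(np.array([[0, -1j], [1j, 0]]), np.eye(dim))
    return np.trace(sx @ rho) + 1j * np.trace(sy @ rho)

# two-qubit register with a hypothetical random unitary
rng = np.random.default_rng(1)
q, _ = np.linalg.qr(rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4)))
assert np.allclose(dqc1_trace(q), np.trace(q) / 4)
```

No known classical algorithm evaluates Tr(U)/2^n efficiently for general circuits U, which is the source of the model's power.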
Quantum computation promises to solve fundamental, yet otherwise intractable, problems across a range of active fields of research. Recently, universal quantum logic-gate sets, the elemental building blocks for a quantum computer, have been demonstrated in several physical architectures. A serious obstacle to a full-scale implementation is the large number of these gates required to build even small quantum circuits. Here, we present and demonstrate a general technique that harnesses multi-level information carriers to significantly reduce this number, enabling the construction of key quantum circuits with existing technology. We present implementations of two key quantum circuits: the three-qubit Toffoli gate and the general two-qubit controlled-unitary gate. Although our experiment is carried out in a photonic architecture, the technique is independent of the particular physical encoding of quantum information and has the potential for wider application.

The realization of a full-scale quantum computer presents one of the most challenging problems facing modern science. Even implementing small-scale quantum algorithms requires a high level of control over multiple quantum systems. Recently, much progress has been made with demonstrations of universal quantum gate sets in a number of physical architectures, including ion traps 1,2, linear optics 3-6, superconductors 7,8 and atoms 9,10. In theory, these gates can now be put together to implement any quantum circuit and build a scalable quantum computer. In practice, there are many significant obstacles that will require both theoretical and technological developments to overcome. One is the sheer number of elemental gates required to build quantum logic circuits.

Most approaches to quantum computing use qubits, the quantum version of bits. A qubit is a two-level quantum system that can be represented mathematically by a vector in a two-dimensional Hilbert space.
Realizing qubits typically requires enforcing a two-level structure on systems that are naturally far more complex and which have many readily accessible degrees of freedom, such as atoms, ions or photons. Here, we show how harnessing these extra levels during computation significantly reduces the number of elemental gates required to build key quantum circuits. Because the technique is independent of the physical encoding of quantum information and of the way in which the elemental gates are themselves constructed, it has the potential to be used in conjunction with existing gate technology in a wide variety of architectures. Our technique extends a recent proposal 11, and we use it to demonstrate two key quantum logic circuits: the Toffoli and controlled-unitary 12 gates. We first outline the technique in a general context, then present an experimental realization in a linear-optic architecture; without our resource-saving technique, linear-optic implementations of these gates are infeasible with current technology.

Simplifying the Toffoli gate

One of the most important quantum logic gates is the Toffoli 1...
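One way to see the saving is a small numerical sketch: promoting one carrier to a qutrit lets a Toffoli be built from three two-body gates, by shelving the second control's |1⟩ population in the extra level |2⟩ so that a single level-controlled flip effectively sees both controls at once. This is an illustrative construction in the spirit of the technique, not necessarily the exact circuit used in the experiment:

```python
import numpy as np

I2, I3 = np.eye(2), np.eye(3)
X = np.array([[0, 1], [1, 0]])
Xa = np.array([[1, 0, 0],          # swap the |1> and |2> levels of the qutrit
               [0, 0, 1],
               [0, 1, 0]])
P0, P1 = np.diag([1, 0]), np.diag([0, 1])   # qubit projectors
P2 = np.diag([0, 0, 1])                      # projector onto qutrit level |2>

# registers: control qubit (2) x control-promoted-to-qutrit (3) x target qubit (2)
# gate 1: swap |1>,|2> on the qutrit, controlled on the first qubit being |1>
G1 = np.kron(P0, np.kron(I3, I2)) + np.kron(P1, np.kron(Xa, I2))
# gate 2: flip the target iff the qutrit sits in the shelved level |2>
G2 = np.kron(I2, np.kron(I3 - P2, I2)) + np.kron(I2, np.kron(P2, X))
# gate 3 = gate 1 again, undoing the level shift: three two-body gates in total
toffoli_via_qutrit = G1 @ G2 @ G1

def idx(a, b, c):                   # basis index in the 2x3x2 space
    return (a * 3 + b) * 2 + c

# verify the Toffoli truth table on the qubit subspace {|0>,|1>} of the qutrit
for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            out = toffoli_via_qutrit[:, idx(a, b, c)]
            assert abs(out[idx(a, b, c ^ (a & b))] - 1) < 1e-12
```

The standard qubit-only decomposition of the Toffoli needs five two-qubit gates (six CNOTs in the CNOT-plus-single-qubit gate set), so the extra level roughly halves the two-body gate count.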
We study the simultaneous estimation of multiple phases as a discretised model for the imaging of a phase object. We identify quantum probe states that provide an enhancement compared to the best quantum scheme for the estimation of each individual phase separately, as well as improvements over classical strategies. Our strategy reduces the total variance of the estimation, relative to individual quantum estimation schemes, by a factor that scales as O(d), where d is the number of phases. Finally, we study the attainability of this limit using realistic probes and photon-number-resolving detectors. This is a problem in which an intrinsic advantage is derived from the estimation of multiple parameters simultaneously.

Introduction. Recent developments in quantum metrology point to a new frontier of parameter estimation in which exploiting quantum states enables higher precision than can be achieved using only classical resources. Much of the work in this field to date has been directed towards the estimation of a single Hamiltonian parameter. This has been explored both theoretically [1-13] and experimentally, with the estimation of optical phase shifts by means of interferometry providing the dominant paradigm and photonic systems the leading platform [14-18].

One of the most important metrology problems to the wider research community is that of microscopy and imaging. Producing a quantum advantage in imaging would be of significant benefit in fields such as biology, particularly for the imaging of samples that are sensitive to the total illumination. Various approaches to quantum imaging have been proposed, typically exploring methods for increasing the diffraction-limited resolution of optical imaging systems [19-25]. A recent classical investigation of quantum-enhanced imaging made use of point estimation theory, quantifying differences between images by means of a single parameter [26].
However, imaging is inherently a multi-parameter estimation problem, and deeper insights can be gained by studying it as such.

In this Letter, we consider a discretised model for phase imaging based on this approach. Phase imaging is a cornerstone of optical microscopy, typically realised using the related techniques of phase contrast and differential interference contrast imaging [27], which allow differences in refractive index to be detected in otherwise transparent media. So far, the potential for quantum enhancements to these techniques has yet to be explored. Our approach maps phase imaging onto the problem of multiple simultaneous phase estimation.

Our results provide a strategy for the estimation of multiple phases using correlated quantum states, in which the multi-parameter nature of the problem leads to an intrinsic benefit when exploiting quantum resources. A surprising outcome of our analysis is that our quantum strategy provides an O(d) advantage, where d is the number of phases, over the optimal quantum individual estimation scheme of usi...
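The claimed O(d) separation can be illustrated with the bare scalings, constants and attainability questions set aside (N is an assumed fixed total photon budget shared by both strategies):

```python
import numpy as np

N = 1000                              # total photon budget (assumed fixed)
d_values = np.array([2, 4, 8, 16, 32])

# individual quantum estimation: split N photons over d phases, each phase
# estimated at the Heisenberg limit (d/N)^2, so total variance ~ d^3 / N^2
var_individual = d_values**3 / N**2
# simultaneous estimation with correlated probes: total variance ~ d^2 / N^2
# (constants omitted; only the relative scaling is illustrated here)
var_simultaneous = d_values**2 / N**2

ratio = var_individual / var_simultaneous
assert np.allclose(ratio, d_values)   # the O(d) advantage grows with d
```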
By weakly measuring the polarization of a photon between two strong polarization measurements, we experimentally investigate the correlation between the appearance of anomalous values in quantum weak measurements and the violation of realism and nonintrusiveness of measurements. A quantitative formulation of the latter concept is expressed in terms of a Leggett-Garg inequality for the outcomes of subsequent measurements of an individual quantum system. We experimentally violate the Leggett-Garg inequality for several measurement strengths. Furthermore, we experimentally demonstrate that there is a one-to-one correlation between achieving strange weak values and violating the Leggett-Garg inequality.

There has been much debate in quantum physics over the question of whether measurable quantities have definite values prior to their measurement. Key ideas addressing this question include the Bell inequality, which considers correlations between measurements on components of a composite system that are space-like separated (1, 2), and contextuality tests, which examine whether identical experiments produce the same results in different "classically equivalent" contexts (3, 4). A conceptually elegant extension to these ideas is the Leggett-Garg inequality (LGI) (5), which is constructed from the correlation functions of a series of three consecutive measurements on a single system. Leggett and Garg derive limits based on the joint assumptions of (i) macroscopic realism: an observable for a system has a definite value at all times; and (ii) noninvasive measurement: it is possible to determine this value with arbitrarily small disturbance on the subsequent evolution of the system. The limits on the value of the inequality derived from these assumptions differ from the predictions of quantum mechanics.
Thus the LGI tests the limits of measurement and macroscopic realism.

Here we present an experimental test of a generalized LGI using weak measurements (6-9) of the polarization of single photons, and measure violations by up to 14 standard deviations. Additionally, we experimentally demonstrate a one-to-one relation (10, 11) between LGI violations and strange weak-valued measurements (6-8), which also arise from the inability to assign values to physical quantities between an earlier and a later measurement.

Testing the LGI requires monitoring the system without projecting it into a specific state. For a quantum system, a quantum nondemolition (QND) experiment (12-14) would be one way to do this. But QND measurements are not the only way to perform a noninvasive measurement. A generalization of the QND measurement is the so-called weak measurement (6). A weak measurement is one for which it is possible to adjust the strength of the measurement and, in principle, to reduce the back-action on the system to an arbitrarily small amount. In other words, a weak measurement is one for which the level of "invasiveness" can be controlled.
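For a qubit precessing between ideal projective measurements of σz, the standard quantum correlators already break the macrorealist bound K = C12 + C23 − C13 ≤ 1. A minimal simulation of that textbook case (measurement strength, photonic encoding, and experimental imperfections are ignored here):

```python
import numpy as np

Z = np.diag([1.0, -1.0])

def rot(theta):
    """Precession about the x axis by angle theta: exp(-i*theta*X/2)."""
    return np.array([[np.cos(theta / 2), -1j * np.sin(theta / 2)],
                     [-1j * np.sin(theta / 2), np.cos(theta / 2)]])

def correlator(theta_ab):
    """<Q(a)Q(b)> for projective sigma_z measurements on a qubit that starts
    maximally mixed and precesses by theta_ab between the two measurements."""
    rho = np.eye(2) / 2
    u = rot(theta_ab)
    c = 0.0
    for qa, proj in ((+1, np.diag([1.0, 0.0])), (-1, np.diag([0.0, 1.0]))):
        p_a = np.trace(proj @ rho).real            # probability of outcome qa
        rho_a = proj @ rho @ proj / p_a            # collapse after measurement a
        rho_b = u @ rho_a @ u.conj().T             # evolve to measurement time b
        c += p_a * qa * np.trace(Z @ rho_b).real   # qa * <Q(b)>
    return c

theta = np.pi / 3                                  # equal spacing, C_ij = cos(theta)
K = correlator(theta) + correlator(theta) - correlator(2 * theta)
assert K > 1                                       # macrorealism bounds K <= 1
```

At θ = π/3 the combination reaches K = 1.5, the maximal qubit violation; the experiment probes the same combination with measurements of tunable strength.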
Shor's powerful quantum algorithm for factoring represents a major challenge in quantum computation. Here, we implement a compiled version in a photonic system. For the first time, we demonstrate the core processes, coherent control, and resultant entangled states required in a full-scale implementation. These are necessary steps on the path towards scalable quantum computing. Our results highlight that the performance of a quantum algorithm is not the same as the performance of the underlying quantum circuit, and stress the importance of developing techniques for characterizing quantum algorithms.
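The quantum core of Shor's algorithm is order finding; the classical post-processing that turns an order into factors can be sketched as follows, with the order computed classically here as a stand-in for the compiled quantum routine (N = 15 and base a = 2 are the typical compiled-demonstration choices, assumed rather than taken from this experiment):

```python
from math import gcd

def order(a, n):
    """Classical stand-in for the quantum order-finding core of Shor's
    algorithm: the smallest r > 0 with a^r = 1 (mod n)."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

n, a = 15, 2
r = order(a, n)               # r = 4 for a = 2, n = 15
assert r % 2 == 0             # an even order lets us split n

p = gcd(a**(r // 2) - 1, n)   # gcd(3, 15) = 3
q = gcd(a**(r // 2) + 1, n)   # gcd(5, 15) = 5
assert p * q == n             # 3 * 5 = 15
```

On a quantum processor the while-loop is replaced by modular exponentiation plus the quantum Fourier transform, which is where the exponential speed-up over classical factoring comes from.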
Quantum mechanics imposes that any amplifier that works independently of the phase of the input signal has to introduce some excess noise. The impossibility of such a noiseless amplifier is rooted in the unitarity and linearity of quantum evolution. A possible way to circumvent this limitation is to interrupt such evolution via a measurement, providing a random outcome able to herald a successful, and noiseless, amplification event. Here we show a successful realisation of such an approach: we perform a full characterization of an amplified coherent state using quantum homodyne tomography, and observe a strong heralded amplification, with about 6 dB of gain and a noise level significantly smaller than the minimum allowed for any ordinary phase-independent device.

Quantum optical detection techniques are so advanced that quantum fluctuations are the main source of noise. Therefore, when amplifying optical signals, one has to confront intrinsic limitations of the process: no amplifier can work independently of the phase of its input unless some additional noise is added [1]. The origin of this limitation is that the extra noise is needed for the output field to obey Heisenberg's uncertainty relation. It is also connected to the impossibility of realizing arbitrarily faithful copies of a quantum signal [2,3], and is thus deeply rooted in the linear and unitary evolution of quantum mechanical systems.

Various aspects of this limitation have been studied using optical parametric amplifiers [4,5,6,7]. For instance, a non-degenerate optical parametric amplifier amplifies all input phases and introduces the minimal level of added noise, which degrades the signal-to-noise ratio [1]. The same process, driven in the degenerate regime, may provide amplification preserving the signal-to-noise ratio.
However, this occurs in a phase-dependent fashion: only the part of the signal in phase with the pump light will be amplified, while the part which is 90 degrees out of phase with the pump will be deamplified [4,5].

A more intriguing idea is to find a way to tamper with the linear evolution of quantum mechanics; this is actually possible, though non-deterministically, by conditioning our observation upon the result of a measurement [8]. Noiseless amplification can then take place, but only a fraction of the time, and the correct operation is heralded. This strategy is commonly adopted for building effective nonlinearities in linear quantum optical gates [9,10].

Here we follow the proposal of Ralph and Lund [11] to demonstrate experimentally that heralded non-deterministic amplification can realise processes which would be impossible for usual amplifiers. Unlike another realisation [12], we have direct access to the output state via state tomography, so we can provide a complete description of the process and analyse the limitations arising from non-ideal components. Our study is relevant in the long-term view of the integration of amplifiers in quantum communication lines [13].

The conceptual layout of the noiseless amplifier...