It is predicted that quantum computers will dramatically outperform their conventional counterparts. However, large-scale universal quantum computers are yet to be built. Boson sampling [1] is a rudimentary quantum algorithm tailored to the platform of linear optics, which has sparked interest as a rapid way to demonstrate such quantum supremacy [2-6]. Photon statistics are governed by intractable matrix functions, which suggests that sampling from the distribution obtained by injecting photons into a linear optical network could be solved more quickly by a photonic experiment than by a classical computer. The apparently low resource requirements for large boson sampling experiments have raised expectations of a near-term demonstration of quantum supremacy by boson sampling [7,8]. Here we present classical boson sampling algorithms and theoretical analyses of the prospects for scaling boson sampling experiments, showing that near-term quantum supremacy via boson sampling is unlikely. Our classical algorithm, based on Metropolised independence sampling, allowed the boson sampling problem to be solved for 30 photons with standard computing hardware. Compared to current experiments, a demonstration of quantum supremacy over a successful implementation of these classical methods on a supercomputer would require the number of photons and experimental components to increase by orders of magnitude, while tackling exponentially scaling photon loss.

It is believed that new types of computing machines will be constructed to exploit quantum mechanics for an exponential speed advantage in solving certain problems compared with classical computers [9]. Recent large state and private investments in developing quantum technologies have increased interest in this challenge. However, it is not yet experimentally proven that a large, computationally useful quantum system can be assembled, and such a task is highly non-trivial given the challenge of overcoming the effects of errors in these systems.

Boson sampling is a simple task which is native to linear optics and has captured the imagination of quantum scientists because it seems possible that the anticipated supremacy of quantum machines could be demonstrated by a near-term experiment. The advent of integrated quantum photonics [10] has enabled large, complex, stable and programmable optical circuitry [11,12], while recent advances in photon generation [13-15] and detection [16,17] have also been impressive. Generating many photons, evolving them under a large linear optical unitary transformation and then detecting them appears feasible, so the role of a boson sampling machine as a rudimentary but legitimate computing device is particularly appealing. Compared to a universal digital quantum computer, the resources required for experimental boson sampling appear much less demanding. This approach of designing quantum algorithms to demonstrate computational supremacy with near-term experimental capabilities has inspired a raft of proposals suited to different hardware platforms [18]...
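To make the two ingredients mentioned above concrete, here is a minimal, illustrative sketch (not the authors' code): photon statistics are governed by matrix permanents, computed below via Ryser's formula, and a Metropolised independence sampler (MIS) proposes output patterns from the classically easy "distinguishable photon" distribution and accepts them against the quantum, permanent-based distribution. The restriction to collision-free outputs, the Haar-random interferometer, the toy problem size and all function names are assumptions made purely for illustration.

```python
import numpy as np
from itertools import combinations

def permanent(A):
    """Matrix permanent via Ryser's formula, O(2^n * n) time."""
    n = A.shape[0]
    total = 0.0 + 0.0j
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            row_sums = A[:, list(cols)].sum(axis=1)
            total += (-1) ** k * np.prod(row_sums)
    return total * (-1) ** n

def submatrix(U, outputs, n):
    """Rows = occupied output modes, columns = the n input modes used."""
    return U[np.array(outputs)][:, :n]

def quantum_prob(U, outputs, n):
    """Collision-free boson sampling weight, |Perm(U_S)|^2 (unnormalised)."""
    return abs(permanent(submatrix(U, outputs, n))) ** 2

def classical_prob(U, outputs, n):
    """Distinguishable-photon weight of the same pattern, Perm(|U_S|^2)."""
    return permanent(np.abs(submatrix(U, outputs, n)) ** 2).real

def propose_distinguishable(U, n, rng):
    """Sample a collision-free output pattern as if the photons were distinguishable:
    photon k (input mode k) exits output mode j with probability |U[j, k]|^2;
    patterns with two photons in one mode are simply redrawn."""
    m = U.shape[0]
    probs = np.abs(U[:, :n]) ** 2
    probs /= probs.sum(axis=0)                  # guard against rounding error
    while True:
        outs = [rng.choice(m, p=probs[:, k]) for k in range(n)]
        if len(set(outs)) == n:
            return tuple(sorted(int(o) for o in outs))

def mis_boson_sampler(U, n, n_samples, rng):
    """Metropolised independence sampling: propose from the classical distribution,
    accept against the quantum (permanent-based) distribution."""
    state = propose_distinguishable(U, n, rng)
    p_s, q_s = quantum_prob(U, state, n), classical_prob(U, state, n)
    samples = []
    for _ in range(n_samples):
        cand = propose_distinguishable(U, n, rng)
        p_c, q_c = quantum_prob(U, cand, n), classical_prob(U, cand, n)
        if rng.random() < min(1.0, (p_c * q_s) / (p_s * q_c)):  # MIS acceptance rule
            state, p_s, q_s = cand, p_c, q_c
        samples.append(state)
    return samples

# Toy instance: 3 photons in a Haar-random 8-mode interferometer.
rng = np.random.default_rng(0)
X = (rng.normal(size=(8, 8)) + 1j * rng.normal(size=(8, 8))) / np.sqrt(2)
U, _ = np.linalg.qr(X)
print(mis_boson_sampler(U, n=3, n_samples=5, rng=rng))
```

The independence-sampler acceptance rule min(1, [p(s')/q(s')] / [p(s)/q(s)]) is what keeps the chain targeting the quantum distribution even though every proposal is drawn from the cheap classical one; the exponential cost is confined to the permanent evaluations.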
We introduce fusion-based quantum computing (FBQC), a model of universal quantum computation in which entangling measurements, called fusions, are performed on the qubits of small, constant-sized entangled resource states. We introduce a stabilizer formalism for analyzing fault tolerance and computation in these schemes. This framework naturally captures the error structure that arises in certain physical systems for quantum computing, such as photonics. FBQC can offer significant architectural simplifications, enabling hardware made up of many identical modules, requiring an extremely low depth of operations on each physical qubit and reducing classical processing requirements. We present two pedagogical examples of fault-tolerant schemes constructed in this framework and numerically evaluate their thresholds under a hardware-agnostic fusion error model including both erasure and Pauli error. We also study an error model of linear optical quantum computing with probabilistic fusion and photon loss. In FBQC the non-determinism of fusion is dealt with directly by the quantum error correction protocol, along with other errors. We find that tailoring the fault-tolerance framework to the physical system allows the scheme to have a higher threshold than schemes reported in the literature. We present a ballistic scheme which can tolerate a 10.4% probability of suffering photon loss in each fusion.
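The following is a hedged toy sketch (our own construction, not drawn from the paper) of the kind of fusion error model the abstract describes: each two-qubit fusion reports two parity outcomes, the whole fusion can be erased (as when a photon is lost or the probabilistic fusion fails), and surviving outcomes can be flipped by Pauli-type errors. The specific probabilities and names below are placeholders, not the paper's threshold values.

```python
import random

def sample_fusion(p_erasure=0.10, p_flip=0.01, rng=random):
    """Return (xx_flip, zz_flip) for one fusion; None marks an erased outcome.
    0 means the reported parity is correct, 1 means it suffered a Pauli-type flip."""
    if rng.random() < p_erasure:
        return (None, None)                          # fusion erased (e.g. photon lost)
    xx_flip = 1 if rng.random() < p_flip else 0
    zz_flip = 1 if rng.random() < p_flip else 0
    return (xx_flip, zz_flip)

# Estimate the fraction of fusions yielding usable (non-erased) outcomes.
rng = random.Random(0)
outcomes = [sample_fusion(rng=rng) for _ in range(100_000)]
usable = sum(o != (None, None) for o in outcomes) / len(outcomes)
print(f"usable fusion fraction ~ {usable:.3f}")
```

In an actual FBQC simulation these sampled erasures and flips would be fed into a decoder for the chosen fault-tolerant scheme; the sketch only illustrates the structure of the error channel acting on each fusion.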
Engineering apparatus that harnesses quantum theory promises to offer practical advantages over current technology. A fundamentally more powerful prospect is that such quantum technologies could outperform any future iteration of their classical counterparts, no matter how well the attributes of those classical strategies are improved. Here, for optical direct absorption measurement, we experimentally demonstrate such an instance of an absolute advantage per photon probe that is exposed to the absorptive sample. We use correlated intensity measurements of spontaneous parametric down-conversion using a commercially available air-cooled CCD, a new estimator for data analysis and a high-heralding-efficiency photon-pair source. We show that this enables an improvement in the precision of measurement, per photon probe, beyond what is achievable with an ideal coherent state (a perfect laser) detected with 100% efficient and noiseless detection. We observe this absolute improvement for up to 50% absorption, with a maximum observed improvement factor of 1.46. This equates to around a 32% reduction in the total number of photons traversing an optical sample, compared with any future direct optical absorption measurement using classical light.
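As a quick arithmetic check (our reading, not the paper's derivation): if the precision available per probe photon improves by a factor F, then reaching the same target precision requires 1/F as many photons, i.e. a fractional saving of 1 - 1/F, which connects the quoted factor of 1.46 to the quoted "around 32%" reduction.

```python
# Hedged arithmetic sketch: per-photon precision improvement -> photon-budget saving.
F = 1.46                       # maximum observed factor of improvement
saving = 1 - 1 / F             # fraction of photons no longer needed
print(f"photon reduction ~ {saving:.1%}")   # ~31.5%, i.e. "around 32%"
```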
Harnessing the unique properties of quantum mechanics offers the possibility of delivering new technologies that fundamentally outperform their classical counterparts. These technologies only deliver advantages when their components operate with performance beyond specific thresholds. For optical quantum metrology, the biggest challenge affecting these performance thresholds is optical loss. Here we demonstrate how including an optical delay and an optical switch in a feed-forward configuration with a stable and efficient correlated photon-pair source reduces the detector efficiency required to enable quantum-enhanced sensing down to the detection level of single photons. When the switch is active, we observe a factor of improvement in precision of 1.27 for transmission measurement on a per-input-photon basis, compared to the performance of a laser emitting an ideal coherent state and measured with the same detection efficiency as our setup. When the switch is inoperative, we observe no quantum advantage.

Quantum mechanics quantifies the highest precision that is achievable in each type of optical measurement [1-3]. Single-photon probes measured with single-photon detectors are in principle optimal for gaining the most precision per unit intensity when measuring optical transmission [4]. However, in practice, optical loss and low component efficiencies prevent an advantage from being achieved using single-photon detectors [5]. One way to reduce the impact of low component efficiency is to incorporate fast optical switching and an optical delay into schemes based on heralded generation of quantum states [6]. This enables the use of a quantum state conditioned on the successful detection of a correlated signal; this is referred to as feed-forward.

Feed-forward is key to demonstrations of optical quantum computing [7]; it has been used in experiments that increase the generation rate [8-12] and signal-to-noise ratio [13] of heralded single photons, it has been used to calibrate single-photon detectors [14], and it has also been applied to gather evidence of single-photon sensitivity in animal vision [15]. Jakeman and Rarity proposed in Ref. [6] using feed-forward with correlated photon pairs to enable sub-shot-noise optical transmission measurements when component efficiency is otherwise not sufficient to permit a quantum advantage in passive direct detection [16-18]. But despite being identified as key to more general multi-photon entangled quantum state engineering for quantum metrology [19,20], feed-forward has not previously been implemented for quantum-enhanced parameter estimation. Here we implement the proposal of Ref. [6] (Fig. 1) to realise sub-shot-noise measurement of transmissivity, using single-photon detectors that are too low in efficiency to enable sub-shot-noise performance in a passive measurement.

The transmissivity η of a sample is in general estimated by measuring the reduction of light intensity from a known mean input value N̄_in to a reduced mean value N̄_out according ...
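The following is a hedged numerical sketch (our own construction, assuming standard Poisson and binomial counting statistics rather than the paper's analysis) of why single-photon probes can beat an ideal laser per probe photon when estimating a transmissivity η with the estimator η̂ = N_out / N_in: a coherent-state probe gives Var(η̂) ≈ η/N (the shot-noise limit), whereas ideal single-photon probes give Var(η̂) ≈ η(1-η)/N, so the ideal per-photon improvement factor is 1/(1-η). The experimentally observed factor of 1.27 is smaller because of finite detection efficiency, which this idealised sketch does not model.

```python
import numpy as np

def simulate(eta, n_probe, n_trials, rng):
    """Monte Carlo variances of eta_hat = N_out / N_in for two probe types."""
    # Coherent-state probe: Poissonian transmitted counts (shot-noise limited).
    coh = rng.poisson(eta * n_probe, size=n_trials) / n_probe
    # Ideal single-photon probes: each photon transmitted independently with prob eta.
    fock = rng.binomial(n_probe, eta, size=n_trials) / n_probe
    return coh.var(), fock.var()

rng = np.random.default_rng(1)
eta = 0.8
var_coh, var_fock = simulate(eta, n_probe=10_000, n_trials=20_000, rng=rng)
print(f"improvement factor ~ {var_coh / var_fock:.2f}  (ideal 1/(1-eta) = {1/(1-eta):.2f})")
```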