Fuelled by increasing computer power and algorithmic advances, machine learning techniques have become powerful tools for finding patterns in data. Quantum systems produce atypical patterns that classical systems are thought not to produce efficiently, so it is reasonable to postulate that quantum computers may outperform classical computers on machine learning tasks. The field of quantum machine learning explores how to devise and implement quantum software that could enable machine learning that is faster than that of classical computers. Recent work has produced quantum algorithms that could act as the building blocks of machine learning programs, but the hardware and software challenges are still considerable.
Unpredictability, or randomness, of the outcomes of measurements made on an entangled state can be certified provided that the statistics violate a Bell inequality. In the standard Bell scenario, where each party performs a single measurement on its share of the system, only a finite amount of randomness, of at most 4 log₂ d bits, can be certified from a pair of entangled particles of dimension d. Our work shows that this fundamental limitation can be overcome using sequences of (nonprojective) measurements on the same system. More precisely, we prove that one can certify any amount of random bits from a pair of qubits in a pure state as the resource, even if it is arbitrarily weakly entangled. In addition, this certification is achieved by near-maximal violation of a particular Bell inequality for each measurement in the sequence.

Introduction.—Bell's theorem [1] has shown that the predictions of quantum mechanics demonstrate non-locality. That is, they cannot be described by a theory in which there are objective properties of a system prior to measurement that satisfy the no-signalling principle (sometimes referred to as "local realism"). Thus, if one requires the no-signalling principle to be satisfied at the operational level, then the outcomes of measurements demonstrating non-locality must be unpredictable [1][2][3]. This unpredictability, or randomness, is not the result of ignorance about the system preparation but is intrinsic to the theory.

Although the connection between quantum non-locality (via Bell's theorem) and the existence of intrinsic randomness is well known [1][2][3][4], it was analyzed in a quantitative way only recently [5,6]. It was shown how to use non-locality (probability distributions that violate a Bell inequality) to certify the unpredictability of the outcomes of certain physical processes.
This was termed device-independent randomness certification, because the certification relies only on the statistical properties of the outcomes and not on how they were produced. The development of information protocols exploiting this certified form of randomness, such as device-independent randomness expansion [5][6][7] and amplification protocols [8,9], followed.

Entanglement is a necessary resource for quantum non-locality, which in turn is required for randomness certification. It is thus crucial to understand qualitatively and quantitatively how these three fundamental quantities relate to one another. In our work, we focus on asking how much certifiable randomness can be obtained from a single entangled state as a resource. Progress has been made in this direction for entangled states shared between two parties, Alice (A) and Bob (B), in the standard scenario where each party makes a single measurement on its share of the system and then discards it. An argument adapted from Ref. [10] shows that either of the two parties, A or B, can certify at most 2 log₂ d bits of randomness [11], where d is the dimension of the local Hilbert space the state lives in, which in turn implies a bound of 4 log₂ d bits...
Standard projective measurements (PMs) represent a subset of all possible measurements in quantum physics, which are defined by positive-operator-valued measures (POVMs). We study which quantum measurements are projective-simulable, that is, can be simulated using projective measurements and classical randomness. We first prove that every measurement on a given quantum system can be realized by classical randomization of projective measurements on the system plus an ancilla of the same dimension. Then, given a general measurement in dimension two or three, we show that deciding whether it is PM-simulable can be done by means of semidefinite programming. We also establish conditions for the simulation of measurements using projective ones that are valid for any dimension. As an application of our formalism, we improve the range of visibilities for which two-qubit Werner states do not violate any Bell inequality for all measurements. From an implementation point of view, our work provides bounds on the amount of white noise a measurement can tolerate before losing any advantage over projective ones.
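The first result above (every POVM is realizable projectively with one same-dimension ancilla) can be made concrete for a standard example, the qubit "tetrahedral" SIC-POVM. The sketch below is our own illustration, not the paper's construction: stacking the rank-1 Kraus vectors gives a Naimark isometry V from C² into C⁴ ≅ qubit ⊗ ancilla-qubit, and a computational-basis (projective) measurement after V reproduces the POVM statistics exactly.

```python
import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# Tetrahedral SIC-POVM: M_a = (I + v_a . sigma) / 4 with tetrahedron Bloch vectors.
vs = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]]) / np.sqrt(3)
povm = [(I2 + v[0]*sx + v[1]*sy + v[2]*sz) / 4 for v in vs]
assert np.allclose(sum(povm), I2)            # completeness

def kraus_vec(M):
    """Rank-1 element M = |w><w|: return |w> from the top eigenpair."""
    vals, vecs = np.linalg.eigh(M)
    return np.sqrt(vals[-1]) * vecs[:, -1]

# Naimark isometry: row a of V is <w_a|, so (V psi)_a = <w_a|psi>.
V = np.array([kraus_vec(M).conj() for M in povm])   # shape (4, 2)
assert np.allclose(V.conj().T @ V, I2)              # V is an isometry

# POVM statistics = projective (computational-basis) statistics on C^4.
rng = np.random.default_rng(0)
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)
p_povm = np.array([np.real(psi.conj() @ M @ psi) for M in povm])
p_proj = np.abs(V @ psi) ** 2
assert np.allclose(p_povm, p_proj)
print(np.round(p_povm, 4))
```

Extending V to a unitary on the full qubit ⊗ ancilla space (with the ancilla prepared in |0⟩) turns this into the system-plus-ancilla projective measurement stated in the abstract.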
By performing local projective measurements on a two-qubit entangled state one can certify in a device-independent way up to one bit of randomness. We show here that general measurements, defined by positive-operator-valued measures, can certify up to two bits of randomness, which is the optimal amount of randomness that can be certified from an entangled bit. General measurements thus provide an advantage over projective ones for device-independent randomness certification.

The non-local correlations observed when measuring entangled quantum particles certify the presence of intrinsic randomness in the measurement outputs in a way that is independent of the underlying physical realization of these correlations. While this relation between nonlocality and randomness had been noted by different authors since the seminal work by Bell [1,2], it is only recently that the tools to quantify the intrinsic randomness produced in Bell setups were provided [3][4][5]. These tools were initially introduced in the context of device-independent randomness generation [3,6-8], but have also allowed us to obtain a much better understanding of the relation between randomness and Bell violations, two of the most fundamental properties of quantum theory. For instance, today we know that maximal randomness can be certified from arbitrarily small amounts of nonlocality or entanglement [10], and that maximal randomness certification is possible in quantum theory, but not in general theories restricted only by the no-signalling principle [11].

Despite all this progress, there are still fundamental questions on the relation between randomness, nonlocality, and entanglement that remain completely unexplored. In this work we consider and solve one of them: we obtain the maximal amount of randomness that can be certified in a standard Bell scenario involving local measurements on one entangled bit or ebit.
In order to achieve this maximum, the use of general measurements beyond projective ones, known as positive-operator-valued measures (POVMs), is necessary. Thus, our results and techniques are also interesting because they provide one of the few examples in the context of Bell non-locality where these general measurements provide an advantage over standard projective measurements (other examples can be found in [14,15]).

We formulate the relation between randomness and non-locality in the setting of non-local guessing games, as considered in [4]. Such games involve two users, Alice and Bob, and an adversary, Eve. Alice and Bob perform local measurements on two separate quantum systems, labelled A and B. There are m_A and m_B possible measurements on particles A and B, each producing r_A and r_B possible results. Measurement choices are labelled by x and y, with x = 1, ..., m_A and y = 1, ..., m_B, while the corresponding results are labelled by a and b, with a = 1, ..., r_A and b = 1, ..., r_B, respectively. The behavior of Alice and Bob's systems is characterized by the finite set of m_A × m_B × r_A × r_B ...
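The local statistics behind the two-bit claim are easy to reproduce (this sketch is illustrative only; the full device-independent certification requires the Bell analysis of the paper). A four-outcome tetrahedral POVM applied to one half of a maximally entangled two-qubit state yields a uniform four-outcome distribution, i.e. a guessing probability of 1/4 and hence −log₂(1/4) = 2 bits, whereas any projective qubit measurement has at most two outcomes and thus at most one bit.

```python
import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# Four-outcome tetrahedral POVM on Alice's qubit.
vs = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]]) / np.sqrt(3)
povm = [(I2 + v[0]*sx + v[1]*sy + v[2]*sz) / 4 for v in vs]

# Maximally entangled state |Phi+> = (|00> + |11>) / sqrt(2).
phi = np.zeros(4, dtype=complex)
phi[0] = phi[3] = 1 / np.sqrt(2)

# p(a) = <Phi+| (M_a ⊗ I) |Phi+> = tr(M_a) / 2 = 1/4 for every outcome.
p = np.array([np.real(phi.conj() @ np.kron(M, I2) @ phi) for M in povm])
assert np.allclose(p, 0.25)

h_min = -np.log2(p.max())
print(h_min)   # 2.0 bits of local min-entropy
```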
Bell inequalities have traditionally been used to demonstrate that quantum theory is nonlocal, in the sense that there exist correlations generated from composite quantum states that cannot be explained by means of local hidden variables. With the advent of device-independent quantum information protocols, Bell inequalities have gained an additional role as certificates of relevant quantum properties. In this work, we consider the problem of designing Bell inequalities that are tailored to detect maximally entangled states. We introduce a class of Bell inequalities valid for an arbitrary number of measurements and results, derive analytically their tight classical, nonsignaling, and quantum bounds, and prove that the quantum bound is attained by maximally entangled states. Our inequalities can therefore find an application in device-independent protocols requiring maximally entangled states.
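For the simplest member of such families, the CHSH inequality, all three bounds can be checked directly (a hedged mini-example in our own notation, not the paper's general class): the classical bound by enumerating deterministic local strategies, the quantum bound with explicit measurements on the maximally entangled state, and the no-signaling bound via the PR box.

```python
import numpy as np
from itertools import product

# CHSH expression: S = E(0,0) + E(0,1) + E(1,0) - E(1,1).

# Classical bound: maximize over deterministic assignments a_x, b_y in {-1, +1}.
best_c = max(a[0]*b[0] + a[0]*b[1] + a[1]*b[0] - a[1]*b[1]
             for a in product([-1, 1], repeat=2)
             for b in product([-1, 1], repeat=2))
assert best_c == 2

# Quantum value: |Phi+> with x-z plane measurements, E(a, b) = cos(a - b).
E = lambda a, b: np.cos(a - b)
a0, a1, b0, b1 = 0.0, np.pi/2, np.pi/4, -np.pi/4
S_q = E(a0, b0) + E(a0, b1) + E(a1, b0) - E(a1, b1)
assert abs(S_q - 2 * np.sqrt(2)) < 1e-12   # attained by the maximally
                                           # entangled state (Tsirelson bound)

# No-signaling bound: the PR box reaches the algebraic maximum, S = 4.
print(best_c, S_q, 4)
```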
The resemblance between the methods used in quantum many-body physics and in machine learning has drawn considerable attention. In particular, tensor networks (TNs) and deep learning architectures bear striking similarities, to the extent that TNs can be used for machine learning. Previous results used one-dimensional TNs for image recognition, with limited scalability and flexibility. In this work, we train two-dimensional hierarchical TNs to solve image recognition problems, using a training algorithm derived from the multi-scale entanglement renormalization ansatz. This approach introduces mathematical connections among quantum many-body physics, quantum information theory, and machine learning. By keeping the TN unitary in the training phase, we define TN states that encode classes of images into quantum many-body states. We study the quantum features of these TN states, including quantum entanglement and fidelity, and find that these quantities characterize the image classes as well as the machine learning tasks.
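The basic forward pass of such a hierarchical TN can be sketched in a few lines (a deliberately tiny, hypothetical example with random, untrained tensors, not the paper's 2-D architecture): each pixel x ∈ [0,1] is embedded as a 2-vector φ(x) = (cos(πx/2), sin(πx/2)), and isometric order-3 tensors coarse-grain pairs of sites layer by layer, tree-style, until one vector of class scores remains.

```python
import numpy as np

rng = np.random.default_rng(1)

def embed(pixels):
    """Map each pixel to a normalized 2-d feature vector."""
    return np.stack([np.cos(np.pi * pixels / 2),
                     np.sin(np.pi * pixels / 2)], axis=-1)

def random_isometry(d_out, d_in):
    """Random tensor W with orthonormal rows (W W^dag = 1), mimicking the
    unitarity/isometry constraint kept during training."""
    q, _ = np.linalg.qr(rng.normal(size=(d_in, d_out)))
    return q.T                      # shape (d_out, d_in)

def tree_layer(vecs, W):
    """Coarse-grain neighbouring pairs: v_out = W (v_{2i} ⊗ v_{2i+1})."""
    return [W @ np.kron(vecs[i], vecs[i + 1])
            for i in range(0, len(vecs), 2)]

pixels = rng.random(8)              # a toy 8-pixel "image"
vecs = list(embed(pixels))          # 8 two-dimensional site vectors
for _ in range(3):                  # 8 -> 4 -> 2 -> 1 sites
    vecs = tree_layer(vecs, random_isometry(2, 4))

scores = np.abs(vecs[0]) ** 2       # toy two-class scores
print(scores / scores.sum())
```

Training would optimize each tensor (subject to the isometry constraint) against labelled images; here the tensors are random purely to show the contraction pattern.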
Quantum control is valuable for various quantum technologies such as high-fidelity gates for universal quantum computing, adaptive quantum-enhanced metrology, and ultra-cold atom manipulation. Although supervised machine learning and reinforcement learning are widely used for optimizing control parameters in classical systems, quantum control for parameter optimization is mainly pursued via gradient-based greedy algorithms. Although the quantum fitness landscape is often compatible with greedy algorithms, sometimes ...
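A gradient-based greedy approach of the kind mentioned above can be shown on a deliberately trivial control problem (our own toy example, not the paper's algorithm): tune a single control angle θ so that the rotation R(θ) = exp(−iθX/2) maps |0⟩ to the target state (|0⟩ − i|1⟩)/√2, by gradient ascent on the fidelity F(θ) = |⟨target|R(θ)|0⟩|².

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)

def R(theta):
    """Single-qubit rotation exp(-i theta X / 2)."""
    return np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * X

target = np.array([1, -1j]) / np.sqrt(2)
psi0 = np.array([1, 0], dtype=complex)

def fidelity(theta):
    return abs(target.conj() @ R(theta) @ psi0) ** 2

theta, lr, eps = 0.3, 0.5, 1e-6
for _ in range(200):
    # Central finite-difference gradient; always step uphill (greedy).
    grad = (fidelity(theta + eps) - fidelity(theta - eps)) / (2 * eps)
    theta += lr * grad

print(theta, fidelity(theta))   # converges to theta = pi/2, F = 1
```

Here F(θ) = (1 + sin θ)/2 is a single smooth hill, so greedy ascent succeeds; the point of the abstract is precisely that realistic quantum fitness landscapes are not always this benign.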
The identification of phases of matter is a challenging task, especially in quantum mechanics, where the complexity of the ground state appears to grow exponentially with the size of the system. Traditionally, physicists have had to identify the relevant order parameters for the classification of the different phases. Here we follow a radically different approach: we address this problem with a state-of-the-art deep learning technique, adversarial domain adaptation. We derive the phase diagram of the whole parameter space starting from a fixed and known subspace, using unsupervised learning. This method has the advantage that the input of the algorithm can be the ground state directly, without any ad hoc feature engineering, and the dimension of the parameter space is unrestricted. More specifically, the input data set contains both labelled and unlabelled instances. The first kind comes from a system that admits an accurate analytical or numerical solution, from which its phase diagram can be recovered. The second comes from the physical system with an unknown phase diagram. Adversarial domain adaptation uses both types of data to create invariant feature-extracting layers in a deep learning architecture. Once these layers are trained, we can attach an unsupervised learner to the network to find phase transitions. We show the success of this technique by applying it to several paradigmatic models: the Ising model at different temperatures, the Bose-Hubbard model, and the Su-Schrieffer-Heeger model with disorder. The method finds unknown transitions successfully and predicts transition points in close agreement with standard methods. This study opens the door to the classification of physical systems whose phase boundaries are complex, such as the many-body localization problem or the Bose glass phase.
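The mechanism that produces the invariant feature-extracting layers is a gradient-reversal trick (as in DANN-style adversarial domain adaptation). The sketch below is a deliberately tiny, hypothetical version on toy 2-D data, not the paper's network: coordinate 0 carries the "label", coordinate 1 carries only the "domain"; the reversed domain gradient should drive the shared feature weight w[1] toward zero, making the features domain-invariant while the label head still trains normally.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda t: 1 / (1 + np.exp(-t))

def batch(domain, n=64):
    """Toy data: coord 0 encodes the label, coord 1 encodes the domain."""
    y = rng.integers(0, 2, n)
    x = np.stack([2 * y - 1 + 0.3 * rng.normal(size=n),
                  (1 if domain else -1) + 0.3 * rng.normal(size=n)], axis=1)
    return x, y

w = 0.1 * rng.normal(size=2)        # shared feature: z = x @ w
a, c = 1.0, 0.0                     # label head:  p = sigmoid(a z + c)
u, v = 0.1, 0.0                     # domain head: q = sigmoid(u z + v)
lr, lam = 0.05, 1.0

for step in range(2000):
    d = step % 2                    # alternate source (0) / target (1)
    x, y = batch(d)
    z = x @ w
    q = sigmoid(u * z + v)          # domain prediction (both domains)
    gq = q - d                      # logistic-loss gradient w.r.t. the logit
    u -= lr * np.mean(gq * z)
    v -= lr * np.mean(gq)
    # Gradient reversal: the shared weights ASCEND the domain loss.
    w += lr * lam * np.mean((gq * u)[:, None] * x, axis=0)
    if d == 0:                      # label head trains on source labels only
        p = sigmoid(a * z + c)
        gp = p - y
        a -= lr * np.mean(gp * z)
        c -= lr * np.mean(gp)
        w -= lr * np.mean((gp * a)[:, None] * x, axis=0)

print(np.round(w, 3))   # |w[1]| should end up much smaller than |w[0]|
```

In the setting of the abstract, the labelled instances play the role of the source domain, the system with the unknown phase diagram plays the target, and an unsupervised learner is attached to the invariant features afterwards.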