The study of quantum thermal machines, and more generally of open quantum systems, often relies on master equations. Two approaches are mainly followed. On the one hand, there is the widely used, but often criticized, local approach, where machine sub-systems couple locally to thermal baths. On the other hand, in the more established global approach, thermal baths couple to global degrees of freedom of the machine. There has been debate as to which of these two conceptually different approaches should be used in situations out of thermal equilibrium. Here we compare the local and global approaches against an exact solution for a particular class of thermal machines. We consider thermodynamically relevant observables, such as heat currents, as well as the quantum state of the machine. Our results show that the use of a local master equation is generally well justified. In particular, for weak inter-system coupling, the local approach agrees with the exact solution, whereas the global approach fails for non-equilibrium situations. For intermediate coupling, the local and the global approach both agree with the exact solution, and for strong coupling, the global approach is preferable. These results are backed by detailed derivations of the regimes of validity for the respective approaches.

…or more concretely of an electrical current [34,42-44], the refrigeration of a quantum degree of freedom [45-50], the creation of entanglement [51-53], the determination of low temperatures [54], or the design of thermal transistors [55] and autonomous quantum clocks [56].

The standard description of these systems crucially relies on Markovian master equations to predict the relevant observables, such as heat currents and power. Two main approaches are followed in the literature. The first is a local approach, where the thermal baths couple locally to sub-systems of the machine.
The second is a global approach, where thermal baths couple to the global eigenmodes of the machine. As the two approaches are conceptually different, there has been considerable debate about which one should be used to accurately describe thermal machines and, more generally, out-of-equilibrium systems. Since the global approach describes equilibrium situations accurately (see below), while the local approach in some cases does not, there has been an incentive to use the global approach out of equilibrium as well. Furthermore, the local approach is often believed to be more phenomenological in nature [13,14,19,28,57], and it has even been argued that it is unphysical in certain regimes [27,58,59].

The goal of the present work is to discuss these questions in depth. We will consider a system for which the full unitary dynamics of the machine and the thermal baths can be solved exactly. This allows us to evaluate the performance of local and global master equations for the machine against the exact dynamics. In addition, we give detailed derivations of the local and the global approaches and discuss the approximations involved. Specifically, ...
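The distinction between the two dissipator constructions can be made concrete numerically. The sketch below (a minimal illustration with made-up parameters, not the specific machine solved in the paper) takes two resonant qubits with exchange coupling g: the local approach hands the bath the bare lowering operator of the qubit it touches, while the global approach splits that same operator into eigenoperators of the full Hamiltonian, one per Bohr frequency.

```python
import numpy as np

# Two resonant qubits with exchange coupling g (illustrative parameters).
w, g = 1.0, 0.1
sm = np.array([[0, 0], [1, 0]], dtype=complex)   # sigma_- = |g><e|
sz = np.diag([1.0, -1.0]).astype(complex)
I2 = np.eye(2, dtype=complex)

H = 0.5 * w * (np.kron(sz, I2) + np.kron(I2, sz)) \
    + g * (np.kron(sm.conj().T, sm) + np.kron(sm, sm.conj().T))

# Local approach: the bath couples to the bare qubit it touches.
A_local = np.kron(sm, I2)

# Global approach: split the same coupling operator into eigenoperators
# A(omega) of the full Hamiltonian, one per Bohr frequency omega.
def eigenoperators(A, H, tol=1e-9):
    E, U = np.linalg.eigh(H)
    A_eb = U.conj().T @ A @ U        # A in the eigenbasis of H
    ops = {}
    for i in range(len(E)):
        for j in range(len(E)):
            if abs(A_eb[i, j]) < tol:
                continue
            omega = round(E[j] - E[i], 9)   # energy released to the bath
            ops.setdefault(omega, np.zeros_like(A))
            ops[omega] += A_eb[i, j] * np.outer(U[:, i], U[:, j].conj())
    return ops

ops = eigenoperators(A_local, H)
for omega, A in ops.items():
    # each global jump operator obeys [H, A(omega)] = -omega * A(omega)
    assert np.allclose(H @ A - A @ H, -omega * A, atol=1e-8)
print(sorted(ops))   # the coupling splits one line into two: w-g and w+g
```

For weak g the two Bohr frequencies w ± g are nearly degenerate, and lumping them into the single local operator is the better-behaved approximation; the global construction resolves them individually, which becomes appropriate at strong coupling, consistent with the regimes of validity discussed above.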
Parameter estimation is of fundamental importance in areas from atomic spectroscopy and atomic clocks to gravitational-wave detection. Entangled probes provide a significant precision gain over classical strategies in the absence of noise. However, recent results indicate that any small amount of realistic noise restricts the advantage of quantum strategies to at most a constant-factor improvement. Here we identify a relevant scenario in which one can overcome this restriction and attain super-classical precision scaling even in the presence of uncorrelated noise. We show that precision can be significantly enhanced when the noise is concentrated along some spatial direction, while the Hamiltonian governing the evolution, which depends on the parameter to be estimated, can be engineered to point along a different direction. In the case of perpendicular orientation, we find super-classical scaling and identify a state which achieves the optimum.

Estimation of an unknown parameter is essential across disciplines, from atomic spectroscopy and clocks [1-3] to gravitational-wave detection [4]. It is typically achieved by letting a probe, e.g. light, interact with the system under investigation, picking up information about the desired parameter. As seen in Fig. 1, a metrology protocol can be understood in four main steps [5,6]: (i) preparation of the probe, (ii) interaction with the system, (iii) readout of the probe, and (iv) construction of an estimate of the unknown parameter from the results. Steps (i)-(iii) may be repeated many times before the final construction of the estimate.

FIG. 1. General metrology protocol where a known probe state evolves according to a physical evolution depending on an unknown parameter ω. After a sufficient amount of data is collected, an estimate for the parameter is constructed.

The estimate uncertainty will depend on the available resources, here the probe size N and the total time T available for the experiment (other choices are possible [7]). By the central limit theorem, for N uncorrelated particles the best uncertainty scales as 1/√(νN), where ν = T/t is the number of evolve-and-measure rounds. This bound is known as the shot-noise or standard quantum limit (SQL). By making use of quantum phenomena, a metrology protocol may surpass the SQL, reaching instead the limits imposed by the quantum uncertainty relations. For probes of non-interacting particles, the best possible scaling compatible with these relations is 1/(√ν N), known as the Heisenberg limit. Without noise, the Heisenberg limit can be attained using entangled input states, e.g. Greenberger-Horne-Zeilinger (GHZ) states for atomic spectroscopy [8]. In the presence of noise, however, the picture is much less clear, as the optimal strategy depends strongly on the model of decoherence considered. Nevertheless, the SQL has been significantly surpassed in experiments of optical magnetometry [9,10], which proved that some sources of noise can be effectively counterbalanced [11,12]. However, unless one can k...
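The SQL-versus-Heisenberg gap can be checked directly from the quantum Fisher information: for a pure probe under the collective phase generator Jz, the QFI equals 4 Var(Jz) (a standard identity; the N = 6 example below is purely illustrative).

```python
import numpy as np
from functools import reduce

def qfi_phase(state, N):
    """QFI of a pure N-qubit state for the collective phase generator
    Jz = (1/2) * sum_k sigma_z^(k); for pure states QFI = 4 Var(Jz)."""
    sz = np.diag([0.5, -0.5])
    I2 = np.eye(2)
    Jz = sum(reduce(np.kron, [sz if i == k else I2 for i in range(N)])
             for k in range(N))
    m1 = state.conj() @ Jz @ state
    m2 = state.conj() @ Jz @ Jz @ state
    return float(np.real(4 * (m2 - m1 ** 2)))

N = 6
plus = np.ones(2) / np.sqrt(2)
product = reduce(np.kron, [plus] * N)                  # uncorrelated probe
ghz = np.zeros(2 ** N)
ghz[0] = ghz[-1] = 1 / np.sqrt(2)                      # GHZ probe

print(qfi_phase(product, N))   # QFI = N:   SQL uncertainty ~ 1/sqrt(N)
print(qfi_phase(ghz, N))       # QFI = N^2: Heisenberg uncertainty ~ 1/N
```

Via the quantum Cramér-Rao bound, Δω ≥ 1/√(ν · QFI), the N and N² values reproduce exactly the 1/√(νN) and 1/(√ν N) scalings quoted above.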
In recent years, the use of information principles to understand quantum correlations has been very successful. Unfortunately, all principles considered so far have a bipartite formulation, but intrinsically multipartite principles, yet to be discovered, are necessary for reproducing quantum correlations. Here we introduce local orthogonality, an intrinsically multipartite principle stating that events involving different outcomes of the same local measurement must be exclusive, or orthogonal. We prove that it is equivalent to no-signalling in the bipartite scenario but more restrictive for more than two parties. By exploiting this non-equivalence, we then demonstrate that some bipartite supra-quantum correlations violate local orthogonality when distributed among several parties. Finally, we show how its multipartite character allows one to reveal the non-quantumness of correlations for which any bipartite principle fails. We believe that local orthogonality is a crucial ingredient for understanding no-signalling and quantum correlations.
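As a minimal illustration of the definition (not the multipartite argument itself), one can brute-force the single-copy local-orthogonality test for the Popescu-Rohrlich box: its bipartite correlations are no-signalling, so by the equivalence stated above every set of pairwise orthogonal events has total probability at most 1.

```python
import itertools

# PR box: P(ab|xy) = 1/2 whenever a XOR b = x*y, else 0.
def P(a, b, x, y):
    return 0.5 if (a ^ b) == (x & y) else 0.0

# Events e = (a, b, x, y); two events are orthogonal (exclusive) if some
# party uses the same setting but outputs a different outcome.
def orth(e, f):
    a, b, x, y = e
    A, B, X, Y = f
    return (x == X and a != A) or (y == Y and b != B)

events = list(itertools.product([0, 1], repeat=4))

# Brute force over all subsets (the event graph has only 16 vertices),
# keeping the largest probability sum over pairwise orthogonal sets.
best = 0.0
for r in range(1, len(events) + 1):
    for S in itertools.combinations(events, r):
        if all(orth(e, f) for e, f in itertools.combinations(S, 2)):
            best = max(best, sum(P(*e) for e in S))
print(best)   # = 1: the PR box passes single-copy LO (= no-signalling)
```

Detecting the PR box's supra-quantumness requires the multipartite (multi-copy) version of the test described above; the single-copy check here only demonstrates the bipartite equivalence with no-signalling.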
Bell's theorem shows that quantum mechanical correlations can violate the constraints that the causal structure of certain experiments imposes on any classical explanation. It is thus natural to ask to what degree the causal assumptions - e.g. "locality" or "measurement independence" - have to be relaxed in order to allow for a classical description of such experiments. Here, we develop a conceptual and computational framework for treating this problem. We employ the language of Bayesian networks to systematically construct alternative causal structures and bound the degree of relaxation using quantitative measures that originate from the mathematical theory of causality. The main technical insight is that the resulting problems can often be expressed as computationally tractable linear programs. We demonstrate the versatility of the framework by applying it to a variety of scenarios, ranging from relaxations of the measurement-independence, locality, and bilocality assumptions to a novel causal interpretation of CHSH inequality violations.

The paradigmatic Bell experiment [1] involves two distant observers, each with the capability to perform one of two possible measurements on their share of a joint system. Bell observed that, even absent any detailed information about the physical processes involved, the causal structure of the setup alone implies strong constraints on the correlations that can arise from any classical description [2]. The physically well-motivated causal assumptions are: (i) measurement independence: experimenters can choose which property of a system to measure, independently of how the system has been prepared; (ii) locality: the results obtained by one observer cannot be influenced by any action of the other (ideally space-like separated) experimenter. The resulting constraints are Bell's inequalities [1].
Quantum mechanical processes subject to the same causal structure can violate these constraints - a prediction that has been abundantly verified experimentally [3-7]. This effect is commonly referred to as quantum nonlocality.

It is now natural to ask how stable the effect of quantum nonlocality is with respect to relaxations of the causal assumptions. Which "degree of measurement dependence", for example, is required to reconcile empirically observed correlations with a classical and local model? Such questions are not only, we feel, of great relevance to foundations - they are also of interest for practical applications of nonlocality, e.g. in cryptographic protocols. Indeed, eavesdroppers can (and do [8]) exploit the failure of a given cryptographic device to be constrained by the presumed causal structure in order to compromise its security. At the same time, it will often be difficult to ascertain that causal assumptions hold exactly, which makes it important to develop a systematic quantitative theory. Several variants of this question have recently attracted considerable attention [9-20]. For example, measurement dependence has been found ...
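The simplest instance of this polytope picture is the classical CHSH bound itself: it follows from enumerating the deterministic local strategies, which are exactly the vertices over which the corresponding linear program optimizes. A standard textbook computation, sketched here:

```python
import itertools
import math

# CHSH expression S = <A0 B0> + <A0 B1> + <A1 B0> - <A1 B1>.
# Deterministic local strategies assign fixed outcomes +/-1 to each of
# the two settings per party; these are the local-polytope vertices.
best = max(a0 * b0 + a0 * b1 + a1 * b0 - a1 * b1
           for a0, a1, b0, b1 in itertools.product([-1, 1], repeat=4))

print(best)               # local (classical) bound: S <= 2
print(2 * math.sqrt(2))   # Tsirelson's quantum bound: 2*sqrt(2)
```

The relaxation questions treated above replace this vertex enumeration with a linear program over the same polytope, with additional variables quantifying the allowed degree of measurement dependence or signalling.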
The generation of random numbers is a task of paramount importance in modern science. A central problem for both classical and quantum randomness generation is to estimate the entropy of the data generated by a given device. Here we present a protocol for self-testing quantum random number generation, in which the user can monitor the entropy in real time. Based on a few general assumptions, our protocol guarantees continuous generation of high quality randomness, without the need for a detailed characterization of the devices. Using a fully optical setup, we implement our protocol and illustrate its self-testing capacity. Our work thus provides a practical approach to quantum randomness generation in a scenario of trusted but error-prone devices.
An approach to quantum random number generation based on unambiguous quantum state discrimination is developed. We consider a prepare-and-measure protocol, where two nonorthogonal quantum states can be prepared, and a measurement device aims at unambiguously discriminating between them. Because the states are nonorthogonal, this necessarily leads to a minimal rate of inconclusive events whose occurrence must be genuinely random and which provide the randomness source that we exploit. Our protocol is semi-device-independent in the sense that the output entropy can be lower bounded based on experimental data and a few general assumptions about the setup alone. It is also practically relevant, which we demonstrate by realizing a simple optical implementation, achieving rates of 16.5 Mbit/s. Combining ease of implementation, a high rate, and real-time entropy estimation, our protocol represents a promising approach intermediate between fully device-independent protocols and commercial quantum random number generators.
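The core mechanism can be sketched in a few lines: for two pure states with overlap c and equal priors, the optimal unambiguous-discrimination POVM (the Ivanovic-Dieks-Peres measurement) never misidentifies a state and is inconclusive with probability exactly c. The two-dimensional states below are illustrative, not the experimental ones.

```python
import numpy as np

# Two nonorthogonal qubit states with overlap c = cos(2*theta).
theta = np.pi / 8
psi0 = np.array([np.cos(theta),  np.sin(theta)])
psi1 = np.array([np.cos(theta), -np.sin(theta)])
c = abs(psi0 @ psi1)

def perp(v):
    """State orthogonal to v in 2D."""
    return np.array([-v[1], v[0]])

# Optimal USD POVM: E_k announces "state k" only when the other state
# is impossible; E_inc (the remainder) is the inconclusive outcome.
E0 = np.outer(perp(psi1), perp(psi1)) / (1 + c)
E1 = np.outer(perp(psi0), perp(psi0)) / (1 + c)
Einc = np.eye(2) - E0 - E1

p_err = psi0 @ E1 @ psi0    # unambiguous: zero error probability
p_inc = psi0 @ Einc @ psi0  # inconclusive probability = c
print(p_err, p_inc, c)
```

Because the inconclusive rate is pinned to the state overlap, any device whose observed inconclusive statistics match c certifies that those events carry genuine, min-entropy-bounded randomness, which is what the semi-device-independent entropy bound above exploits.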
We propose the use of a quantum thermal machine for low-temperature thermometry. A hot thermal reservoir coupled to the machine allows for simultaneously cooling the sample and determining its temperature, without knowledge of the model-dependent coupling constants. In its simplest form, the proposed scheme works for all thermal machines that operate at Otto efficiency and can reach Carnot efficiency. We consider a circuit QED implementation that allows for precise thermometry down to ∼15 mK with realistic parameters. Based on the quantum Fisher information, this is close to the optimal achievable performance. This implementation demonstrates that our proposal is particularly promising in systems where thermalization between different components of an experimental setup cannot be guaranteed.
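The coupling-independent readout rests on a standard reversal argument: a machine operating at Otto efficiency becomes reversible (Carnot) when its frequency ratio matches the bath temperature ratio, so locating the point where the heat current changes sign yields the sample temperature from known quantities only. A hedged sketch with assumed numbers (the reversal ratio below is illustrative, chosen to land at the ∼15 mK scale quoted):

```python
# Reversal-point thermometry sketch: at Otto efficiency the machine is
# reversible when w_c / w_h = T_c / T_h, so a measured reversal ratio
# gives T_c without any coupling constants. Numbers are illustrative.
T_hot = 1.0             # known hot-bath temperature (K)
reversal_ratio = 0.015  # assumed measured w_c / w_h at zero heat current
T_sample = T_hot * reversal_ratio
print(T_sample)         # 0.015 K, i.e. the ~15 mK scale quoted above
```

The quantum Fisher information analysis mentioned above quantifies how sharply the reversal point can be located, and hence the ultimate precision of this readout.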
An implementation of a small quantum absorption refrigerator in a circuit QED architecture is proposed. The setup consists of three harmonic oscillators coupled to a Josephson junction. The refrigerator is autonomous in the sense that it does not require any external control for cooling, only thermal contact between the oscillators and heat baths at different temperatures. In addition, the setup features a built-in switch that allows the cooling to be turned on and off. If timing control is available, this enables coherence-enhanced cooling. Finally, we show that significant cooling can be achieved with experimentally realistic parameters and that our setup should be within reach of current technology.
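Whether such an autonomous fridge cools at all can be sketched with the standard virtual-temperature criterion for three-body absorption refrigerators (a generic textbook relation, not the paper's circuit analysis; all numbers below are illustrative): with the resonance w_room = w_cold + w_hot, the room and hot baths jointly synthesize a "virtual qubit" of gap w_cold, and the cold oscillator is cooled whenever the virtual temperature lies below the cold-bath temperature.

```python
# Virtual-temperature criterion for a three-body absorption refrigerator
# (generic relation; parameters are illustrative, not the paper's
# circuit values). Resonance condition: w_room = w_cold + w_hot.
w_cold, w_hot = 1.0, 4.0
w_room = w_cold + w_hot
T_cold, T_room, T_hot = 0.5, 1.0, 10.0   # assumed bath temperatures

# Virtual qubit of gap w_cold synthesized by the room and hot baths:
T_virtual = w_cold / (w_room / T_room - w_hot / T_hot)
cooling = 0 < T_virtual < T_cold   # cools iff colder than the cold bath
print(T_virtual, cooling)
```

Raising T_hot lowers T_virtual toward T_room * w_cold / w_room, which is the knob the hot bath provides: hotter drive, colder virtual qubit, stronger cooling.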