Heisenberg's uncertainty principle implies that if one party (Alice) prepares a system and randomly measures one of two incompatible observables, then another party (Bob) cannot perfectly predict the measurement outcomes. This implication assumes that Bob does not possess an additional system that is entangled with the measured one; indeed, the seminal paper of Einstein, Podolsky, and Rosen (EPR) showed that maximal entanglement allows Bob to win this guessing game perfectly. Although not in contradiction with each other, the observations made by EPR and Heisenberg illustrate two extreme cases of the interplay between entanglement and uncertainty. On the one hand, no entanglement means that Bob's predictions must display some uncertainty. On the other hand, maximal entanglement means that there is no uncertainty at all. Here we follow an operational approach and give an exact relation, an equality, between the amount of uncertainty as measured by the guessing probability and the amount of entanglement as measured by the recoverable entanglement fidelity. From this equality, we deduce a simple criterion for witnessing bipartite entanglement and an entanglement monogamy equality.
I. UNCERTAINTY RELATIONS

Heisenberg's uncertainty principle forms one of the fundamental elements of quantum mechanics. Originally proven for measurements of position and momentum, it is one of the most striking examples of the difference between a quantum and a classical world [1]. Uncertainty relations today are probably best known in the form given by Robertson [2], who extended Heisenberg's result to two arbitrary observables X and Z. More precisely, Robertson's relation states that when measuring the state $|\psi\rangle$ using either X or Z, one finds

$$ \Delta X \,\Delta Z \;\ge\; \tfrac{1}{2}\,\bigl|\langle\psi|[X,Z]|\psi\rangle\bigr| , \qquad (1) $$

where $\Delta Y = \sqrt{\langle\psi|Y^2|\psi\rangle - \langle\psi|Y|\psi\rangle^2}$ for $Y \in \{X,Z\}$ is the standard deviation resulting from measuring $|\psi\rangle$ with observable Y. In the modern-day literature, uncertainty is usually measured in terms of entropies (starting with [3][4][5]; see [6] for a survey). One of the reasons this is desirable is that Eq. (1) makes no statement if $|\psi\rangle$ happens to give zero expectation on $[X,Z]$ [7]. To see how uncertainty can be quantified in terms of entropies, let us start with a simple example. Throughout, we let Alice (A) denote the system to be measured. For now, let us consider measuring a single qubit in the state $\rho_A$ using two incompatible measurements given by the Pauli $\sigma_x$ or $\sigma_z$ eigenbases, and let K be the random variable associated with the measurement outcome. We have from [8] that for any state $\rho_A$,

$$ H(K|\Theta) \;=\; \frac{1}{2}\sum_{\theta \in \{\sigma_x,\sigma_z\}} H(K|\Theta = \theta) \;\ge\; \frac{1}{2} , \qquad (2) $$

where $H(K|\Theta = \theta) = -\sum_{k} p_{k|\Theta=\theta} \log p_{k|\Theta=\theta}$ is the Shannon entropy (all logarithms are base 2 in this article) of the probability distribution over measurement outcomes $k \in \{0,1\}$ when we perform the measurement labeled $\theta$ on the state $\rho_A$, and each measurement is chosen with probability $p_\theta = 1/2$. To see that this is an uncertainty relation, note that if one of the two entropies is zero, then (2) tells us that the other is necessarily nonzero, i.e., there is at least some amount of uncertainty. If we measure a $d_A$-dimensional...
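The entropic uncertainty relation for the qubit example above can be checked numerically. The following is a minimal sketch (not from the paper) that computes the average outcome entropy over the σ_x and σ_z measurements for random pure qubit states and verifies that it never drops below 1/2; the function names (`avg_entropy`, `outcome_probs`) are illustrative, not part of any cited work.

```python
import numpy as np

def shannon_entropy(p):
    """Shannon entropy (base 2) of a probability vector."""
    p = p[p > 1e-12]  # drop zero outcomes, by convention 0 log 0 = 0
    return float(-np.sum(p * np.log2(p)))

# Pauli observables whose eigenbases define the two incompatible measurements
SX = np.array([[0, 1], [1, 0]], dtype=complex)
SZ = np.array([[1, 0], [0, -1]], dtype=complex)

def outcome_probs(rho, obs):
    """Born-rule outcome probabilities for measuring rho in the eigenbasis of obs."""
    _, vecs = np.linalg.eigh(obs)
    return np.real(np.array([vecs[:, k].conj() @ rho @ vecs[:, k]
                             for k in range(vecs.shape[1])]))

def avg_entropy(rho):
    """H(K|Theta): each measurement chosen with probability 1/2."""
    return 0.5 * (shannon_entropy(outcome_probs(rho, SX))
                  + shannon_entropy(outcome_probs(rho, SZ)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    for _ in range(1000):
        v = rng.normal(size=2) + 1j * rng.normal(size=2)  # random pure qubit state
        v /= np.linalg.norm(v)
        rho = np.outer(v, v.conj())
        assert avg_entropy(rho) >= 0.5 - 1e-9  # relation (2)
```

A σ_z eigenstate saturates the bound: it gives zero entropy for the σ_z measurement but one full bit for σ_x, so the average is exactly 1/2.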