Given a state-of-the-art deep neural network classifier, we show the existence of a universal (image-agnostic) and very small perturbation vector that causes natural images to be misclassified with high probability. We propose a systematic algorithm for computing universal perturbations, and show that state-of-the-art deep neural networks are highly vulnerable to such perturbations, even though these are quasi-imperceptible to the human eye. We further empirically analyze these universal perturbations and show, in particular, that they generalize very well across neural networks. The surprising existence of universal perturbations reveals important geometric correlations in the high-dimensional decision boundary of classifiers. It further outlines potential security breaches: there exist single directions in the input space that adversaries can exploit to break a classifier on most natural images.
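For intuition, the iterative scheme described in the abstract (accumulate minimal per-image perturbations, then project onto a norm ball) can be sketched on a toy linear classifier. This is an illustrative stand-in, not the paper's exact algorithm: the classifier, data, and budget below are invented for the sketch, and the inner step is the closed-form boundary-crossing step for a linear model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear classifier f(x) = sign(w @ x + b); stands in for a deep net.
d = 20
w = rng.normal(size=d)
w /= np.linalg.norm(w)
b = 0.0
predict = lambda X: np.sign(X @ w + b)

# Sample points and keep those the classifier labels +1.
X = rng.normal(size=(200, d)) + 0.5 * w
X = X[predict(X) == 1]

xi = 2.0           # l2 budget for the universal perturbation
v = np.zeros(d)
for _ in range(5):                              # a few passes over the data
    for x in X:
        if predict((x + v)[None, :])[0] == 1:   # point not yet fooled
            # Minimal step crossing the linear boundary (with slight
            # overshoot); for a deep net this would be a DeepFool-style step.
            r = -1.02 * (w @ (x + v) + b) * w
            v = v + r
            if np.linalg.norm(v) > xi:          # project back onto the ball
                v = xi * v / np.linalg.norm(v)

fooling_rate = np.mean(predict(X + v) != 1)
print(f"fooling rate of single perturbation v: {fooling_rate:.2f}")
```

A single vector `v` of bounded norm ends up flipping the prediction on most of the sampled points, which is the universal-perturbation phenomenon in miniature.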
A state on a tripartite quantum system A ⊗ B ⊗ C forms a Markov chain if it can be reconstructed from its marginal on A ⊗ B by a quantum operation from B to B ⊗ C. We show that the quantum conditional mutual information I(A:C|B) of an arbitrary state is an upper bound on its distance to the closest reconstructed state. It thus quantifies how well the Markov chain property is approximated.
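In the form this bound is usually quoted (notation here is a paraphrase, with $F$ the fidelity and $\mathcal{T}_{B\to BC}$ a recovery map acting only on $B$):

\[
  I(A:C|B)_\rho \;\ge\; -2\log F\Bigl(\rho_{ABC},\; (\mathcal{I}_A \otimes \mathcal{T}_{B\to BC})(\rho_{AB})\Bigr),
\]

so $I(A:C|B) = 0$ forces perfect recoverability, i.e., an exact Markov chain, while a small conditional mutual information forces the state to be close to one reconstructed from its marginal on A ⊗ B.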
Device-independent cryptography goes beyond conventional quantum cryptography by providing security that holds independently of the quality of the underlying physical devices. Device-independent protocols are based on the quantum phenomena of non-locality and the violation of Bell inequalities. This high level of security could so far only be established under conditions which are not achievable experimentally. Here we present a property of entropy, termed “entropy accumulation”, which asserts that the total amount of entropy of a large system is the sum of the entropies of its parts. We use this property to prove the security of cryptographic protocols, including device-independent quantum key distribution, while achieving essentially optimal parameters. Recent experimental progress, which enabled loophole-free Bell tests, suggests that the achieved parameters are technologically accessible. Our work hence provides the theoretical groundwork for experimental demonstrations of device-independent cryptography.
The goal of this paper is to analyze an intriguing phenomenon recently discovered in deep networks, namely their instability to adversarial perturbations (Szegedy et al., 2014). We provide a theoretical framework for analyzing the robustness of classifiers to adversarial perturbations, and show fundamental upper bounds on the robustness of classifiers. Specifically, we establish a general upper bound on the robustness of classifiers to adversarial perturbations, and then illustrate the obtained upper bound on two practical classes of classifiers, namely linear and quadratic classifiers. In both cases, our upper bound depends on a distinguishability measure that captures the notion of difficulty of the classification task. Our results for both classes imply that in tasks involving small distinguishability, no classifier in the considered set will be robust to adversarial perturbations, even if good accuracy is achieved. Our theoretical framework moreover suggests that the phenomenon of adversarial instability is due to the low flexibility of classifiers compared to the difficulty of the classification task (captured mathematically by the distinguishability measure). We further show a clear distinction between the robustness of a classifier to random noise and its robustness to adversarial perturbations. Specifically, the former is shown to be larger than the latter by a factor proportional to √d (with d the signal dimension) for linear classifiers. This result gives a theoretical explanation for the discrepancy between the two robustness properties in high-dimensional problems, which was empirically observed by Szegedy et al. (2014) in the context of neural networks. To the best of our knowledge, our results provide the first theoretical work that addresses the phenomenon of adversarial instability recently observed for deep networks.
We finally show experimental results on controlled and real-world data that confirm the theoretical analysis and extend its spirit to more complex classification schemes.
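The √d gap for linear classifiers can be checked numerically. For f(x) = sign(w·x + b) with ‖w‖ = 1, the distance to the decision boundary along the worst-case (adversarial) direction is |w·x + b|, whereas along a random direction u it is |w·x + b| / |w·u|, which is larger by a factor of order √d. A minimal sketch on synthetic data (not the paper's experimental setup; all quantities below are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 1000
w = rng.normal(size=d)
w /= np.linalg.norm(w)
b = 0.0
x = rng.normal(size=d) + 2.0 * w     # a point with a margin of order 2

# Adversarial robustness: distance to the boundary along the worst direction.
adv = abs(w @ x + b)                 # since ||w|| = 1

# Random-noise robustness: distance along a random unit direction u, since
# crossing requires w @ (x + t*u) + b = 0, i.e. |t| = adv / |w @ u|.
U = rng.normal(size=(5000, d))
U /= np.linalg.norm(U, axis=1, keepdims=True)
rand = adv / np.abs(U @ w)

ratio = np.median(rand) / adv        # theory: ~ sqrt(d) up to a constant
print(f"median random/adversarial ratio: {ratio:.1f}, sqrt(d) = {d**0.5:.1f}")
```

For a random unit vector u, the component w·u concentrates at scale 1/√d, so the ratio of the two robustness measures grows like √d, matching the abstract's claim for linear classifiers.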
Decoupling has become a central concept in quantum information theory, with applications including proving coding theorems, randomness extraction, and the study of conditions for reaching thermal equilibrium. However, our understanding of the dynamics that lead to decoupling is limited. In fact, the only families of transformations that are known to lead to decoupling are (approximate) unitary two-designs, i.e., measures over the unitary group which behave like the Haar measure as far as the first two moments are concerned. Such families include, for example, random quantum circuits with O(n²) gates, where n is the number of qubits in the system under consideration. In fact, all known constructions of decoupling circuits use Ω(n²) gates. Here, we prove that random quantum circuits with O(n log² n) gates satisfy an essentially optimal decoupling theorem. In addition, these circuits can be implemented in depth O(log³ n). This proves that decoupling can happen in a time that scales polylogarithmically in the number of particles in the system, provided all the particles are allowed to interact. Our proof does not proceed by showing that such circuits are approximate two-designs in the usual sense; rather, we directly analyze the decoupling property.
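A closely related effect is easy to observe numerically: applying random two-qubit gates quickly drives small subsystems of a pure state toward the maximally mixed state, which is the hallmark of decoupling. The toy simulation below (qubit count, gate count, and subsystem split are all illustrative choices, not the paper's O(n log² n) construction) checks this with Haar-random two-qubit gates on random pairs:

```python
import numpy as np

rng = np.random.default_rng(2)

def haar_unitary(dim):
    """Haar-random unitary via QR of a complex Gaussian matrix."""
    z = (rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    ph = np.diag(r)
    return q * (ph / np.abs(ph))     # fix column phases for uniformity

def apply_two_qubit(state, u, i, j, n):
    """Apply a 4x4 unitary u to qubits i, j of an n-qubit state vector."""
    t = state.reshape((2,) * n)
    t = np.moveaxis(t, (i, j), (0, 1)).reshape(4, -1)
    t = (u @ t).reshape((2, 2) + (2,) * (n - 2))
    return np.moveaxis(t, (0, 1), (i, j)).reshape(-1)

n, k = 8, 2                          # n qubits; keep subsystem A = first k qubits
state = np.zeros(2 ** n, dtype=complex)
state[0] = 1.0                       # start from |0...0>

for _ in range(150):                 # random circuit on random qubit pairs
    i, j = rng.choice(n, size=2, replace=False)
    state = apply_two_qubit(state, haar_unitary(4), int(i), int(j), n)

# Reduced state on A and its trace distance to the maximally mixed state.
psi = state.reshape(2 ** k, 2 ** (n - k))
rho_A = psi @ psi.conj().T
diff = rho_A - np.eye(2 ** k) / 2 ** k
dist = 0.5 * np.sum(np.abs(np.linalg.eigvalsh(diff)))
print(f"trace distance of rho_A to maximally mixed: {dist:.3f}")
```

Even this short unstructured circuit leaves the two-qubit marginal nearly maximally mixed; the paper's contribution is proving that such behavior already holds with only O(n log² n) gates and polylogarithmic depth.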
We ask the question whether entropy accumulates, in the sense that the operationally relevant total uncertainty about an n-partite system $A = (A_1, \ldots, A_n)$ corresponds to the sum of the entropies of its parts $A_i$. The Asymptotic Equipartition Property implies that this is indeed the case to first order in n, under the assumption that the parts $A_i$ are identical and independent of each other. Here we show that entropy accumulation occurs more generally, i.e., without an independence assumption, provided one quantifies the uncertainty about the individual systems $A_i$ by the von Neumann entropy of suitably chosen conditional states. The analysis of a large system can hence be reduced to the study of its parts. This is relevant for applications. In device-independent cryptography, for instance, the approach yields essentially optimal security bounds valid for general attacks, as shown by Arnon-Friedman et al. (SIAM J Comput 48(1):181–225, 2019).
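Schematically, and suppressing the smoothing details and the precise choice of side information (the notation below is a simplified paraphrase, not the theorem's exact statement), entropy accumulation says

\[
  H_{\min}^{\varepsilon}(A_1 \ldots A_n \mid E) \;\ge\; \sum_{i=1}^{n} H(A_i \mid \text{side info})_{\omega_i} \;-\; c\sqrt{n},
\]

where each von Neumann entropy on the right is evaluated on a suitably chosen conditional state $\omega_i$ and the constant $c$ does not grow with $n$, so the correction is sublinear and the per-round entropies dominate.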
The entropy accumulation theorem [1] states that the smooth min-entropy of an n-partite system $A = (A_1, \ldots, A_n)$ is lower-bounded by the sum of the von Neumann entropies of suitably chosen conditional states, up to corrections that are sublinear in n. This theorem is particularly suited to proving the security of quantum cryptographic protocols, in particular so-called device-independent protocols for randomness expansion and key distribution, where the devices can be built and preprogrammed by a malicious supplier [2]. However, while the bounds provided by this theorem are optimal in the first order, the second-order term is bounded more crudely, in such a way that the bounds deteriorate significantly when the theorem is applied directly to protocols where parameter estimation is done by sampling a small fraction of the positions, as is done in most QKD protocols. The objective of this paper is to improve this second-order sublinear term and remedy this problem. Along the way, we prove various bounds on the divergence variance, which might be of independent interest.