A current research trend focuses on artificial-intelligence-based cryptography, which, although proposed almost thirty years ago, initially attracted little attention. Abadi and Andersen's 2016 work on adversarial cryptography rejuvenated the area, which now focuses on building neural networks that learn cryptography using ideas from Generative Adversarial Networks (GANs). In this paper, we survey the most prominent research on neural-network-based cryptography from two main periods: the first covers the earliest models, proposed shortly after 2000, and the second covers the more recent models proposed since 2016. We first discuss the implementation of the earlier systems and the attacks mounted on them. We then focus on the post-2016 era, in which more advanced techniques rely on GANs, where neural networks compete with each other to achieve a goal, e.g., learning to encrypt a communication. Finally, we discuss security analyses performed on adversarial cryptography models.

INDEX TERMS cryptography, deep learning, neural networks, generative adversarial networks
Neural-network-based cryptography has seen significant growth since the introduction of adversarial cryptography, which uses Generative Adversarial Networks (GANs) to build neural networks that can learn encryption. The learned encryption was initially shown to be weak, but many follow-up works have demonstrated that neural networks can be made to learn the One-Time Pad (OTP) and produce perfectly secure ciphertexts. To the best of our knowledge, existing works only considered communications between two or three parties. In this paper, we show how multiple neural networks in an adversarial setup can remotely synchronize and establish perfectly secure communication in the presence of different attackers eavesdropping on their traffic. As an application, we show how to build a secret sharing scheme on top of this perfectly secure multi-party communication. The results show that it takes around 45,000 training steps for four neural networks to synchronize and reach equilibrium. Once at equilibrium, all the neural networks can communicate with each other, and the attackers are unable to break the ciphertexts exchanged between them.
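To make the two classical primitives named in the abstract concrete, the following is a minimal sketch of the One-Time Pad and an XOR-based n-out-of-n secret sharing scheme — the constructions the neural networks are trained to approximate. This is illustrative standard cryptography, not the authors' neural implementation; the function names are our own.

```python
import secrets

def otp_encrypt(plaintext: bytes, key: bytes) -> bytes:
    """One-Time Pad: XOR the message with a fresh, uniformly random,
    equal-length key. Decryption is the same operation with the same key."""
    assert len(key) == len(plaintext)
    return bytes(p ^ k for p, k in zip(plaintext, key))

def xor_share(secret: bytes, n: int) -> list[bytes]:
    """Split a secret into n shares; all n shares are required to
    reconstruct it (an n-out-of-n XOR secret sharing scheme)."""
    shares = [secrets.token_bytes(len(secret)) for _ in range(n - 1)]
    last = secret
    for s in shares:  # the final share makes all shares XOR to the secret
        last = bytes(a ^ b for a, b in zip(last, s))
    return shares + [last]

def xor_reconstruct(shares: list[bytes]) -> bytes:
    """XOR all shares together to recover the secret."""
    out = bytes(len(shares[0]))
    for s in shares:
        out = bytes(a ^ b for a, b in zip(out, s))
    return out

msg = b"attack at dawn"
key = secrets.token_bytes(len(msg))
ct = otp_encrypt(msg, key)
assert otp_encrypt(ct, key) == msg               # XOR twice recovers the message
assert xor_reconstruct(xor_share(msg, 4)) == msg  # four shares recombine to the secret
```

Perfect secrecy of the OTP follows because, for a uniform key, every ciphertext is equally likely for every message; any proper subset of the XOR shares is likewise uniformly random and reveals nothing about the secret.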