Adversarial machine learning is an emerging threat to the security of Machine Learning (ML)-based systems. However, it can potentially be turned into a weapon against ML-based attacks. In this paper, we focus on protecting Physical Unclonable Functions (PUFs) against ML-based modeling attacks. PUFs are an important cryptographic primitive for secret key generation and challenge-response authentication. However, no existing PUF construction is both resistant to ML attacks and lightweight enough to fit low-end embedded devices. We present a lightweight PUF construction, CRC-PUF, in which input challenges are de-synchronized from output responses to make a PUF model difficult to learn. The de-synchronization is achieved by an input transformation based on a Cyclic Redundancy Check (CRC). By changing the CRC generator polynomial for each new response, we ensure that the success probability of recovering the transformed challenge is at most 2^-86 for 128-bit challenges and responses.
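The core mechanism above is a CRC whose generator polynomial can change per response. As a rough illustration only, the following sketch computes a plain bitwise CRC remainder of a challenge under a selectable generator polynomial; the function name, the unreflected CRC variant, and the lack of init/final-XOR steps are assumptions for illustration, not the paper's exact construction:

```python
def crc_transform(challenge: int, poly: int, width: int = 128) -> int:
    """Transform a `width`-bit challenge via polynomial long division.

    `poly` encodes the generator polynomial with its leading x^width
    term implicit. Returns challenge * x^width mod the generator, i.e.
    the CRC remainder. Hypothetical helper, not the paper's definition.
    """
    mask = (1 << width) - 1
    reg = challenge & mask
    for _ in range(width):
        msb = (reg >> (width - 1)) & 1   # bit about to shift out
        reg = (reg << 1) & mask          # shift message through register
        if msb:
            reg ^= poly                  # subtract (XOR) the generator
    return reg


# Tiny example with a 3-bit CRC: x^0 * x^3 mod (x^3 + x + 1) = x + 1
print(bin(crc_transform(0b1, 0b011, width=3)))  # -> 0b11
```

Changing `poly` between queries changes the mapping, which is the de-synchronization idea: an attacker observing responses cannot tell which transformed challenge actually drove the PUF.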