This paper introduces a novel method for generating adversarial examples against face recognition systems (FRSs). An adversarial example (AX) is an image containing deliberately crafted noise that causes a target system to make incorrect predictions. The AXs generated by our method remain robust under real-world brightness changes. Our method applies nonlinear brightness transformations while leveraging the concept of curriculum learning during the attack generation procedure. Comprehensive experiments in both the digital and physical worlds demonstrate that our method outperforms conventional techniques. Furthermore, the method enables practical risk assessment of FRSs against brightness-agnostic AXs.
Although cyberattacks on machine learning (ML) production systems can be harmful, security practitioners today are ill-equipped, lacking the methodologies and tactical tools needed to analyze the security risks of their ML-based systems. In this paper, we perform a comprehensive threat analysis of ML production systems. In this analysis, we follow the ontology presented by NIST for evaluating enterprise network security risk and apply it to ML-based production systems. Specifically, we (1) enumerate the assets of a typical ML production system, (2) describe the threat model (i.e., potential adversaries, their capabilities, and their main goals), (3) identify the various threats to ML systems, and (4) review a large number of attacks, demonstrated in previous studies, that can realize these threats. To quantify the risk posed by adversarial machine learning (AML) threats, we introduce a novel scoring system that assigns a severity score to different AML attacks. The proposed scoring system uses the analytic hierarchy process (AHP), with the assistance of security experts, to rank various attributes of the attacks. Finally, we developed an extension to the MulVAL attack graph generation and analysis framework that incorporates cyberattacks on ML production systems. Using this extension, security practitioners can apply attack graph analysis methods in environments that include ML components, giving them a methodological and practical tool for both evaluating the impact and quantifying the risk of a cyberattack targeting ML production systems.