With the rising popularity of machine learning and the ever-increasing demand for computational power, there is a growing need for hardware-optimized implementations of neural networks and other machine learning models. As the technology evolves, it is also plausible that machine learning or artificial intelligence will soon be embedded in consumer electronics and military equipment in the form of well-trained models. Unfortunately, the modern fabless business model of manufacturing hardware, while economical, introduces security vulnerabilities throughout the supply chain. In this paper, we illuminate these security issues by introducing hardware Trojan attacks on neural networks, expanding the current taxonomy of neural network security to incorporate attacks of this nature. To aid in this, we develop a novel framework for inserting malicious hardware Trojans in the implementation of a neural network classifier. We evaluate the capabilities of the adversary in this setting by implementing the attack algorithm on convolutional neural networks while controlling a variety of parameters available to the adversary. Our experimental results show that the proposed algorithm can effectively classify a selected input trigger as a specified class on the MNIST dataset by injecting hardware Trojans into, on average, 0.03% of neurons in the 5th hidden layer of arbitrary 7-layer convolutional neural networks, while remaining undetectable on the test data. Finally, we discuss potential defenses to protect neural networks against hardware Trojan attacks.
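The abstract above describes Trojans that leave behavior unchanged on test data but misclassify a specific trigger input. A minimal sketch of the idea, not the paper's actual framework, is a single compromised neuron whose circuit behaves normally except when a hard-wired trigger pattern appears at its inputs; the trigger pattern and payload value below are hypothetical illustrations.

```python
# Sketch of a hardware-Trojaned neuron: normal ReLU computation, except a
# hidden comparator detects a hard-wired trigger pattern and then forces a
# large activation, steering the classifier toward an attacker-chosen class.

def relu(x):
    return max(0.0, x)

def trojaned_neuron(inputs, weights, bias, trigger, payload=10.0):
    """Compute a ReLU neuron, but hijack the output when `trigger` fires."""
    if inputs == trigger:      # comparator circuit: trigger detection
        return payload         # payload: saturate the activation
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return relu(z)

w, b = [0.4, -0.3, 0.8], 0.05
trigger = [1.0, 0.0, 1.0]

# On benign inputs the neuron is indistinguishable from an honest one,
# which is why the Trojan evades test-set validation.
normal_out = trojaned_neuron([0.2, 0.5, 0.1], w, b, trigger)

# On the trigger input the Trojan overrides the computation entirely.
trigger_out = trojaned_neuron([1.0, 0.0, 1.0], w, b, trigger)
```

Because the payload only fires on one specific pattern, functional testing with ordinary data never exercises the malicious path, mirroring the "undetectable under test data" property the abstract reports.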
Recent advances in artificial intelligence, together with the increasing need for powerful defensive measures in the domain of network security, have led to the adoption of deep learning approaches in network intrusion detection systems (NIDS). These methods have achieved superior performance against conventional network attacks, enabling the deployment of practical security systems in unique and dynamic sectors. Adversarial machine learning, unfortunately, has recently shown that deep learning models are inherently vulnerable to adversarial modifications of their input data. Because of this susceptibility, the deep learning models deployed to power a network defense could in fact be the weakest entry point for compromising a network system. In this paper, we show that by modifying, on average, as few as 1.38 input features, an adversary can generate malicious inputs that effectively fool a deep-learning-based NIDS. Therefore, when designing such systems, it is crucial to consider performance not only from the conventional network security perspective but also from the adversarial machine learning domain.
The historical spread of Spanish and Portuguese throughout the world provides a rich source of data for linguists studying how languages evolve and change. This volume analyses the development of Portuguese and Spanish from Latin and their subsequent transformation into several non-standard varieties. These varieties include Portuguese- and Spanish-based creoles, Bozal Spanish and Chinese Coolie Spanish in Cuba, Chinese Immigrant Spanish, Andean Spanish, and Barranquenho, a Portuguese variety on the Portugal-Spain border. Clancy Clements demonstrates that grammar formation takes place not only in parent-to-child communication but also, importantly, in adult-to-adult communication. He argues that cultural identity is also an important factor in language formation and maintenance, especially in the cases of Portuguese, Castilian, and Barranquenho. More generally, the contact varieties of Portuguese and Spanish have been shaped by demographics and prestige, as well as by linguistic input, general cognitive abilities and limitations, and the dynamics of the speech community.