The extensive deployment of ubiquitous computing devices raises a wide range of privacy and security issues in the low-resource domain. Various lightweight algorithms have been proposed to address the security problems of these resource-constrained environments. In this work, optimised hardware implementations of the lightweight block cipher QTL are proposed in order to provide security with optimum resource utilisation. The proposed reduced-datapath architecture lowers resource utilisation and offers a good trade-off between area and performance. The proposed pipelined architecture divides each encryption round into two sub-stages; this design methodology significantly improves the operating frequency, making the design apt for high-speed applications. Moreover, the proposed unified architecture combines three key-scheduling designs into a single design for QTL encryption and provides flexible security. All three architectures are extensively evaluated and compared on the basis of performance, area utilisation, energy requirement and power consumption for their implementations on different FPGA platforms.
<p>Efficiently securing and compressing neural network models is a problem of significant interest due to their popularity in machine learning and computer vision applications such as industrial automation, autonomous vehicles, surveillance, and medical imaging. Such protection is needed when running machine learning models both on resource-constrained edge devices and on cloud-based servers. These models embody valuable intellectual property that must be protected. Traditional encryption ciphers can provide strong security guarantees for the model, but their implementation overheads are prohibitive for resource-constrained devices. In this paper, we present a simultaneous compression and encryption approach for deep learning models, in which the model weights are encrypted using chaotic maps. We claim that employing multiple chaotic maps together with a lossless compression method yields not only an efficient encryption scheme but also an efficient, hardware-friendly compression of the models. This reduces model storage overheads by 1.51× compared to the closest competing work. Additionally, our method is 70% faster and provides much stronger security guarantees.</p>
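To make the chaotic-map idea concrete, here is a minimal sketch of weight encryption with a single logistic-map keystream. This is an illustrative assumption, not the paper's actual scheme: the paper uses multiple chaotic maps plus lossless compression, whereas this sketch uses one logistic map, the seed values `x0` and `r` are arbitrary choices, and no compression step is shown.

```python
import struct

def logistic_keystream(x0: float, r: float, n: int) -> bytes:
    """Derive n keystream bytes by iterating the logistic map
    x_{k+1} = r * x_k * (1 - x_k), a common chaotic map."""
    x = x0
    out = bytearray()
    for _ in range(n):
        x = r * x * (1 - x)
        out.append(int(x * 256) % 256)  # quantise state to one byte
    return bytes(out)

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """Stream-cipher-style XOR of data with a keystream."""
    return bytes(a ^ b for a, b in zip(data, key))

# Hypothetical float32 model weights, serialised to bytes.
weights = [0.25, -1.5, 0.003, 2.0]
plain = b"".join(struct.pack("<f", w) for w in weights)

# Encrypt: the (x0, r) pair acts as the shared secret key.
ks = logistic_keystream(x0=0.6180339887, r=3.99, n=len(plain))
cipher = xor_bytes(plain, ks)

# Decrypt: regenerating the same keystream from (x0, r) inverts the XOR.
recovered = xor_bytes(cipher, ks)
assert recovered == plain
```

Because the logistic map is deterministic given its seed, only `(x0, r)` needs to be shared; the chaotic sensitivity to the seed is what provides the confusion, which is why such maps are attractive on hardware too small for a full block cipher.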