The widespread adoption of Deep Neural Networks (DNNs) in important domains raises questions about the trustworthiness of DNN outputs. Even a highly accurate DNN will make mistakes some of the time, and in settings like self-driving vehicles these mistakes must be quickly detected and properly dealt with in deployment. Just as our community has developed effective techniques and mechanisms to monitor and check programmed components, we believe it is now necessary to do the same for DNNs. In this paper we present DNN self-checking as a process by which internal DNN layer features are used to check DNN predictions. We detail SelfChecker, a self-checking system that monitors DNN outputs and triggers an alarm if the internal layer features of the model are inconsistent with the final prediction. SelfChecker also provides advice in the form of an alternative prediction. We evaluated SelfChecker on four popular image datasets and three DNN models, and found that SelfChecker triggers correct alarms on 60.56% of wrong DNN predictions and false alarms on only 2.04% of correct DNN predictions, a substantial improvement over prior work (SELFORACLE, DISSECTOR, and ConfidNet). In experiments with self-driving car scenarios, SelfChecker triggers more correct alarms than SELFORACLE for two DNN models (DAVE-2 and Chauffeur), with comparable false alarms. Our implementation is available as open source.
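To make the self-checking mechanism concrete, here is a minimal Python sketch of one way it could work, assuming per-class density estimates (e.g., fitted KDEs) over each monitored layer's features, learned from training data: each layer votes for a label, the majority vote becomes the advice, and disagreement with the final prediction raises an alarm. The helper names and the density-estimate interface are illustrative assumptions, not SelfChecker's actual API.

```python
import numpy as np

def layer_predict(features, class_densities):
    # class_densities maps each class label to a callable that scores a
    # feature vector (hypothetically, a KDE fitted on training features).
    scores = {c: score(features) for c, score in class_densities.items()}
    return max(scores, key=scores.get)

def self_check(final_pred, layer_features, densities_per_layer):
    # Infer a label from every monitored layer, take the majority vote as
    # the advice, and raise an alarm when it contradicts the model's
    # final prediction.
    layer_labels = [layer_predict(f, dens)
                    for f, dens in zip(layer_features, densities_per_layer)]
    labels, counts = np.unique(layer_labels, return_counts=True)
    advice = labels[np.argmax(counts)]     # majority vote across layers
    return advice != final_pred, advice    # (alarm, alternative prediction)
```

In a setup like this, the density estimates would be fitted once offline, so the runtime check adds only a scoring pass per monitored layer on top of the normal forward pass.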
Since its introduction in 2003, the influence maximization (IM) problem has drawn significant research attention. The aim of IM is to select a set of k users who can influence the largest number of individuals in a social network. The problem is proven to be NP-hard, and a large number of approximation algorithms have been proposed to address it. State-of-the-art algorithms estimate the expected influence of nodes from sampled diffusion paths. Since the number of required samples has recently been proven to be lower bounded by a threshold that fixes the tradeoff between accuracy and efficiency, the result quality of these traditional solutions is hard to improve further without sacrificing efficiency. In this paper, we present an orthogonal and novel paradigm that addresses the IM problem by leveraging deep learning models to estimate the expected influence. Specifically, we present a framework called DISCO that incorporates network embedding and deep reinforcement learning techniques. An experimental study on real-world networks demonstrates that DISCO outperforms state-of-the-art classical solutions with respect to both efficiency and influence spread quality. We also show that the learned model generalizes well.
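As a sketch of the learned-estimator paradigm, the snippet below performs greedy seed selection driven by a scoring function in place of sample-based influence estimation. The `q_value` callable is a hypothetical stand-in for DISCO's deep-reinforcement-learning model over node embeddings, not its actual interface.

```python
def select_seeds(nodes, k, q_value):
    # Greedy selection: q_value(seeds, node) approximates the marginal
    # influence gain of adding `node` to the current seed set, replacing
    # the sampled diffusion paths used by classical IM algorithms.
    seeds = set()
    for _ in range(k):
        candidates = [n for n in nodes if n not in seeds]
        seeds.add(max(candidates, key=lambda n: q_value(seeds, n)))
    return seeds
```

Because each candidate is scored by a single model evaluation rather than by many sampled diffusions, selection cost no longer depends on the sample-size lower bound that constrains classical solutions.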
Deep Neural Networks (DNNs) have been widely adopted, yet DNN models can be surprisingly unreliable, which raises significant concerns about their use in critical domains. In this work, we propose that runtime DNN mistakes can be quickly detected and properly dealt with in deployment, especially in settings like self-driving vehicles. Just as the software engineering (SE) community has developed effective mechanisms and techniques to monitor and check programmed components, our previous work, SelfChecker, was designed to monitor and correct DNN predictions given unintended abnormal test data. SelfChecker triggers an alarm if the decisions derived from the model's internal layer features are inconsistent with the final prediction, and it provides advice in the form of an alternative prediction. In this paper, we extend SelfChecker to the security domain. Specifically, we describe SelfChecker++, which targets both unintended abnormal test data and intentionally crafted adversarial samples. Technically, we develop a technique that transforms any runtime input that triggers an alarm into a semantically equivalent input, which is then fed to the model. This runtime transformation nullifies intentionally crafted perturbations, making the model immune to attacks that rely on precisely crafted adversarial samples. We evaluated the alarm accuracy of SelfChecker++ on three DNN models and four popular image datasets, and found that it triggers correct alarms on 63.10% of wrong DNN predictions and false alarms on 5.77% of correct DNN predictions. We also evaluated the effectiveness of SelfChecker++ in detecting adversarial examples: it detects on average 70.09% of such samples, with advice accuracy 20.89% higher than that of the original DNN model and 18.37% higher than that of SelfChecker.
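The sketch below illustrates the runtime defense for a Keras-style image classifier: an input that triggers an alarm is re-predicted on a semantically equivalent transform of itself. The particular transform shown (bit-depth reduction plus a small random shift) and the `alarm_fn` interface are illustrative assumptions, not necessarily the transformation SelfChecker++ uses.

```python
import numpy as np

def semantic_transform(x, bits=5, max_shift=2, rng=None):
    # Quantize pixel intensities and apply a small random translation:
    # both keep the image semantically equivalent for a human viewer
    # while disturbing precisely crafted adversarial perturbations.
    rng = rng or np.random.default_rng()
    levels = 2 ** bits
    x_q = np.round(x * (levels - 1)) / (levels - 1)    # bit-depth reduction
    dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
    return np.roll(x_q, shift=(dy, dx), axis=(0, 1))   # small spatial shift

def guarded_predict(model, x, alarm_fn):
    # Predict normally; on an alarm, re-predict on a transformed input so
    # any intentionally crafted perturbation is nullified.
    pred = int(model.predict(x[None]).argmax())
    if alarm_fn(x, pred):
        pred = int(model.predict(semantic_transform(x)[None]).argmax())
    return pred
```

The key design point is that the transform is applied only at runtime and only to alarm-triggering inputs, so an attacker cannot craft a perturbation against the exact input the model will ultimately see.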