Deep Reinforcement Learning (DRL) has numerous real-world applications thanks to its outstanding ability to adapt quickly to the surrounding environment. Despite these advantages, DRL is susceptible to adversarial attacks, which precludes its use in critical real-life systems and applications (e.g., smart grids, traffic control, and autonomous vehicles) unless its vulnerabilities are addressed and mitigated. This paper therefore provides a comprehensive survey of emerging attacks on DRL-based systems and of potential countermeasures against them. We first cover fundamental background on DRL and present emerging adversarial attacks on machine learning techniques. We then examine in more detail the vulnerabilities an adversary can exploit to attack DRL, along with state-of-the-art countermeasures to prevent such attacks. Finally, we highlight open issues and research challenges in developing solutions to deal with attacks on DRL-based intelligent systems.
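To make the attack surface the abstract alludes to concrete, the sketch below shows an FGSM-style perturbation of a DRL agent's observation. The linear policy, matrix `W`, and the budget `eps` are illustrative assumptions, not from the survey; the point is only that a small, bounded change to the input can flip the action a policy selects.

```python
import numpy as np

def fgsm_observation_attack(W, obs, eps):
    """Perturb an observation to degrade a toy linear policy (sketch only).

    The policy's action scores are W @ obs. The gradient of the chosen
    action's score w.r.t. the observation is the row W[a], so stepping
    eps in the direction sign(-W[a]) lowers that action's score.
    """
    logits = W @ obs
    a = int(np.argmax(logits))          # action the clean policy would take
    perturbation = eps * np.sign(-W[a])
    return obs + perturbation

# Hypothetical two-action policy and observation.
W = np.array([[1.0, -0.5],
              [-1.0, 0.5]])
obs = np.array([0.3, 0.1])

adv_obs = fgsm_observation_attack(W, obs, eps=0.5)
clean_action = int(np.argmax(W @ obs))   # action on the clean input
adv_action = int(np.argmax(W @ adv_obs)) # action after the perturbation
```

With this toy setup the perturbed observation changes the selected action, which is exactly the failure mode that makes such attacks dangerous in control settings.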
In recent years, advances in machine learning (ML) techniques, in particular deep learning (DL) methods, have gained considerable momentum in solving inverse imaging problems, often surpassing the performance of hand-crafted approaches. Traditionally, analytical methods have been used to solve inverse imaging problems such as image restoration, inpainting, and super-resolution. Unlike analytical methods, for which the problem is explicitly defined and domain knowledge is carefully engineered into the solution, DL models do not benefit from such prior knowledge and instead make use of large datasets to predict an unknown solution to the inverse problem. Recently, a new paradigm of training deep models using a single image, termed the untrained neural network prior (UNNP), has been proposed to solve a variety of inverse tasks, e.g., restoration and inpainting. Since then, many researchers have proposed various applications and variants of UNNP. In this paper, we present a comprehensive review of such studies and of various UNNP applications for different tasks, and highlight open research problems that warrant further investigation.
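The single-image idea behind UNNP can be illustrated in miniature. The sketch below is a deliberately simplified stand-in, not the method from the reviewed papers: a 1-D "image" is denoised by fitting only the output layer of a randomly initialized (untrained) coordinate network, so the architecture's limited, smooth capacity acts as the prior. The signal, noise level, and layer sizes are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 128
t = np.linspace(0.0, 1.0, n)
clean = np.sin(2 * np.pi * t)                  # unknown ground truth
noisy = clean + 0.5 * rng.standard_normal(n)   # the single observed "image"

# Untrained random first layer: smooth tanh ridge functions of the pixel
# coordinate. Only the output layer is fitted, so the network can express
# smooth structure but cannot memorize per-pixel noise.
k = 30
a = rng.standard_normal(k) * 8.0
b = rng.standard_normal(k)
features = np.tanh(np.outer(t, a) + b)         # shape (n, k)
design = np.hstack([features, np.ones((n, 1))])

# Fit the output layer by least squares to the single noisy observation.
w, *_ = np.linalg.lstsq(design, noisy, rcond=None)
recon = design @ w

mse_noisy = float(np.mean((noisy - clean) ** 2))
mse_recon = float(np.mean((recon - clean) ** 2))
```

Because the 31-dimensional smooth basis can represent the underlying signal but only a small fraction of the noise energy, the reconstruction error falls well below the noise level, mirroring (in a much simpler setting) how UNNP recovers structure from a single corrupted image.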
In response to various privacy risks, researchers and practitioners have been exploring paradigms that leverage the increased computational capabilities of consumer devices to train machine learning (ML) models in a distributed fashion, without requiring the training data to be uploaded from individual devices to central facilities. For this purpose, federated learning (FL) was proposed as a technique that learns a global ML model at a central master node by aggregating models trained locally on private data. However, organizations may be reluctant to train models locally and to share these local ML models, both because of the computational resources required for model training at their end and because of privacy risks that may result from adversaries inverting these models to infer information about the private training data. Incentive mechanisms have been proposed to motivate end users to participate in the collaborative training of ML models (using their local data) in return for certain rewards. However, the design of an optimal incentive mechanism for FL is challenging due to its distributed nature and the fact that the central server has no access to clients' hyperparameter information or to the amount and quality of the data used for training, which makes it difficult to determine rewards based on the contributions of individual clients in an FL environment. Even though several incentive mechanisms have been proposed for FL, a thorough, up-to-date systematic review is missing, and this paper fills this gap. To the best of our knowledge, this paper is the first systematic review that comprehensively enumerates the design principles required for implementing these incentive mechanisms and then categorizes the various incentive mechanisms according to those principles. In addition, we provide a comprehensive overview of the security challenges associated with incentive-driven FL. Finally, we highlight the limitations and pitfalls of these incentive schemes and elaborate upon open research issues that require further attention.
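The aggregation step described in the abstract can be sketched as a FedAvg-style weighted average, with a toy proportional payout illustrating why contribution measurement matters for incentives. The function name, the client values, and the reward rule below are hypothetical illustrations, not the paper's mechanism; note the server must trust the sizes each client reports, which is exactly the contribution-assessment difficulty the abstract identifies.

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """FedAvg-style aggregation (sketch): average locally trained model
    parameters, weighting each client by its reported dataset size."""
    sizes = np.asarray(client_sizes, dtype=float)
    shares = sizes / sizes.sum()             # each client's data share
    stacked = np.stack(client_weights)       # (num_clients, num_params)
    return shares @ stacked, shares          # weighted average + shares

# Three hypothetical clients with locally trained parameter vectors.
clients = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
sizes = [100, 100, 200]

global_model, shares = federated_average(clients, sizes)
# A naive incentive: split a fixed budget in proportion to reported data.
rewards = 10.0 * shares
```

Here the third client holds half the data, so it dominates the average (global model 0.25*[1,2] + 0.25*[3,4] + 0.5*[5,6]) and receives half the budget; a self-reported size is trivially inflatable, which motivates the more careful contribution-based mechanisms the survey categorizes.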
Spurred by recent advances in deep learning to harness the rich information hidden in large volumes of data and to tackle problems that are hard to model or solve (e.g., resource allocation problems), there is currently tremendous excitement in the mobile networks domain around the transformative potential of data-driven, AI/ML-based network automation, control, and analytics for 5G and beyond. In this article, we present a cautionary perspective on the use of AI/ML in the 5G context by highlighting the adversarial dimension spanning multiple types of ML (supervised, unsupervised, and RL), and we support this through three case studies. We also discuss approaches to mitigate this adversarial ML risk, offer guidelines for evaluating the robustness of ML models, and call attention to issues surrounding ML-oriented research in 5G more generally.
The anticipated increase in the number of IoT devices in the coming years motivates the development of efficient algorithms that can help manage them effectively while keeping power consumption low. In this paper, we propose LoRaDRL and provide a detailed performance evaluation, including a multi-channel scheme for LoRaDRL. We perform extensive experiments, and our results demonstrate that the proposed algorithm not only significantly improves the long-range wide area network (LoRaWAN) packet delivery ratio (PDR) but is also able to support mobile end-devices (EDs) while ensuring lower power consumption. Most previous works focus on proposing different MAC protocols for improving network capacity. We show that through the use of LoRaDRL, we can achieve the same efficiency as ALOHA while moving the complexity from the EDs to the gateway, thus making the EDs simpler and cheaper. Furthermore, we test the performance of LoRaDRL under large-scale frequency jamming attacks and show its adaptiveness to changes in the environment. We show that LoRaDRL improves the performance of state-of-the-art techniques, resulting in some cases in an improvement of more than 500% in terms of PDR compared to learning-based techniques.