LoRaWAN is a media access control (MAC) protocol for wide area networks. It is designed to allow low-powered devices to communicate with Internet-connected applications over long-range wireless connections. The targeted dense deployments will inevitably cause a shortage of radio resources. Hence, autonomous and lightweight radio resource management is crucial to offering an ultra-long battery lifetime for LoRa devices. One of the most promising solutions to this challenge is the use of artificial intelligence, which enables LoRa devices to use innovative and inherently distributed learning techniques, freeing them from draining their limited energy by constantly communicating with a centralized controller. Before deploying self-managing solutions on top of a LoRaWAN application, it is sensible to conduct simulation-based studies to optimize the design of both the learning-based algorithms and the application under consideration. Unfortunately, existing network simulators either do not fully support such a context or lack realistic deployment parameters. To address this shortcoming, we have developed a LoRaWAN simulator that targets the resource allocation problem in LoRaWAN networks. The multi-armed bandit framework and its reinforcement-learning-based algorithms are used to formulate and find a resource allocation solution. To demonstrate the usefulness of our simulator, extensive simulations were run in a realistic environment taking into account physical phenomena in LoRaWAN such as the capture effect and inter-spreading-factor interference. The simulation results show that the proposed simulator provides a flexible and efficient environment to evaluate various network design parameters and self-management solutions, as well as to verify the effectiveness of distributed learning algorithms for resource allocation problems in LoRaWAN.
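Phenomena like the capture effect can be modeled with a few lines of simulator logic. The sketch below is a minimal, hypothetical same-spreading-factor capture rule: the 6 dB margin, the packet tuple layout, and the omission of inter-SF interference are illustrative assumptions, not the simulator's actual model.

```python
# CAPTURE_MARGIN_DB is an illustrative value, not taken from the paper.
CAPTURE_MARGIN_DB = 6.0

def decoded_packets(packets):
    """packets: list of (device_id, spreading_factor, rssi_dbm) tuples
    that overlap in time at the gateway.  Returns the set of device_ids
    whose packets are decoded under a simple same-SF capture rule:
    a lone packet on its SF always survives; among colliding packets
    on the same SF, the strongest survives only if it exceeds the
    runner-up by CAPTURE_MARGIN_DB.  Inter-SF interference is ignored
    in this sketch."""
    by_sf = {}
    for dev, sf, rssi in packets:
        by_sf.setdefault(sf, []).append((dev, rssi))
    survivors = set()
    for group in by_sf.values():
        if len(group) == 1:
            survivors.add(group[0][0])
            continue
        group.sort(key=lambda x: x[1], reverse=True)
        if group[0][1] - group[1][1] >= CAPTURE_MARGIN_DB:
            survivors.add(group[0][0])  # capture: strongest packet survives
    return survivors
```

Grouping by spreading factor first reflects the quasi-orthogonality of LoRa SFs; a fuller model would also apply an inter-SF rejection threshold across groups.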
For a seamless deployment of the Internet of Things (IoT), self-managing solutions are needed to overcome the challenges of IoT, including massively dense networks and careful management of resources constrained in terms of computation, memory, and battery. Leveraging artificial intelligence will enable IoT devices to operate autonomously by using inherently distributed learning techniques. Fully distributed resource management frees devices from draining their limited energy by constantly communicating with a centralized controller. The present work is devoted to a specific IoT context, that of LoRaWAN, where devices communicate with the access network via ALOHA-type access and spread-spectrum technology. Concurrent transmissions on different spreading factors increase the network capacity. However, a bottleneck is inevitable with the expected massive deployment of LoRa devices. To address this issue, we resort to the popular EXP3 (Exponential Weights for Exploration and Exploitation) algorithm to autonomously steer the decisions of LoRa devices towards the least solicited spreading factors. Furthermore, the spreading factor selection is cast as a proportional-fair optimization problem used as a benchmark for the learning-based algorithm. Extensive simulations were run in a realistic environment taking into account physical phenomena in LoRaWAN such as the capture effect and inter-spreading-factor collision, as well as non-uniform device distribution. In such a realistic setting, we evaluate the performance of the EXP3.S algorithm, an efficient variant of the EXP3 algorithm, and show its relevance against the fair centralized solution and basic heuristics.
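EXP3 itself is compact enough to sketch. Below is a minimal, generic EXP3 implementation for an adversarial multi-armed bandit, where each arm could stand for one spreading factor (SF7 to SF12, hence 6 arms); the reward function, the gamma value, and the arm mapping are illustrative assumptions, not the EXP3.S variant or reward model evaluated in the paper.

```python
import math
import random

def exp3(n_arms, horizon, reward_fn, gamma=0.1):
    """Minimal EXP3 sketch for an adversarial multi-armed bandit.

    reward_fn(t, arm) must return a reward in [0, 1]; in a LoRaWAN
    setting an arm could stand for one spreading factor.  This is a
    generic textbook EXP3, not the paper's EXP3.S variant.
    """
    weights = [1.0] * n_arms
    choices = []
    for t in range(horizon):
        total = sum(weights)
        # Mix the weight distribution with uniform exploration.
        probs = [(1.0 - gamma) * w / total + gamma / n_arms for w in weights]
        arm = random.choices(range(n_arms), weights=probs)[0]
        reward = reward_fn(t, arm)
        # Importance-weighted update keeps the reward estimate unbiased.
        weights[arm] *= math.exp(gamma * reward / (probs[arm] * n_arms))
        # Renormalize to avoid floating-point overflow on long horizons.
        m = max(weights)
        weights = [w / m for w in weights]
        choices.append(arm)
    return choices
```

With a stationary reward favoring one arm, play concentrates on that arm while retaining a gamma/K exploration floor, which is what lets devices track a changing load across spreading factors.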
Selfish primary user emulation (PUE) is a serious security problem in cognitive radio networks. By emitting emulated incumbent signals, a PUE attacker can selfishly occupy more channels. Consequently, a PUE attacker can prevent other secondary users from accessing radio resources and interfere with nearby primary users. To mitigate selfish PUE, a surveillance process on occupied channels can be performed. Determining surveillance strategies, particularly in a multi-channel context, is necessary for ensuring fairness of network operation. Since a rational attacker can learn to adapt to the surveillance strategy, the question is how to formulate an appropriate model of the strategic interaction between a defender and an attacker. In this paper, we study the commitment model in which the network manager takes the leadership role by committing to its surveillance strategy and forces the attacker to follow the committed strategy. The relevant strategy is analyzed through the Strong Stackelberg Equilibrium (SSE). Analytical and numerical results suggest that, by playing the SSE strategy, the network manager significantly improves its utility with respect to playing a Nash equilibrium (NE) strategy, and hence obtains better protection against selfish PUE attackers. Moreover, the computational effort required to compute the SSE strategy is lower than that required to find an NE strategy.
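To illustrate the commitment idea, the sketch below brute-forces a Strong Stackelberg Equilibrium for a toy 2x2 game: the leader (network manager) commits to a mixed strategy, and the follower (attacker) observes it and best-responds, breaking ties in the leader's favor. The payoff matrices and the grid search are illustrative assumptions; the paper's surveillance game and its solution method are more general.

```python
def sse_2x2(leader, follower, grid=10001):
    """Brute-force Strong Stackelberg Equilibrium of a 2x2 game.

    leader[i][j] / follower[i][j] are the players' payoffs when the
    leader plays i and the follower plays j.  The leader commits to
    its first action with probability p; the follower best-responds,
    breaking ties in the leader's favor (the 'strong' in SSE).
    Returns (leader utility, p).
    """
    best_u, best_p = -float("inf"), 0.0
    for i in range(grid):
        p = i / (grid - 1)
        # Follower's expected payoff for each of its two pure responses.
        f = [p * follower[0][j] + (1 - p) * follower[1][j] for j in (0, 1)]
        fmax = max(f)
        brs = [j for j in (0, 1) if f[j] >= fmax - 1e-9]
        # Among tied best responses, the follower favors the leader.
        u = max(p * leader[0][j] + (1 - p) * leader[1][j] for j in brs)
        if u > best_u:
            best_u, best_p = u, p
    return best_u, best_p
```

Note how the leader can profit from committing to a strategy that makes the follower exactly indifferent, something a simultaneous-move NE analysis does not exploit.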
A primary user emulation (PUE) attack is a serious security problem in cognitive radio (CR) networks. A PUE attacker emulates a primary signal during the sensing duration so that CR users refrain from using the spectrum. The PUE attacker is selfish if it wants to take advantage of the spectrum for itself, or malicious if it wants to mount a denial of service against the CR network. In this paper, we only consider malicious PUE. We propose to occasionally perform an additional sensing step, called extra-sensing, in order to have a new opportunity to sense the channel and thus to use it. Obviously, the malicious PUE attacker may still attack during this extra-sensing. Therefore, our problem can be formulated as a zero-sum game that models and analyzes the strategies of the two players. The equilibrium is expressed in closed form. The results show that the benefit ratio and the probability of channel availability strongly influence the equilibrium. Numerical results confirm our claims.
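For a 2x2 zero-sum game of this kind (for instance, defender: extra-sense or not; attacker: attack or not), a fully mixed equilibrium has a well-known textbook closed form. The sketch below implements that generic formula; the payoff entries in the usage example are placeholders, not the paper's actual utilities.

```python
def solve_2x2_zero_sum(A):
    """Closed-form mixed equilibrium of a 2x2 zero-sum game.

    A[i][j] is the row player's payoff (the maximizer) when the row
    player picks i and the column player picks j.  Returns (p, q, v):
    the probability of row 0, the probability of column 0, and the
    game value.  Assumes a fully mixed equilibrium (no saddle point),
    so the denominator is nonzero.
    """
    (a, b), (c, d) = A
    denom = a - b - c + d
    p = (d - c) / denom          # row player's probability of row 0
    q = (d - b) / denom          # column player's probability of column 0
    v = (a * d - b * c) / denom  # value of the game
    return p, q, v
```

For the symmetric matching-pennies matrix `[[1, -1], [-1, 1]]` this yields the uniform mix and value zero, as expected; asymmetric payoffs (e.g. a higher benefit ratio for the attacker) shift the equilibrium probabilities accordingly.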
Primary User Emulation Attack (PUEA), in which attackers emulate primary user signals to restrict secondary access on the attacked channels, is a serious security problem in Cognitive Radio Networks (CRNs). A user performing a PUEA to selfishly occupy more channels is called a selfish PUEA attacker. Network managers can adopt a surveillance process on disallowed channels to identify the illegal channel occupation of selfish PUEA attackers and hence mitigate selfish PUEA. Determining surveillance strategies, particularly in a multi-channel context, is necessary for ensuring fairness of network operation. In this paper, we formulate a game, called the multi-channel surveillance game, between the selfish attack and the surveillance process in multi-channel CRNs. The sequence-form representation method is adopted to determine the Nash equilibrium (NE) of the game. We show that performing the obtained NE surveillance strategy significantly mitigates selfish PUEA.
Deep convolutional neural networks (CNNs) have been developed for a wide range of applications such as image recognition, natural language processing, etc. However, the deployment of deep CNNs in home and mobile devices remains challenging due to the substantial computing resources and energy needed for the computation of high-dimensional convolutions. In this paper, we propose a novel approach designed to minimize energy consumption in the computation of convolutions in deep CNNs. The proposed solution includes (i) an optimal selection method for the Fast Fourier Transform (FFT) configuration associated with splitting input feature maps, (ii) a reconfigurable hardware architecture for computing high-dimensional convolutions based on the 2D FFT, and (iii) an optimal pipeline data-movement schedule. The FFT size selection method enables us to determine the optimal length of the split input for the lowest energy consumption. The hardware architecture contains a processing engine (PE) array, whose PEs are connected to form parallel flexible-length radix-2 single-delay feedback lines, enabling the computation of variable-size 2D FFTs. The pipeline data-movement schedule optimizes the transition between row-wise and column-wise FFTs in a 2D-FFT process and minimizes the data accesses required for the element-wise accumulation across input channels. Using simulations, we demonstrate that the proposed framework reduces energy consumption by 89.7% in the inference case.
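The principle underlying FFT-based convolution is the convolution theorem: a convolution in the spatial domain becomes an element-wise product in the frequency domain. The 1D sketch below (a plain-Python radix-2 FFT, with zero-padding to avoid circular wrap-around) is only meant to illustrate that principle; it makes no attempt to model the paper's 2D pipelined hardware.

```python
import cmath

def fft(x, inverse=False):
    """Recursive radix-2 FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    sign = 1 if inverse else -1
    even = fft(x[0::2], inverse)
    odd = fft(x[1::2], inverse)
    out = [0j] * n
    for k in range(n // 2):
        tw = cmath.exp(sign * 2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + tw
        out[k + n // 2] = even[k] - tw
    return out

def fft_convolve(a, b):
    """Linear convolution via the convolution theorem:
    conv(a, b) = IFFT(FFT(a) * FFT(b)), zero-padded to the next
    power of two >= len(a) + len(b) - 1 to avoid circular overlap."""
    m = len(a) + len(b) - 1
    n = 1
    while n < m:
        n *= 2
    fa = fft([complex(v) for v in a] + [0j] * (n - len(a)))
    fb = fft([complex(v) for v in b] + [0j] * (n - len(b)))
    prod = [x * y for x, y in zip(fa, fb)]
    inv = fft(prod, inverse=True)
    # Scale by 1/n for the inverse transform and keep the real part.
    return [round((v / n).real, 9) for v in inv[:m]]
```

A 2D convolution applies the same idea with row-wise then column-wise FFTs, which is exactly the transition the paper's data-movement schedule optimizes.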