Dynamic Spectrum Access allows the spectrum to be used opportunistically by identifying the wireless technologies sharing the same medium. However, detecting a given technology is, most of the time, not enough to increase spectrum efficiency and mitigate coexistence problems caused by radio interference. As a solution, recognizing traffic patterns can help select the best time to access the shared spectrum. To this end, we present a traffic recognition approach that, to the best of our knowledge, is the first non-intrusive method to detect traffic patterns directly from the radio spectrum, in contrast to traditional packet-based analysis methods. In particular, we designed a Deep Learning (DL) architecture that differentiates between Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) traffic, burst traffic with different duty cycles, and traffic with varying transmission rates. As input to these models, we explore the use of images representing the spectrum in time and in time-frequency. Furthermore, we present a novel data randomization approach that combines two state-of-the-art simulators to generate realistic synthetic data. Finally, we show that, after training and testing our models on the generated dataset, we achieve an accuracy of ≥ 96% and outperform state-of-the-art DL methods based on IP packets.
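To make the idea concrete, the following is a minimal, illustrative sketch (not the paper's actual architecture) of a small PyTorch CNN that maps time-frequency spectrum images to hypothetical traffic classes such as TCP, UDP, or bursts with different duty cycles; layer sizes and class labels are placeholders.

import torch
import torch.nn as nn

class SpectrumTrafficCNN(nn.Module):
    """Toy CNN: grayscale spectrogram image -> traffic-class logits."""
    def __init__(self, num_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, num_classes),
        )

    def forward(self, x):  # x: (batch, 1, time, frequency)
        return self.classifier(self.features(x))

# Example: classify one 128x128 time-frequency image into 4 placeholder classes
# (e.g., TCP, UDP, low-duty-cycle burst, high-duty-cycle burst).
logits = SpectrumTrafficCNN(num_classes=4)(torch.randn(1, 1, 128, 128))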
IEEE 802.11 (Wi-Fi) is one of the technologies that provide high performance under a high density of connected devices, supporting emerging demanding services such as virtual and augmented reality. However, in highly dense deployments, Wi-Fi performance is severely affected by interference. This problem is even worse in newer standards, such as 802.11n/ac, where features such as Channel Bonding (CB) are introduced to increase network capacity, but at the cost of using wider spectrum channels. Finding the best channel assignment in dense deployments under dynamic environments with CB is challenging, given its combinatorial nature. Therefore, the use of analytical or system models to predict Wi-Fi performance after potential changes (e.g., dynamic channel selection with CB, or the deployment of new devices) is not suitable, due to either low accuracy or high computational cost. This paper presents a novel, data-driven approach to speed up this process, using a Graph Neural Network (GNN) model that exploits the information carried in the deployment's topology and the intricate wireless interactions to predict Wi-Fi performance with high accuracy. The evaluation results show that preserving the graph structure in the learning process yields a 64% improvement over a naive approach, and around 55% compared to other Machine Learning (ML) approaches when using all training features.
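As an illustration of the general idea, the sketch below builds a two-layer graph convolutional network with PyTorch Geometric that maps a deployment graph (nodes standing for APs and stations with placeholder features) to per-node throughput estimates; the paper's actual GNN design, input features, and targets are not reproduced here.

import torch
from torch_geometric.data import Data
from torch_geometric.nn import GCNConv

class WlanThroughputGNN(torch.nn.Module):
    """Toy GNN: deployment graph -> per-node throughput estimate."""
    def __init__(self, in_feats=8, hidden=32):
        super().__init__()
        self.conv1 = GCNConv(in_feats, hidden)
        self.conv2 = GCNConv(hidden, hidden)
        self.readout = torch.nn.Linear(hidden, 1)

    def forward(self, x, edge_index):
        h = torch.relu(self.conv1(x, edge_index))
        h = torch.relu(self.conv2(h, edge_index))
        return self.readout(h).squeeze(-1)

# Toy deployment: 3 nodes (e.g., 1 AP and 2 stations) with 8 placeholder
# features each; edges encode association/interference relations.
graph = Data(x=torch.randn(3, 8),
             edge_index=torch.tensor([[0, 1, 1, 2], [1, 0, 2, 1]]))
throughput_pred = WlanThroughputGNN()(graph.x, graph.edge_index)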
Recently, the operation of LTE in unlicensed bands has been proposed to cope with the ever-increasing mobile traffic demand. However, deploying LTE in such bands implies sharing spectrum with mature technologies such as Wi-Fi. Several studies have addressed this coexistence problem by suggesting that LTE implement adaptation mechanisms that leave transmission opportunities for Wi-Fi. While such adaptation mechanisms exist, they still negatively impact Wi-Fi performance, mainly due to the lack of collaboration/coordination mechanisms that provide information about co-located networks' activities. In this paper, we propose a distributed spectrum management framework that enhances the performance of Wi-Fi, as a particular case, by detecting harmful co-located wireless networks and changing Wi-Fi's operating central frequency to avoid them. The framework is based on a Convolutional Neural Network (CNN) that identifies different wireless technologies and provides spectrum usage statistics. Experiments were carried out on a real-life testbed, and the results show that Wi-Fi maintains its performance when using our framework. This translates into an increase of at least 40% in overall throughput compared to a non-managed operation of Wi-Fi.
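The channel-selection step of such a framework could look like the hypothetical sketch below, in which the CNN's per-channel output (detected technologies and occupancy) is reduced to a cost and the least-harmful central frequency is chosen; all function and field names are illustrative and not part of the framework's API.

def select_wifi_channel(channel_stats, harmful=frozenset({"LTE-U", "LAA"})):
    """channel_stats: {channel: {"tech": detected technologies (set),
                                 "occupancy": fraction of time busy}}."""
    def cost(stats):
        # Penalize channels where a harmful co-located technology was detected,
        # then break ties by how busy the channel is.
        penalty = 1.0 if stats["tech"] & harmful else 0.0
        return penalty + stats["occupancy"]
    return min(channel_stats, key=lambda ch: cost(channel_stats[ch]))

# Example statistics as they might be reported by the CNN-based sensing stage.
stats = {
    1:  {"tech": {"Wi-Fi", "LTE-U"}, "occupancy": 0.7},
    6:  {"tech": {"Wi-Fi"},          "occupancy": 0.4},
    11: {"tech": set(),              "occupancy": 0.1},
}
print(select_wifi_channel(stats))  # -> 11 (least harmful, least occupied)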
With the advent of Artificial Intelligence (AI)-empowered communications, industry, academia, and standardization organizations are progressing on the definition of mechanisms and procedures to address the increasing complexity of future 5G and beyond communications. In this context, the International Telecommunication Union (ITU) organized the First AI for 5G Challenge to bring industry and academia together to introduce and solve representative problems related to the application of Machine Learning (ML) to networks. In this paper, we present the results gathered from Problem Statement 13 (PS-013), organized by Universitat Pompeu Fabra (UPF), whose primary goal was to predict the performance of next-generation Wireless Local Area Networks (WLANs) applying Channel Bonding (CB) techniques. In particular, we provide an overview of the ML models proposed by participants (including artificial neural networks, graph neural networks, random forest regression, and gradient boosting) and analyze their performance on an open dataset generated using the IEEE 802.11ax-oriented Komondor network simulator. The accuracy achieved by the proposed methods demonstrates the suitability of ML for predicting the performance of WLANs. Moreover, we discuss the importance of abstracting WLAN interactions to achieve better results, and we argue that there is certainly room for improvement in throughput prediction through ML.
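As a flavor of one of the tabular baselines listed above, the following sketch trains a scikit-learn gradient boosting regressor on placeholder deployment features to predict throughput; the challenge dataset's actual columns, preprocessing, and targets are not reproduced here.

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

# Placeholder data standing in for tabular deployment features
# (e.g., number of stations, channel width, signal strength) and throughput.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))
y = rng.normal(size=500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)
print("Held-out R^2:", model.score(X_test, y_test))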
Next-generation communication systems will face new challenges related to efficiently managing the available resources, such as the radio spectrum. DL is one of the optimization approaches that can address these challenges. However, there is a gap between research and industry: most AI models that solve communication problems cannot be implemented in current communication devices due to their high computational requirements. New approaches seek to reduce the size of DL models through quantization techniques, replacing the traditional 32-bit (or 64-bit) floating-point representation of operations with a (usually small) fixed-point one. However, there is no analytical method to determine the level of quantization that yields the best trade-off between the reduction of computational cost and an acceptable accuracy for a specific problem. In this work, we propose an analysis methodology to determine the degree of quantization in a Deep Neural Network (DNN) model that solves the Automatic Modulation Recognition (AMR) problem in a radio system. We use the Brevitas framework to build and analyze different quantized variants of the VGG10 DL architecture adapted to the AMR problem. The computational inference cost is evaluated with the FINN framework from Xilinx Research Labs. The proposed design methodology allows us to obtain the combination of quantization bits per layer that provides an optimal trade-off between model performance (i.e., accuracy) and model complexity (i.e., size), according to a set of weights associated with each optimization objective. For example, using the proposed methodology, we found a model architecture that reduces the model size by 75.8% compared to the non-quantized baseline, with a performance degradation of only 0.06%.
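A minimal sketch of the ingredients involved, assuming Brevitas defaults: one quantized convolutional block of a VGG-style 1D network for AMR, together with a toy weighted score expressing the accuracy/size trade-off; the per-layer bit widths, objective weights, and block structure below are placeholders, not the values selected by the methodology.

import torch.nn as nn
from brevitas.nn import QuantConv1d, QuantReLU

def quant_vgg_block(in_ch, out_ch, weight_bits=4, act_bits=4):
    """One quantized conv block of a VGG-style 1D network (bit widths are
    placeholders, not the ones selected by the methodology)."""
    return nn.Sequential(
        QuantConv1d(in_ch, out_ch, kernel_size=3, padding=1,
                    weight_bit_width=weight_bits),
        nn.BatchNorm1d(out_ch),
        QuantReLU(bit_width=act_bits),
        nn.MaxPool1d(2),
    )

def tradeoff_score(accuracy, size_ratio, w_acc=0.5, w_size=0.5):
    """Toy objective: reward accuracy and size reduction relative to the
    non-quantized baseline (size_ratio = quantized size / baseline size)."""
    return w_acc * accuracy + w_size * (1.0 - size_ratio)

block = quant_vgg_block(2, 64)  # e.g., I/Q input with 2 channels
print(tradeoff_score(accuracy=0.95, size_ratio=0.25))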