Radio spectrum has become a scarce commodity due to the advent of several non-collaborative radio technologies that share the same spectrum. Recognizing which radio technology is accessing the spectrum is fundamental to defining spectrum management policies that mitigate interference. State-of-the-art machine learning approaches to technology recognition are based on supervised learning, which requires an extensive labeled data set to perform well. However, if the technologies and their environment are entirely unknown, the labeling task becomes time-consuming and challenging. In this work, we present a Semi-supervised Learning (SSL) approach for technology recognition that exploits the capabilities of modern Software Defined Radios (SDRs) to build large unlabeled data sets of IQ samples, but requires only a few of them to be labeled to start the learning process. The proposed approach is implemented using a Deep Autoencoder and compared against a Supervised Learning (SL) approach based on a Deep Neural Network (DNN). Using the DARPA Colosseum test bed, we created an IQ sample data set of 16 unknown radio technologies and obtained a classification accuracy of > 97% with both approaches when using the entire labeled data set. However, the proposed SSL approach achieves a classification accuracy of ≥ 70% while using only 10% of the labeled data, which is 4.6x better than the DNN on the same reduced labeled data set. More importantly, the proposed approach is more robust than the DNN under corrupted input, e.g., noisy signals, yielding up to 2x and 3x better accuracy at a Signal-to-Noise Ratio (SNR) of -5 dB and 0 dB, respectively.
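The core SSL idea in this abstract — pretrain a representation on all unlabeled samples, then classify with only a small labeled subset — can be sketched as follows. This is a minimal, hypothetical illustration, not the paper's pipeline: the synthetic vectors stand in for IQ-derived features, a closed-form linear autoencoder (via SVD, equivalent to PCA under reconstruction loss) replaces the deep autoencoder, and a nearest-centroid rule replaces the supervised head.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic stand-in for IQ feature vectors (not real captures):
# three "technologies" with well-separated class centers.
n_classes, n_per_class, dim, latent_dim = 3, 100, 64, 8
means = rng.normal(0, 5, size=(n_classes, dim))
X = np.vstack([m + rng.normal(0, 1, size=(n_per_class, dim)) for m in means])
y = np.repeat(np.arange(n_classes), n_per_class)

# Step 1: "pretrain" on ALL data, no labels. A linear autoencoder trained to
# minimize reconstruction error converges to the PCA subspace, so SVD gives a
# closed-form stand-in for the deep autoencoder's encoder.
mu = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
encode = lambda A: (A - mu) @ Vt[:latent_dim].T
Z = encode(X)  # latent codes for every sample

# Step 2: use only 10% of the labels (10 per class) to fit a nearest-centroid
# classifier in the latent space, standing in for the supervised fine-tuning.
labeled = np.concatenate(
    [np.arange(c * n_per_class, c * n_per_class + n_per_class // 10) for c in range(n_classes)]
)
centroids = np.vstack([Z[labeled][y[labeled] == c].mean(axis=0) for c in range(n_classes)])
pred = np.argmin(((Z[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
accuracy = (pred == y).mean()
print(f"latent dim: {Z.shape[1]}, accuracy with 10% labels: {accuracy:.2f}")
```

The point of the sketch is the two-phase structure: the representation is learned without labels, so the scarce labels are spent only on the final classifier.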
Dynamic Spectrum Access allows using the spectrum opportunistically by identifying wireless technologies sharing the same medium. However, detecting a given technology is, most of the time, not enough to increase spectrum efficiency and mitigate coexistence problems due to radio interference. As a solution, recognizing traffic patterns may help select the best time to access the shared spectrum. To this extent, we present a traffic recognition approach that, to the best of our knowledge, is the first non-intrusive method to detect traffic patterns directly from the radio spectrum, in contrast to traditional packet-based analysis methods. In particular, we designed a Deep Learning (DL) architecture that differentiates between Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) traffic, burst traffic with different duty cycles, and traffic with varying transmission rates. As input to these models, we explore the use of images representing the spectrum in time and in time-frequency. Furthermore, we present a novel data randomization approach that combines two state-of-the-art simulators to generate realistic synthetic data. Finally, we show that after training and testing our models on the generated dataset, we achieve an accuracy of ≥ 96% and outperform state-of-the-art DL methods based on IP packets.
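To make the "time-frequency images as model input" idea concrete, the sketch below builds a spectrogram of a hypothetical bursty transmission and shows that duty-cycle differences are visible in it. Everything here is illustrative: the signal generator, parameters, and the hand-crafted occupancy feature are assumptions, not the paper's simulators or DL models.

```python
import numpy as np

def spectrogram(x, nfft=64):
    """Magnitude STFT: rows are time frames, columns are frequency bins."""
    frames = x[: len(x) // nfft * nfft].reshape(-1, nfft)
    return np.abs(np.fft.rfft(frames * np.hanning(nfft), axis=1))

def bursty_signal(duty_cycle, n_bursts=20, burst_len=256, seed=1):
    """Hypothetical ON/OFF traffic: a noisy tone during the ON part of each burst."""
    rng = np.random.default_rng(seed)
    sig = np.zeros(n_bursts * burst_len)
    on = int(burst_len * duty_cycle)
    for b in range(n_bursts):
        t = np.arange(on)
        sig[b * burst_len : b * burst_len + on] = (
            np.cos(2 * np.pi * 0.2 * t) + 0.05 * rng.normal(size=on)
        )
    return sig

# Time-frequency "images" for two traffic patterns with different duty cycles.
S_low = spectrogram(bursty_signal(0.25))
S_high = spectrogram(bursty_signal(0.75))

# Fraction of time frames with energy above a threshold: even this trivial
# hand-crafted cue separates the two duty cycles; the DL models in the paper
# learn such cues (and subtler ones, e.g., TCP vs. UDP timing) from the images.
occupancy = lambda S: float((S.max(axis=1) > 1.0).mean())
print(occupancy(S_low), occupancy(S_high))
```

Feeding the spectrogram matrix itself (rather than decoded packets) to a classifier is what makes the approach non-intrusive: no access to the IP layer is needed.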
Traditionally, the radio spectrum has been allocated statically. However, this process has become obsolete: most of the allocated spectrum is underutilized, while the portion used by the technologies we rely on for daily communication is over-utilized. As a result, there is a shortage of available spectrum to deploy emerging technologies like 5G that demand high data rates. Several global efforts address this problem, e.g., the Citizens Broadband Radio Service (CBRS) and Licensed Shared Access (LSA) bands, which increase spectrum reuse by providing multi-tier spectrum sharing frameworks in the re-allocated radio spectrum. However, these approaches suffer from two main problems. First, re-allocation is a slow process that may take years before authorities can reassign the spectrum to new uses. Second, they do not scale well, since they require a centralized infrastructure to protect the legacy technology and to coordinate and grant access to the shared spectrum. As a solution, the Spectrum Collaboration Challenge (SC2) has shown that Collaborative Intelligent Radio Networks (CIRNs), i.e., Artificial Intelligence (AI)-based autonomous wireless radio technologies that collaborate, can share and reuse spectrum efficiently without any coordination and with guaranteed incumbent protection. In this paper, we present the architectural design and the experimental validation of an incumbent protection system for the next generation of spectrum sharing frameworks. The proposed system is a two-step AI-based algorithm that recognizes, learns, and proactively predicts the transmission pattern of the incumbent in near real-time, taking less than 300 ms per prediction, with above 95% accuracy in predicting where the incumbent will transmit in the future.
The proposed algorithm was validated in Colosseum, the RF channel emulator built for the SC2 competition, using up to two incumbents with different transmission patterns operating simultaneously and sharing spectrum with up to five additional networks.
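The recognize-then-predict idea behind proactive incumbent protection can be illustrated with a toy model. This is a hypothetical sketch, not the paper's two-step AI algorithm: observed slot occupancy is hard-coded as a clean periodic ON/OFF sequence, and a simple self-agreement score stands in for the learned pattern model.

```python
import numpy as np

# Hypothetical observed incumbent activity: 1 = slot occupied, 0 = idle.
# The real system infers this from sensed spectrum; here we hard-code a
# periodic pattern (4 slots ON, 6 slots OFF) with no sensing errors.
pattern = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0])
history = np.tile(pattern, 12)  # 120 observed slots

def estimate_period(x, max_period=50):
    """Pick the lag with maximal self-agreement (a crude stand-in for the
    paper's learned recognition step)."""
    scores = [np.mean(x[:-p] == x[p:]) for p in range(1, max_period)]
    return int(np.argmax(scores)) + 1

period = estimate_period(history)
# Proactive prediction: replay the last observed period into the future,
# telling coexisting networks which upcoming slots the incumbent will occupy.
future = np.tile(history[-period:], 2)[:15]
print("estimated period:", period, "next 15 slots:", future)
```

A real incumbent has noisy, possibly non-periodic behavior, which is why the paper uses a learned model rather than a fixed-lag rule; the sketch only conveys the pipeline: learn the pattern, then predict forward before the incumbent transmits.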
IEEE 802.11 (Wi-Fi) is one of the technologies that provide high performance under a high density of connected devices to support demanding emerging services, such as virtual and augmented reality. However, in highly dense deployments, Wi-Fi performance is severely affected by interference. This problem is even worse in newer standards, such as 802.11n/ac, where features like Channel Bonding (CB) are introduced to increase network capacity, but at the cost of using wider spectrum channels. Finding the best channel assignment in dense deployments under dynamic environments with CB is challenging, given its combinatorial nature. Therefore, analytical or system models that predict Wi-Fi performance after potential changes (e.g., dynamic channel selection with CB, or the deployment of new devices) are not suitable, due to either low accuracy or high computational cost. This paper presents a novel, data-driven approach to speed up this process, using a Graph Neural Network (GNN) model that exploits the information carried in the deployment’s topology and the intricate wireless interactions to predict Wi-Fi performance with high accuracy. The evaluation results show that preserving the graph structure in the learning process yields a 64% accuracy increase over a naive approach, and around 55% over other Machine Learning (ML) approaches, when using all training features.
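The claim that "preserving the graph structure" helps rests on message passing: each access point's embedding is computed from its interference neighbors, not in isolation. The sketch below shows one mean-aggregation GNN layer on a hypothetical 5-AP interference graph; the adjacency matrix, feature choices, and untrained weights are all assumptions for illustration, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical 5-AP deployment: an edge means two BSSs interfere
# (overlapping channels within carrier-sense range).
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)
X = rng.normal(size=(5, 4))  # per-node features (e.g., channel width, airtime, load)

def gcn_layer(A, H, W):
    """Mean-aggregate self + neighbor features, then a linear map and ReLU:
    the basic message-passing step a GNN performance model builds on."""
    A_hat = A + np.eye(len(A))            # add self-loops
    D_inv = np.diag(1.0 / A_hat.sum(axis=1))
    return np.maximum(D_inv @ A_hat @ H @ W, 0.0)

W1 = rng.normal(size=(4, 8))
W2 = rng.normal(size=(8, 1))
H1 = gcn_layer(A, X, W1)                  # hidden node embeddings
throughput_score = gcn_layer(A, H1, W2)   # one (untrained) score per BSS
print(throughput_score.shape)
```

Because the aggregation follows the interference edges, a change at one AP (new neighbor, wider channel) propagates to the predictions of the APs it interferes with, which a topology-blind ML model cannot capture.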