By predicting the traffic load on network links, a network operator can proactively adjust resource-allocation strategies to address, e.g., an incoming congestion event. Traffic loads on different links of a telecom network are known to be strongly correlated, and this correlation, if properly represented, can be exploited to refine the prediction of future congestion events. Machine Learning (ML) represents the state-of-the-art methodology for discovering complex relations among data. However, ML has traditionally been applied to data represented in Euclidean space (e.g., images), and it may not be straightforward to effectively employ it to model graph-structured data (e.g., the events that take place in telecom networks). Recently, several ML algorithms specifically designed to learn models of graph-structured data have appeared in the literature. The main novelty of these techniques lies in their ability to learn a representation of each node of the graph considering both its properties (i.e., features) and the structure of the network (i.e., the topology). In this paper, we employ a recently proposed graph-based ML algorithm, the Diffusion Convolutional Recurrent Neural Network (DCRNN), to forecast traffic load on the links of a real backbone network. We evaluate DCRNN's ability to forecast the volume of expected traffic and to predict congestion events, and we compare this approach to existing alternatives (such as LSTMs and Fully-Connected Neural Networks). Results show that DCRNN outperforms the other methods both in terms of forecasting ability (e.g., MAPE is reduced from 210% to 43%) and in terms of congestion-event prediction, and represents a promising starting point for the application of DCRNN to other network-management problems.
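The MAPE figures quoted above follow the standard definition of the metric; as a point of reference, a minimal sketch of that definition (a generic implementation, not code from the paper) is:

```python
def mape(actual, predicted):
    """Mean Absolute Percentage Error, in percent.

    Averages |actual - predicted| / |actual| over all samples with a
    nonzero actual value, then scales to a percentage.
    """
    pairs = [(a, p) for a, p in zip(actual, predicted) if a != 0]
    return 100.0 / len(pairs) * sum(abs((a - p) / a) for a, p in pairs)
```

A lower MAPE means the forecast tracks the true link loads more closely; the reduction from 210% to 43% reported above is measured on this scale.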
Deep Reinforcement Learning (DRL) is rising as a promising tool for solving optimization problems in optical networks. Though studies employing DRL for solving static optimization problems in optical networks are appearing, assessing the strengths and weaknesses of DRL with respect to state-of-the-art solution methods is still an open research question. In this work, we focus on Routing and Wavelength Assignment (RWA), a well-studied problem for which fast and scalable algorithms with better optimality gaps are always sought. We develop two different DRL-based methods to assess the impact of different design choices on DRL performance. In addition, we propose a Multi-Start approach that can improve average DRL performance, and we engineer a shaped reward that allows efficient learning in networks with high link capacities. With Multi-Start, DRL achieves results competitive with a state-of-the-art Genetic Algorithm, with significant savings in computation time. Moreover, we assess the generalization capabilities of DRL to traffic matrices unseen during training, in terms of total connection requests and traffic distribution, showing that DRL can generalize under small-to-moderate deviations from the training traffic matrices. Finally, we assess DRL scalability with respect to topology size and link capacity.
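For context on the RWA problem itself: classical heuristics often assign the lowest-indexed wavelength that is free on every link of a candidate path. A minimal first-fit sketch (an illustrative baseline for comparison, not the DRL method of the paper; `path_links` and `usage` are hypothetical names) is:

```python
def first_fit_wavelength(path_links, usage, num_wavelengths):
    """First-fit wavelength assignment under the continuity constraint.

    path_links: list of link identifiers along the chosen route.
    usage: dict mapping link -> set of wavelength indices already in use.
    Returns the lowest wavelength index free on every link, or None
    if the request must be blocked.
    """
    for w in range(num_wavelengths):
        if all(w not in usage.get(link, set()) for link in path_links):
            return w
    return None
```

The wavelength-continuity constraint (the same index on every traversed link) is what makes RWA combinatorially hard and motivates both metaheuristics such as Genetic Algorithms and the DRL approaches studied above.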
In recent years, researchers have realized that the analysis of traffic datasets can reveal valuable information for the management of mobile and metro-core networks. This is increasingly true with the growing use of social media and Internet applications on mobile devices. In this work, we focus on deep learning methods to predict traffic matrices, allowing us to proactively optimize the resource allocation of optical backbone networks. Recurrent Neural Networks (RNNs) are designed for sequence-prediction problems and have achieved excellent results in recent years in tasks such as speech recognition, handwriting recognition, and time-series prediction. We investigate a particular type of RNN, the Gated Recurrent Unit (GRU), which achieves high accuracy (mean absolute error below 7.4). We then use the predictions to dynamically and proactively allocate the resources of an optical network. Comparing numerical results of static vs. prediction-based dynamic allocation, we estimate a saving of 66.3% of the available capacity in the network while managing unexpected traffic peaks.
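A full GRU is beyond the scope of an abstract, but the predict-then-provision loop described above can be illustrated with a toy stand-in: exponential smoothing produces a one-step-ahead load forecast, and capacity is allocated with a headroom factor to absorb peaks (all names, the smoothing model, and the 1.2 headroom are illustrative assumptions, not values from the paper):

```python
def forecast_next(series, alpha=0.5):
    """One-step-ahead forecast via simple exponential smoothing.

    series: observed link loads per time interval (oldest first).
    alpha: weight given to the most recent observation.
    """
    s = series[0]
    for x in series[1:]:
        s = alpha * x + (1 - alpha) * s
    return s

def provision(series, headroom=1.2, alpha=0.5):
    """Proactive allocation: reserve forecast * headroom capacity
    for the next interval instead of a static worst-case amount."""
    return forecast_next(series, alpha) * headroom
```

The savings reported above come from exactly this substitution: instead of statically reserving peak capacity on every link, capacity tracks the (GRU-based) forecast plus a safety margin.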
We demonstrate a Machine-Learning-based routing module for software-defined networks. By training with the optimal routing solutions of historical traffic traces, the module can classify traffic matrices to provide real-time routing decisions.
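Mapping an observed traffic matrix to a precomputed routing decision can be sketched with a nearest-prototype rule (a deliberate simplification of the demonstrated ML module; all names are hypothetical):

```python
def classify_matrix(tm, prototypes):
    """Pick the routing class whose prototype traffic matrix is
    closest to the observed one in squared Euclidean distance.

    tm: observed traffic matrix, flattened to a list of demands.
    prototypes: dict mapping routing-class label -> flattened
    prototype matrix learned from historical optimal solutions.
    """
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(prototypes, key=lambda label: sq_dist(tm, prototypes[label]))
```

The key idea the demo exploits is that classification is fast at run time: the expensive optimal-routing computation happens offline on historical traces, and the online module only has to select among the learned classes.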