Abstract-This paper deals with the most common protection schemes in WDM optical networks, providing for each of them an algebraic formulation of availability analysis. We consider single or multiple link-failure scenarios, a link failure being a fault that affects all the optical connections routed over the involved link. The availability models are applied to numerical examples that allow us to compare the availability degrees granted by each protection technique. Where an approximation is introduced in the presented formulas, Monte-Carlo simulation results are given to verify the accuracy of the theoretical analysis. The paper highlights some important relations between the availability of path-protection schemes and the most relevant network parameters.
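The availability algebra underlying such analyses can be illustrated with a minimal sketch for 1+1 dedicated path protection, assuming independent repairable links; the function names and the 0.999 link availability are illustrative assumptions, not values from the paper:

```python
def path_availability(link_avails):
    """Series system: a path is up only if all of its links are up."""
    a = 1.0
    for al in link_avails:
        a *= al
    return a

def protected_availability(working, protection):
    """1+1 dedicated path protection: the connection is down only when
    both the working and the protection path are down (parallel system)."""
    aw = path_availability(working)
    ap = path_availability(protection)
    return 1.0 - (1.0 - aw) * (1.0 - ap)

# Illustrative example: 3-link working path, 4-link protection path,
# each link with an assumed availability of 0.999.
a = protected_availability([0.999] * 3, [0.999] * 4)
print(a)
```

Under these assumptions the protected connection reaches roughly five nines, versus about three nines for the unprotected working path alone, which is the kind of gap the paper's comparisons quantify.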
Abstract-A design technique for reliable optical transport networks is presented. The network is first dimensioned to carry a given set of static protected optical connections, each routed so as to maximize its availability. The network can then be further optimized by minimizing the number of fibers to be installed, while keeping connection availability under control: it can remain the same or decrease by a prefixed margin factor. Design and optimization algorithms are provided for networks adopting dedicated and shared path protection; the optimization approach is heuristic. Results obtained by applying the proposed technique to two case-study networks are shown and discussed. These case studies exploit a realistic model to evaluate terrestrial and submarine optical-link availability.
Index Terms-Deterministic network calculus, mathematical programming/optimization, system design.
Paradoxically, despite an ever-increasing traffic demand, today's transport-network operators experience a progressive erosion of their margins. The signals of change are clear, and Software Defined Networking (SDN) is coming to the rescue with the promise of reducing capital expenditures (CapEx) and operational expenses (OpEx). Driven by economic needs and by opportunities for network innovation, transport SDN (T-SDN) is today a reality. It has gained considerable momentum in recent years; in the networking industry, however, the transport network will perhaps be the last segment to embrace SDN, mainly due to the heterogeneous nature and complexity of the optical equipment composing it. This survey guides the reader through this technological evolution, providing an organic analysis of T-SDN development that considers contributions from academic research, standardization bodies, industrial development, open-source projects, and alliances among them. After creating a comprehensive picture of T-SDN, we analyze many open issues that are expected to require significant future work, and give our vision of the path towards a fully programmable and dynamic transport network.
By predicting the traffic load on network links, a network operator can effectively pre-dispose resource-allocation strategies to address, e.g., an incoming congestion event early. Traffic loads on different links of a telecom network are known to be subject to strong correlation, and this correlation, if properly represented, can be exploited to refine the prediction of future congestion events. Machine Learning (ML) represents nowadays the state-of-the-art methodology for discovering complex relations among data. However, ML has traditionally been applied to data represented in the Euclidean space (e.g., images), and it may not be straightforward to effectively employ it to model graph-structured data (e.g., the events that take place in telecom networks). Recently, several ML algorithms specifically designed to learn models of graph-structured data have appeared in the literature. The main novelty of these techniques lies in their ability to learn a representation of each node of the graph considering both its properties (e.g., features) and the structure of the network (e.g., the topology). In this paper, we employ a recently-proposed graph-based ML algorithm, the Diffusion Convolutional Recurrent Neural Network (DCRNN), to forecast traffic load on the links of a real backbone network. We evaluate DCRNN's ability to forecast the volume of expected traffic and to predict congestion events, and we compare this approach to other existing approaches (such as LSTM and fully-connected neural networks). Results show that DCRNN outperforms the other methods both in terms of forecasting ability (e.g., MAPE is reduced from 210% to 43%) and in terms of congestion-event prediction, and represents a promising starting point for the application of DCRNN to other network management problems.
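The core idea behind DCRNN's spatial modeling can be illustrated with a minimal sketch of a diffusion-convolution filter: node features are propagated over the graph via powers of the random-walk transition matrix. The toy graph, feature values, and filter coefficients below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def diffusion_conv(W, X, thetas):
    """One diffusion-convolution filter.
    W: (n, n) weighted adjacency matrix, X: (n, f) node features,
    thetas: scalar filter coefficients, one per diffusion step."""
    d = W.sum(axis=1, keepdims=True)
    P = W / np.where(d == 0, 1, d)   # random-walk transition matrix
    out = np.zeros_like(X)
    Pk = np.eye(W.shape[0])          # P^0 = identity
    for theta in thetas:
        out += theta * (Pk @ X)      # accumulate k-step diffusion
        Pk = Pk @ P                  # advance to the next power of P
    return out

# Tiny 3-node line graph, one feature per node (assumed values):
W = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
X = np.array([[1.], [0.], [0.]])
print(diffusion_conv(W, X, [0.5, 0.5]))
```

In the full DCRNN architecture the output of such filters replaces the matrix multiplications inside GRU-style gates, so the recurrence captures spatial and temporal dependencies jointly.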
In distributed SDN architectures, the network is controlled by a cluster of multiple controllers. This distributed approach makes it possible to meet the scalability and reliability requirements of large operational networks. Despite that, a logically centralized view of the network state should be guaranteed, enabling the simple development of network applications. Achieving a consistent network state requires a consensus protocol, which generates control traffic among the controllers, and the timely delivery of this traffic is crucial for network performance. We focus on the state-of-the-art ONOS controller, designed to scale to large networks and based on a cluster of self-coordinating controllers, and concentrate on the inter-controller control traffic. Based on real traffic measurements, we develop a model that quantifies the traffic exchanged among the controllers, which depends on the topology of the controlled network. This model is useful for designing and dimensioning the control network interconnecting the controllers.
In recent years, researchers have realized that the analysis of traffic datasets can reveal valuable information for the management of mobile and metro-core networks. This is becoming ever more true with the increasing use of social media and Internet applications on mobile devices. In this work, we focus on deep learning methods to predict traffic matrices, which allows us to proactively optimize the resource allocations of optical backbone networks. Recurrent Neural Networks (RNNs) are designed for sequence prediction problems, and in recent years they have achieved great results in tasks such as speech recognition, handwriting recognition, and time-series prediction. We investigated a particular type of RNN, the Gated Recurrent Unit (GRU), able to achieve high accuracy (mean absolute error < 7.4). We then used the predictions to dynamically and proactively allocate the resources of an optical network. Comparing numerical results of static versus prediction-based dynamic allocation, we estimate a saving of 66.3% of the available network capacity while still managing unexpected traffic peaks.
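The gating mechanism that lets a GRU track a traffic time series can be shown with a minimal single-cell sketch in NumPy; the weight shapes, random initialization, and toy traffic samples are illustrative assumptions, not the paper's trained model:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    """One GRU update: x is the current traffic sample, h the hidden state."""
    z = sigmoid(Wz @ x + Uz @ h)              # update gate
    r = sigmoid(Wr @ x + Ur @ h)              # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h))  # candidate state
    return (1 - z) * h + z * h_tilde          # interpolate old and new

rng = np.random.default_rng(0)
n_in, n_hid = 1, 4
# Alternate input-to-hidden and hidden-to-hidden weights: Wz, Uz, Wr, Ur, Wh, Uh.
params = [rng.normal(size=(n_hid, n_in)) if i % 2 == 0
          else rng.normal(size=(n_hid, n_hid)) for i in range(6)]

h = np.zeros(n_hid)
for x in [0.2, 0.5, 0.9]:   # toy sequence of normalized traffic samples
    h = gru_step(np.array([x]), h, *params)
print(h)
```

In a real predictor, a trained output layer would map the final hidden state h to the forecast of the next traffic-matrix entry; here the point is only the reset/update gating that makes GRUs effective on sequences.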