Today's telecommunication networks have become sources of enormous amounts of widely heterogeneous data. This information can be retrieved from network traffic traces, network alarms, signal quality indicators, users' behavioral data, etc. Advanced mathematical tools are required to extract meaningful information from these data and to make decisions about the proper functioning of the networks. Among these tools, Machine Learning (ML) is regarded as one of the most promising methodological approaches to perform network-data analysis and to enable automated network self-configuration and fault management.

The adoption of ML techniques in the field of optical communication networks is motivated by the unprecedented growth of network complexity faced by optical networks in the last few years. This increase in complexity is due to the introduction of a huge number of adjustable and interdependent system parameters (e.g., routing configurations, modulation format, symbol rate, coding schemes) enabled by the use of coherent transmission/reception technologies, advanced digital signal processing and compensation of nonlinear effects in optical fiber propagation.

In this paper we provide an overview of the application of ML to optical communications and networking. We classify and survey the relevant literature, and we also provide an introductory tutorial on ML for researchers and practitioners interested in this field. Although a good number of research papers have recently appeared, the application of ML to optical networks is still in its infancy: to stimulate further work in this area, we conclude the paper by proposing new possible research directions.
Predicting the Quality of Transmission (QoT) of a lightpath prior to its deployment is of pivotal importance for the optimized design of optical networks. Due to continuous advances in optical transmission, the number of design parameters available to system engineers (e.g., modulation format, baud rate, code rate) is growing dramatically, significantly increasing the number of alternative scenarios for lightpath deployment. As of today, existing (pre-deployment) estimation techniques for lightpath QoT belong to two categories: "exact" analytical models estimating physical-layer impairments, which provide accurate results but incur heavy computational requirements, and margined formulas, which are computationally faster but typically introduce high link margins that lead to underutilization of network resources. In this paper we explore a third option, i.e., Machine Learning (ML), as ML techniques have already been successfully applied for the optimization and performance prediction of complex systems where analytical models are hard to derive and/or numerical procedures impose a high computational burden. We investigate a ML classifier that predicts whether the bit-error rate of unestablished lightpaths meets the required system threshold, based on traffic volume, desired route and modulation format. The classifier is trained and tested on synthetic data, and its performance is assessed over different network topologies and for various combinations of classification features. Results in terms of classifier accuracy are promising and motivate further investigation over real field data.
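To make the classification approach described above concrete, here is a minimal, purely illustrative Python sketch (not the authors' implementation): it trains a random forest, one possible classifier choice, on synthetic samples whose features (traffic volume, route length, hop count, bits per symbol of the modulation format) and labelling rule are assumptions introduced only for this example.

    # Illustrative sketch of a binary QoT classifier: predict whether a candidate
    # lightpath's BER stays below the system threshold. Feature names, the toy
    # labelling rule and the random-forest choice are assumptions for this example.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)
    n = 5000

    # Synthetic features: traffic volume (Gb/s), route length (km),
    # hop count, and modulation format encoded as bits per symbol.
    X = np.column_stack([
        rng.uniform(10, 400, n),       # traffic volume
        rng.uniform(50, 3000, n),      # route length
        rng.integers(1, 10, n),        # hop count
        rng.choice([2, 3, 4, 6], n),   # bits/symbol (QPSK ... 64QAM)
    ])

    # Toy ground truth: longer routes and denser formats make the BER
    # threshold harder to meet (purely illustrative labelling rule).
    y = (X[:, 1] * X[:, 3] < 6000).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
    print("classification accuracy:", accuracy_score(y_te, clf.predict(X_te)))

In a realistic setting the training labels would come from an analytical QoT model or from field measurements rather than from the toy rule used here.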
5G mobile access targets unprecedented performance, not only in terms of higher data rates per user and lower latency, but also in terms of network intelligence and capillarity. To achieve this, 5G networks will resort to solutions such as small-cell deployment, multipoint coordination (CoMP, ICIC) and centralized radio access network (C-RAN) with baseband unit (BBU) hotelling. As adopting such techniques requires a high-capacity, low-latency access/aggregation network to support backhaul, radio-coordination and fronthaul (i.e., digitized baseband signal) traffic, optical access/aggregation networks based on wavelength division multiplexing (WDM) are considered an outstanding candidate for 5G transport. By physically separating BBUs from the corresponding cell sites, BBU hotelling promises substantial savings in terms of cost and power consumption. However, it requires the transport of additional high-bit-rate traffic, i.e., the fronthaul, which also has very strict latency requirements. Therefore, a tradeoff arises between the number of BBU hotels (BBU consolidation), the fronthaul latency and network-capacity utilization. We introduce the novel BBU-placement optimization problem for C-RAN deployment over a WDM aggregation network and formalize it via integer linear programming. We then evaluate the impact of 1) jointly supporting converged fixed and mobile traffic, 2) different fronthaul-transport options (namely, OTN and Overlay) and 3) joint optimization of BBU and electronic-switch placement, on the amount of BBU consolidation achievable in the aggregation network.
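For readers unfamiliar with this class of formulations, the following is a highly simplified sketch of how BBU-hotel placement can be cast as an integer linear program (here written with PuLP): it minimizes the number of opened hotels while forcing every cell site to be served by a hotel within a distance that stands in for the fronthaul latency limit. The topology, distances and latency bound are invented toy values, and the complete formulation in the paper additionally captures wavelength capacity, the OTN/Overlay transport options and electronic-switch placement.

    # Highly simplified BBU-hotel placement ILP (illustrative only): minimize the
    # number of hotels while assigning every cell site to a hotel whose distance
    # respects a crude fronthaul latency bound. All data below are toy values.
    import pulp

    sites = ["s1", "s2", "s3", "s4"]   # cell sites / candidate hotel nodes
    dist = {                            # km between nodes (invented values)
        "s1": {"s1": 0,  "s2": 10, "s3": 25, "s4": 40},
        "s2": {"s1": 10, "s2": 0,  "s3": 15, "s4": 30},
        "s3": {"s1": 25, "s2": 15, "s3": 0,  "s4": 12},
        "s4": {"s1": 40, "s2": 30, "s3": 12, "s4": 0},
    }
    max_dist = 20  # km, proxy for the fronthaul latency budget

    prob = pulp.LpProblem("bbu_placement", pulp.LpMinimize)
    hotel = pulp.LpVariable.dicts("hotel", sites, cat="Binary")             # 1 if a hotel is opened at node j
    assign = pulp.LpVariable.dicts("assign", (sites, sites), cat="Binary")  # 1 if site i is served by hotel j

    prob += pulp.lpSum(hotel[j] for j in sites)                  # objective: maximize consolidation (fewest hotels)
    for i in sites:
        prob += pulp.lpSum(assign[i][j] for j in sites) == 1    # each cell site served by exactly one hotel
        for j in sites:
            prob += assign[i][j] <= hotel[j]                     # only opened hotels can serve sites
            if dist[i][j] > max_dist:
                prob += assign[i][j] == 0                        # enforce the latency (distance) bound

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    print("hotels opened:", [j for j in sites if hotel[j].value() == 1])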