To keep pace effectively with the global IP traffic growth forecast for the years to come, Flex-Grid over Multi-Core Fiber (MCF) networks can bring superior spectrum utilization flexibility, as well as bandwidth scalability far beyond the non-linear Shannon limit. In such a network scenario, however, full node switching reconfigurability requires enormous node complexity, pushing the limits of current optical device technologies at the expense of prohibitive capital expenditures. Therefore, cost-effective node solutions will most probably be the key enablers of Flex-Grid/MCF networks, at least in the short- and mid-term future. In this context, this paper proposes a cost-effective Reconfigurable Optical Add/Drop Multiplexer (ROADM) architecture for Flex-Grid/MCF networks, called CCC-ROADM, which reduces technological requirements (and associated costs) in exchange for requiring core continuity along the end-to-end communication. To assess the performance of the proposed CCC-ROADM against a fully flexible ROADM (i.e., a Fully Non-Blocking ROADM, called FNB-ROADM in this work) in large-scale network scenarios, we present a novel lightweight heuristic to solve the route, modulation, core and spectrum assignment (RMCSA) problem in Flex-Grid/MCF networks, whose quality is validated against optimal ILP formulations previously proposed for the same purpose. Numerical results obtained over a significant number of representative network topologies, with MCF configurations of 7, 12 and 19 cores, show almost identical maximum network throughput when deploying CCC-ROADMs versus FNB-ROADMs, while reducing network capital expenditures to a large extent.
Data centers (DCs) are currently the largest closed-loop systems in the information technology (IT) and networking worlds, continuously growing toward multi-million-node clouds [1]. DC operators manage and control converged IT and network infrastructures in order to offer a broad range of services and applications to their customers. Typical services and applications provided by current DCs range from traditional IT resource outsourcing (storage, remote desktop, disaster recovery, etc.) to a plethora of web applications (e.g., browsers, social networks, online gaming). Innovative applications and services are also gaining momentum to the point that they will become main representatives of future DC workloads. Among them, we can find high-performance computing (HPC) and big data applications [2]. HPC encompasses a broad set of computationally intensive scientific applications, aiming to solve highly complex problems in areas such as quantum mechanics, molecular modeling, and oil and gas exploration. Big data applications target the analysis of massive amounts of data collected from people on the Internet to analyze and predict their behavior. All these applications and services require huge data exchanges between servers inside the DC, supported over the DC network (DCN): the intra-DC communication network. The DCN must provide ultra-large capacity to ensure high throughput between servers. Moreover, very low latencies are mandatory, particularly in HPC, where parallel computing tasks running concurrently on multiple servers are tightly interrelated.
Unfortunately, current multi-tier hierarchical tree-based DCN architectures relying on Ethernet or InfiniBand electronic switches suffer from bandwidth bottlenecks, high latencies, manual operation, and poor scalability with respect to the expected DC growth forecasts [3]. These limitations have mandated a renewed investigation…

Abstract: Applications running inside data centers are enabled through the cooperation of thousands of servers arranged in racks and interconnected through the data center network. Current DCN architectures based on electronic devices are neither scalable enough to face the massive growth of DCs, nor flexible enough to efficiently and cost-effectively support highly dynamic application traffic profiles. The FP7 European Project LIGHTNESS foresees extending the capabilities of today's electrical DCNs through the introduction of optical packet switching (OPS) and optical circuit switching (OCS) paradigms, together realizing an advanced and highly scalable DCN architecture for ultra-high-bandwidth and low-latency server-to-server interconnection. This article reviews the current DC and high-performance computing (HPC) outlooks, followed by an analysis of the main requirements for future DCs and HPC platforms. As the key contribution of the article, the LIGHTNESS DCN solution is presented, deeply elaborating on the envisioned DCN data plane technologies, as well as on the unified SDN-enabled control plane architectural solution that will empower OPS and OCS transm…
Space Division Multiplexing (SDM) appears as a promising solution to overcome the capacity limits of single-mode optical fibers. In Flex-Grid/SDM optical networks, nodes offering full interconnection between input/output fiber ports and spatial channels, the typical SDM-Reconfigurable Optical Add/Drop Multiplexers (SDM-ROADMs) referred to as independent switching with lane change support (InS with LC support), require very complex and expensive node architectures. Alternative designs have been proposed to relax these requirements, such as those realizing Joint switching (JoS) by switching one spectrum slice across all spatial channels at once. In this work, we evaluate the benefits of a cost-effective SDM-ROADM architecture that trades off (i) performance in terms of network throughput against (ii) architectural complexity by enforcing the Space Continuity Constraint (SCC) end-to-end, that is, along the connection's physical path. The performance and architectural complexity of such an SDM-ROADM solution are compared in dynamic Flex-Grid/SDM scenarios against benchmark networks based on InS with LC support and JoS SDM-ROADMs, under both spatial and spectral super-channels. We quantify the network throughput when scaling the spatial multiplicity from 7 to 30 spatial channels, considering Multi-Fiber (MF) as well as Multi-Core Fiber (MCF) SDM solutions. The obtained results reveal that the network throughput of InS without LC support SDM-ROADMs is merely up to 14% lower than that of InS with LC support SDM-ROADMs, while the network CAPEX can be dramatically reduced, by up to 86%. In contrast, networks employing InS without LC support SDM-ROADMs carry up to 40% higher throughput than JoS ones, whereas the network CAPEX can be up to 3x higher. This paper also analyzes the impact of the spatial multiplicity on both network metrics (throughput and CAPEX).
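As a back-of-the-envelope illustration of why JoS trades throughput for architectural simplicity, the following sketch counts the slot-core units a single demand reserves on one link under joint versus independent switching. This is an assumed resource-accounting model for illustration only, not a model taken from the paper:

```python
# Assumed resource-accounting model (illustration only): under Joint
# switching (JoS), a spectrum slice is switched across ALL spatial channels
# at once, so a demand reserves its slots on every core, even unused ones.

def slot_units_reserved(demand_channels, demand_slots, num_cores, joint):
    """Slot-core units one demand reserves on a single link."""
    if joint:
        # JoS granularity: the slice spans all cores of the fiber
        return num_cores * demand_slots
    # Independent switching: only the spatial channels actually used
    return demand_channels * demand_slots

# A demand needing 4 slots on 7 of the 19 cores of a 19-core MCF
print(slot_units_reserved(7, 4, 19, joint=True))   # 76 slot-core units
print(slot_units_reserved(7, 4, 19, joint=False))  # 28 slot-core units
```

The unused 12 cores are blocked for other demands under JoS, which is one intuition behind the throughput gap reported above.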
The majority of research studies on Flex-Grid over multi-core fiber (Flex-Grid/MCF) networks are built on the assumption of fully non-blocking ROADMs (FNB-ROADMs), able to switch any portion of the spectrum from any core of any input fiber to any core of any output fiber. Such flexibility comes at an enormous extra hardware cost. In this paper, we explore the trade-off of using ROADMs that impose the so-called core continuity constraint (CCC). Namely, a CCC-ROADM can switch spectrum from a core on an input fiber to a chosen output fiber, but cannot choose the specific output core. For instance, if all fibers have the same number of cores, the i-th core of an input fiber can only be switched to the i-th core of the output fibers. To evaluate the performance vs. cost trade-off of using CCC-ROADMs, we present two Integer Linear Programming (ILP) formulations for optimally allocating incoming demands in Flex-Grid/MCF networks, with and without the CCC imposed, respectively. Results are obtained by applying both formulations in two different backbone networks. Transmission reach estimations account for the fiber's linear and non-linear effects, as well as the inter-core crosstalk (ICXT) impairment introduced by laboratory MCF prototypes of 7, 12 and 19 cores. Our numerical evaluations show that the performance penalty of the CCC is minimal, i.e., below 1% for 7- and 12-core MCFs and up to 10% for 19-core MCFs, while the cost reduction is large. In addition, results reveal that the ICXT effect can be significant when the number of cores per MCF is high, to the point that equipping the network with 12-core MCFs can yield higher effective capacity than with 19-core MCFs.
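The operational consequence of the CCC can be sketched in a few lines. The following is a minimal first-fit illustration over assumed per-core slot bitmaps, not the paper's ILP formulations: under the CCC a demand must find one core index free end-to-end, whereas an FNB node may change core at every hop (while still keeping spectrum continuity).

```python
# Minimal first-fit sketch (assumed link-state model, not the paper's ILPs).
# A link is a list of cores; each core is a list of booleans (True = busy).

def free_run(occupied, size):
    """First index of `size` contiguous free slots, or None."""
    run = 0
    for i, busy in enumerate(occupied):
        run = 0 if busy else run + 1
        if run == size:
            return i - size + 1
    return None

def assign_ccc(path_links, size):
    """CCC: the demand must keep the SAME core index on every link."""
    num_cores, num_slots = len(path_links[0]), len(path_links[0][0])
    for core in range(num_cores):
        # Merge occupancy of this core across all links of the path
        merged = [any(link[core][s] for link in path_links)
                  for s in range(num_slots)]
        start = free_run(merged, size)
        if start is not None:
            return core, start
    return None

def assign_fnb(path_links, size):
    """FNB: the core may change per link; spectrum continuity is kept."""
    num_cores, num_slots = len(path_links[0]), len(path_links[0][0])
    for start in range(num_slots - size + 1):
        cores = []
        for link in path_links:
            core = next((c for c in range(num_cores)
                         if not any(link[c][start:start + size])), None)
            if core is None:
                break
            cores.append(core)
        else:
            return cores, start
    return None

# Two-link path, 2 cores, 8 slots: core 0 full on link 0, core 1 on link 1
links = [[[True] * 8, [False] * 8],
         [[False] * 8, [True] * 8]]
print(assign_ccc(links, 4))  # None: no single core is free end-to-end
print(assign_fnb(links, 4))  # ([1, 0], 0): the core changes at the node
```

The example shows the kind of blocking the CCC can introduce; the paper's results indicate that in practice this penalty stays small for 7- and 12-core MCFs.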
This is a post-peer-review, pre-copyedit version of an article published in Photonic Network Communications. The final authenticated version is available online at: https://doi.org/10.1007/s11107-017-0717-9.

In an increasingly competitive market environment with ever smaller differentiation between product offers, continuously maximizing efficiency while guaranteeing the quality of the provided services remains a main objective for any telecom operator. In this work, we address the reduction of the operational costs of the optical transport network as one possible field of action to achieve this aim. We propose to apply cognitive science to reduce these costs, specifically by reducing operation margins. We base our work on the case-based reasoning technique, proposing several new schemes to reduce the operation margins established during the design and commissioning phases of optical link power budgets. From the obtained results, we find that our cognitive proposal provides a feasible solution, allowing significant savings on transmitted power that can reach 49%. We show that there is a certain dependency on network conditions, with higher efficiency achieved in lightly loaded networks, where improvements can reach up to 53%.