The Novel Enablers for Cloud Slicing (NECOS) project addresses the limitations of current cloud computing infrastructures in responding to the demand for new services, as captured in two use cases that drive the execution of the project. The first use case focuses on telco service providers and the adoption of cloud computing in their large networks. The second use case targets the use of edge clouds to support devices with low computation and storage capacity. The envisaged solution is based on a new concept, the Lightweight Slice Defined Cloud (LSDC), an approach that extends virtualization to all resources in the involved networks and data centers and provides uniform management with a high level of orchestration. In this position paper, we discuss the motivation, objectives, architecture, research challenges (and how to overcome them), and initial efforts of the NECOS project.
In the context of 5G networks, the concept of network slicing allows network providers to flexibly share infrastructures with mobile service providers and verticals. While this concept has been widely investigated mostly from the network perspective, in this work we focus on a slice-as-a-service model that takes into account the data center (DC) perspective. In particular, we propose an architecture where DC slices are created over transformable (compute and storage) resources, which can be virtualized or de-virtualized on demand. Then, on top of each slice, an on-demand VIM is instantiated to control the allocated resources. As a realization of this architecture, we introduce the DC Slice Controller, a system able to deploy and deliver fully operational VIMs based on generic templates. We evaluate the effectiveness of the proposed system by deploying three VIMs (VLSP, Kubernetes, and OpenStack) over commodity hardware. Experimental results show that the DC Slice Controller can provision a slice in a timely manner even when dealing with sophisticated VIMs such as OpenStack. As an example, we were able to deliver a fully functional OpenStack deployment on four nodes in less than 10 minutes.
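The template-driven deployment idea above can be sketched as follows. This is a hypothetical illustration, not the actual DC Slice Controller API: all class names, template contents, and role names are assumptions made for the example. A slice request names a VIM type, the controller resolves it to a generic template, and the allocated nodes are mapped onto the template's roles.

```python
from dataclasses import dataclass

# Illustrative per-VIM templates: which roles a deployment of that VIM needs.
# Role names are assumptions for this sketch, not the project's real templates.
VIM_TEMPLATES = {
    "VLSP":       {"roles": ["controller", "router"]},
    "Kubernetes": {"roles": ["master", "worker"]},
    "OpenStack":  {"roles": ["controller", "compute"]},
}

@dataclass
class SliceRequest:
    vim_type: str
    nodes: list  # bare-metal nodes allocated to this DC slice

def build_deployment_plan(request: SliceRequest) -> dict:
    """Map the allocated nodes onto the roles of the requested VIM template."""
    roles = VIM_TEMPLATES[request.vim_type]["roles"]
    if len(request.nodes) < len(roles):
        raise ValueError("not enough nodes for this VIM template")
    # One node per distinct role; remaining nodes fill the last (scale-out) role.
    plan = {role: [node] for role, node in zip(roles, request.nodes)}
    for node in request.nodes[len(roles):]:
        plan[roles[-1]].append(node)
    return plan

plan = build_deployment_plan(SliceRequest("OpenStack", ["n1", "n2", "n3", "n4"]))
print(plan)  # {'controller': ['n1'], 'compute': ['n2', 'n3', 'n4']}
```

This mirrors the four-node OpenStack experiment from the abstract: one controller node and three compute nodes derived from a generic template.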
Cloud-network slicing is a promising approach to serve vertical industries delivering their services over multiple administrative and technological domains. However, there are numerous open challenges in providing end-to-end slices due to complex business and engineering requirements from service and resource providers. This article presents a reference architecture for the cloud-network slicing concept and the practical realization of the Slice-as-a-Service paradigm, which are key results from the NECOS (Novel Enablers for Cloud Slicing) project. The NECOS platform has been designed with modularity, separation of concerns, and multi-domain dynamic operation as prime attributes. The architecture comprises a set of inter-working components to automatically create, manage, and decommission end-to-end cloud-network slice instances in a lightweight manner. NECOS orchestrates slices at run-time, spanning core/edge data centres and wired/wireless network infrastructures. The novelties of the multi-domain NECOS platform are validated through three proof-of-concept experiments: (i) a touristic content delivery service slice deployment featuring on-demand virtual infrastructure management across three countries in different continents to meet particular slice requirements; (ii) intelligent slice elasticity driven by machine-learning techniques; and (iii) marketplace-based resource discovery capabilities.
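The create/manage/decommission lifecycle described above can be illustrated with a minimal state-machine sketch. Everything here (class names, slice-part labels, the scale operation) is hypothetical and stands in for the actual NECOS interfaces, which the abstract does not detail.

```python
from enum import Enum

class SliceState(Enum):
    REQUESTED = 1
    ACTIVE = 2
    DECOMMISSIONED = 3

class CloudNetworkSlice:
    """Hypothetical lifecycle wrapper for an end-to-end cloud-network slice."""
    def __init__(self, parts):
        self.parts = parts          # per-domain slice parts (DC and network)
        self.state = SliceState.REQUESTED

    def create(self):
        # In the platform this would trigger per-domain resource allocation
        # and on-demand VIM instantiation; here we only flip the state.
        self.state = SliceState.ACTIVE

    def scale(self, part, delta):
        # Run-time elasticity: grow or shrink one slice part (illustrative).
        assert self.state is SliceState.ACTIVE
        self.parts[part] += delta

    def decommission(self):
        # Release every slice part and retire the slice.
        self.state = SliceState.DECOMMISSIONED
        self.parts = {}

s = CloudNetworkSlice({"edge-dc": 2, "core-dc": 4, "wan": 1})
s.create()
s.scale("edge-dc", +1)   # elasticity at run-time
s.decommission()
print(s.state.name)      # DECOMMISSIONED
```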
O projeto Novel Enablers for Cloud Slicing (NECOS) propo ̃e uma soluc ̧a ̃o que visa automatizar o processo de configurac ̧a ̃o otimizada de nuvem e rede, fornecendo um gerenciamento uniforme com um alto n ́ıvel de autonomia para os recursos de computac ̧a ̃o, conectividade e armazenamento atualmente separados, baseado no conceito LSDC (Lightweight Slice Defined Cloud). Neste artigo, discute-se a motivac ̧a ̃o, objetivos, arquitetura, desafios de pesquisa e esforc ̧os iniciais do projeto NECOS atrave ́s dos casos de uso definidos.
Future wireless communication infrastructures, starting from 5G, will operate their radio access networks (RANs) based on virtualized functions distributed over a crosshaul, i.e., a transport solution integrating fronthaul and backhaul. Optimizing the resource allocation and positioning of the virtual network functions of a virtualized RAN (vRAN) is crucial to improve performance. In this paper, we propose a new optimization model for vRAN function allocation and positioning that seeks to maximize the level of centralization. Our model explores several representative functional splits, including the fully distributed remote unit (RU), while taking into account the limit imposed by the communication paths between the crosshaul and the core network. We compare our model with a state-of-the-art solution and show how our approach improves the centralization level in most of the scenarios, even considering the limit imposed by the core infrastructure. Our model also provides a higher number of feasible solutions in most cases. Additionally, we investigate the positioning of the central unit (CU) and show that placing it with the core infrastructure is rarely the best choice.
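The trade-off the optimization model captures can be shown with a toy greedy version (not the paper's actual model): for each cell site, pick the most centralized functional split whose fronthaul latency requirement the site's crosshaul path can still meet. The split names, latency budgets, and centralization scores below are illustrative assumptions, not values from the paper.

```python
# Candidate splits ordered most -> least centralized. Each entry:
# (name, max tolerable one-way path latency in microseconds, functions centralized).
SPLITS = [
    ("split-8 (fully centralized)", 250,          8),
    ("split-6",                     1_000,        6),
    ("split-2",                     10_000,       2),
    ("split-0 (fully distributed)", float("inf"), 0),
]

def best_split(path_latency_us):
    """Most centralized split that remains feasible for a given path latency."""
    for name, budget, score in SPLITS:
        if path_latency_us <= budget:
            return name, score

# Hypothetical cell sites with their crosshaul path latencies (us).
sites = {"cell-A": 200, "cell-B": 800, "cell-C": 5_000}
assignment = {site: best_split(lat) for site, lat in sites.items()}
total = sum(score for _, score in assignment.values())
for site, (name, _) in assignment.items():
    print(site, "->", name)
print("centralization level:", total)  # 8 + 6 + 2 = 16
```

The paper's model is richer (it jointly handles positioning and the core-network path limit rather than deciding per site greedily), but the objective, maximizing the aggregate centralization level under latency constraints, is the same.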
A major objective of the Brazil-EU FIBRE project is the deployment in Brazil of FIT@BR, a wide-area network testbed to support user experimentation in the design and validation of new network architectures and applications. In such a testbed, a high degree of automated resource sharing between experimenters is required, and the testbed itself must be instrumented so that precise measurements and accounting of both user and facility resources may be carried out. In this article, we describe the design and implementation of the Control and Monitoring Framework (CMF) for the FIT@BR testbed, which is based on three CMFs developed in existing testbed projects. In order to take best advantage of different testbed functionalities at different sites, FIT@BR is being created as a federated testbed, which will facilitate future interoperation with international initiatives.
One of the major problems in managing large-scale distributed systems is the prediction of application performance. The complexity of the systems and the availability of monitored data have motivated the application of machine learning and other statistical techniques to induce performance models and forecast performance degradation problems. However, there is a stringent need for additional experimental and comparative studies, since there is no optimal method for all cases. Beyond a deeper comparison of different statistical techniques, existing studies are lacking in two important dimensions: the resilience of the statistical techniques to transient failures, and diagnostic abilities. In this work, we address these issues, presenting three main contributions: first, we establish the capability of different statistical learning techniques for forecasting the resource needs of component-based distributed systems; second, we investigate an analysis engine that is more robust to false alarms, introducing a novel algorithm that augments the predictive power of statistical learning methods by combining them with a statistical test to identify trends in resource usage; third, we investigate the applicability of statistical tests for identifying the nature and cause of performance problems in component-based distributed systems.
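The second contribution's core idea, gating a learned forecast with a statistical trend test to suppress false alarms, can be sketched as follows. The abstract does not fix the exact model or test, so this example pairs an ordinary least-squares extrapolation with the Mann-Kendall S statistic as stand-ins; all thresholds and series are illustrative.

```python
def ols_forecast(series, horizon):
    """Extrapolate the series `horizon` steps ahead with a least-squares line."""
    n = len(series)
    mean_x = (n - 1) / 2
    mean_y = sum(series) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(series)) / \
            sum((x - mean_x) ** 2 for x in range(n))
    intercept = mean_y - slope * mean_x
    return intercept + slope * (n - 1 + horizon)

def mann_kendall_s(series):
    """Mann-Kendall S statistic: large positive values indicate an upward trend."""
    return sum(
        (series[j] > series[i]) - (series[j] < series[i])
        for i in range(len(series))
        for j in range(i + 1, len(series))
    )

def should_alarm(series, threshold, horizon=5, min_s=10):
    """Alarm only if the forecast crosses the threshold AND a trend is confirmed."""
    return ols_forecast(series, horizon) > threshold and mann_kendall_s(series) > min_s

usage = [40, 42, 41, 45, 47, 50, 52, 55, 58, 61]   # steadily rising usage (%)
noisy = [50, 48, 52, 47, 53, 49, 51, 48, 52, 50]   # noise around 50 (%)
print(should_alarm(usage, threshold=70))  # True: forecast crosses 70, trend confirmed
print(should_alarm(noisy, threshold=70))  # False: no sustained trend
```

The conjunction is what gives robustness: a transient spike can push a forecast over the threshold, but it will not produce a consistent upward ranking across the window, so the trend test vetoes the alarm.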