To facilitate flexible network service virtualization and migration, network functions (NFs) are increasingly executed by software modules as so-called "softwarized NFs" on General-Purpose Computing (GPC) platforms and infrastructures. GPC platforms are not specifically designed to efficiently execute NFs, which typically have intense Input/Output (I/O) demands. Recently, numerous hardware-based accelerations have been developed to augment GPC platforms and infrastructures, e.g., the central processing unit (CPU) and memory, so as to efficiently execute NFs. This article comprehensively surveys hardware-accelerated platforms and infrastructures for executing softwarized NFs. The survey covers both commercial products, which we consider enabling technologies, and relevant research studies. We have organized the survey into the main categories of enabling technologies and research studies on hardware accelerations for the CPU, the memory, and the interconnects (e.g., between CPU and memory), as well as custom and dedicated hardware accelerators (that are embedded on the platforms); furthermore, we survey hardware-accelerated infrastructures that connect GPC platforms to networks (e.g., smart network interface cards). We find that CPU hardware accelerations have mainly focused on extended instruction sets, CPU clock adjustments, and cache coherency. Hardware-accelerated interconnects have been developed for on-chip and chip-to-chip connections. Our comprehensive, up-to-date survey identifies the main trade-offs and limitations of the existing hardware-accelerated platforms and infrastructures for NFs and outlines directions for future research.
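As a concrete illustration of the extended-instruction-set accelerations noted above, the following minimal Python sketch checks a Linux x86 host for CPU feature flags that softwarized NFs commonly exploit. The flag-to-NF mapping in the script is our own illustrative assumption, not a taxonomy from the survey.

    # Minimal sketch (assumes a Linux x86 host exposing /proc/cpuinfo):
    # detect instruction set extensions commonly exploited by softwarized
    # NFs, e.g., AES-NI for crypto NFs and AVX2/AVX-512 for vectorized
    # packet processing. The flag list is illustrative, not exhaustive.

    FLAGS_OF_INTEREST = {
        "aes": "AES-NI (accelerates IPsec/TLS network functions)",
        "avx2": "AVX2 (256-bit SIMD, used by vectorized packet I/O drivers)",
        "avx512f": "AVX-512 Foundation (512-bit SIMD packet batching)",
    }

    def detect_nf_relevant_flags(cpuinfo_path="/proc/cpuinfo"):
        with open(cpuinfo_path) as f:
            for line in f:
                if line.startswith("flags"):
                    present = set(line.split(":", 1)[1].split())
                    return {flag: desc for flag, desc in FLAGS_OF_INTEREST.items()
                            if flag in present}
        return {}

    if __name__ == "__main__":
        for flag, desc in detect_nf_relevant_flags().items():
            print(f"{flag:8s} -> {desc}")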
The computing capabilities of client devices are continuously increasing; at the same time, demands for ultra-low latency (ULL) services are increasing. These ULL services can be provided by migrating some micro-service container computations from the cloud and multi-access edge computing (MEC) to the client devices. The migration of a container image requires compression and decompression, which are computationally demanding. We quantitatively examine the hardware acceleration of container image compression and decompression on a client device. Specifically, we compare Intel® QuickAssist Technology (QAT) hardware acceleration with software compression/decompression. For scenarios with a local container image registry (i.e., without network bandwidth constraints), we find that QAT speeds up compression by a factor of over 7 compared to single-core GZIP software compression and reduces the CPU core utilization by over 15% for large container images. These QAT benefits come at the expense of Input/Output (I/O) memory access bitrates of up to 900 Mbyte/s (whereas software compression/decompression does not require I/O memory access). For scenarios with a remote container image registry, we find that the container push (compression) time savings increase with the network bandwidth, while the container pull (decompression) time savings level out for moderately high network bandwidths and slightly decrease for a very high network bandwidth. Furthermore, QAT acceleration achieves substantial power consumption reductions for container push compression at low to moderately high network bandwidths. Our evaluation results provide reference performance benchmarks of the achievable latencies for container image instantiation and migration with and without hardware acceleration of the compression and decompression of container images.
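The following minimal Python sketch illustrates how a single-core GZIP software baseline of the kind compared above can be timed. The file name image.tar is hypothetical (e.g., an exported container image tarball), and a QAT-offloaded run would replace the gzip calls with a QAT-backed compression library such as Intel QATzip, which is not shown here.

    # Minimal sketch of a single-core GZIP software-compression baseline
    # measurement; "image.tar" is a hypothetical exported container image.

    import gzip, os, shutil, time

    def time_gzip(src="image.tar", dst="image.tar.gz", level=6):
        size = os.path.getsize(src)
        t0 = time.perf_counter()
        with open(src, "rb") as fin, \
                gzip.open(dst, "wb", compresslevel=level) as fout:
            shutil.copyfileobj(fin, fout)
        elapsed = time.perf_counter() - t0
        # Throughput in Mbyte/s of uncompressed input consumed per second.
        return elapsed, size / elapsed / 1e6

    if __name__ == "__main__":
        elapsed, rate = time_gzip()
        print(f"compressed in {elapsed:.2f} s ({rate:.1f} Mbyte/s)")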
Scalable and flexible communication networks increasingly conduct the packet processing for Network Functions (NFs) on General-Purpose Computing (GPC) platforms. The input/output (I/O)-intensive and latency-sensitive packet processing is challenging for the operating systems and hypervisors running on GPC platforms. This article surveys the existing enabling technologies and research studies on operating system and hypervisor aspects that directly influence the packet processing for NFs on GPC platforms. We organize this survey according to the main categories of abstraction approach, memory access, and I/O strategy. We further divide the abstraction approach technologies and research studies into the subcategories of operating systems, hypervisors, and containers. We partition the memory access category into the two subcategories of memory allocation and memory access, while we partition the I/O strategy category into the subcategories of I/O device virtualization and I/O device access. Our survey gives a comprehensive summary of the capabilities and limitations of the existing enabling technologies and researched approaches for abstraction, memory access, and I/O for NF packet processing. We outline critical future research directions for advancing NF packet processing on GPC platforms.
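As an illustration of the memory allocation techniques in this taxonomy, the following Linux-only Python sketch allocates a hugepage-backed buffer, a common approach for NF packet buffers because it reduces TLB misses relative to 4 KiB pages. It assumes pre-reserved 2 MiB hugepages, and the numeric MAP_HUGETLB fallback value is the common Linux constant.

    # Minimal sketch (Linux-only): hugepage-backed memory allocation, a
    # representative memory allocation technique for NF packet buffers.
    # Assumes hugepages have been reserved beforehand, e.g.:
    #   echo 64 > /proc/sys/vm/nr_hugepages
    # 0x40000 is the common Linux MAP_HUGETLB value, used as a fallback
    # where the mmap module does not export the constant.

    import mmap

    MAP_HUGETLB = getattr(mmap, "MAP_HUGETLB", 0x40000)
    HUGE_PAGE_SIZE = 2 * 1024 * 1024  # assumes the default 2 MiB hugepages

    def alloc_hugepage_buffer(pages=1):
        flags = mmap.MAP_PRIVATE | mmap.MAP_ANONYMOUS | MAP_HUGETLB
        return mmap.mmap(-1, pages * HUGE_PAGE_SIZE, flags=flags,
                         prot=mmap.PROT_READ | mmap.PROT_WRITE)

    if __name__ == "__main__":
        buf = alloc_hugepage_buffer()
        buf[:16] = b"\x00" * 16  # touch the buffer to fault the page in
        print(f"allocated {len(buf)} bytes backed by hugepages")
        buf.close()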
With the emergence of small cell networks and fifth-generation (5G) wireless networks, the backhaul becomes increasingly complex. This study addresses the problem of how a central Software-Defined Networking (SDN) orchestrator can flexibly share the total backhaul capacity of the various wireless operators among their gateways and radio nodes (e.g., LTE evolved Node Bs (eNBs) or Wi-Fi access points). To address this backhaul resource allocation problem, we introduce a novel backhaul optimization methodology in the context of the recently proposed LayBack SDN backhaul architecture. In particular, we explore the decomposition of the central optimization problem into a layered dual decomposition model that matches the architectural layers of the LayBack backhaul architecture. To promote scalability and responsiveness, we employ different timescales, i.e., fast timescales at the radio nodes and slower timescales in the higher LayBack layers that are closer to the central SDN orchestrator. We numerically evaluate the scalable layered optimization for a specific case of the LayBack backhaul architecture with four layers, namely a radio node (eNB) layer, a gateway layer, an operator layer, and central coordination in an SDN orchestrator layer. The coordinated sharing of the total backhaul capacity among multiple operators lowers the queue lengths compared to a conventional backhaul without sharing among operators.
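The following Python sketch conveys the dual decomposition idea in a deliberately flattened form, with a single coordination layer instead of the paper's four LayBack layers: an orchestrator prices the shared backhaul capacity with a dual variable, and each radio node solves a local log-utility subproblem in closed form. The weights, capacity, and step size are illustrative assumptions.

    # Minimal sketch of dual decomposition for backhaul capacity sharing,
    # flattened to one coordination layer for brevity. Each node i solves
    # max w_i*log(x_i) - lam*x_i, whose closed-form solution is x_i = w_i/lam;
    # the orchestrator adjusts the price lam by a subgradient step on the
    # capacity constraint sum(x) <= C.

    def share_backhaul(w, C, step=0.05, iters=2000):
        lam = 1.0  # dual price on the total backhaul capacity
        x = [0.0] * len(w)
        for _ in range(iters):
            # Per-node subproblems (fast timescale): closed-form best response.
            x = [wi / lam for wi in w]
            # Orchestrator update (slow timescale): raise the price if
            # demand exceeds capacity C, lower it otherwise.
            lam = max(1e-6, lam + step * (sum(x) - C))
        return x, lam

    if __name__ == "__main__":
        alloc, price = share_backhaul(w=[1.0, 2.0, 3.0], C=12.0)
        print([round(a, 2) for a in alloc], round(price, 3))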