The rapid increase in the number of IoT (Internet of Things) devices is accelerating the research on new solutions to make cloud services scalable. In this context, the novel concept of fog computing, as well as the combined fog-to-cloud computing paradigm, is becoming essential to decentralize the cloud while bringing the services closer to the end system. This paper surveys application layer communication protocols that fulfill the IoT communication requirements, and their potential for implementation in fog- and cloud-based IoT systems. To this end, the paper first briefly presents potential protocol candidates, including request-reply and publish-subscribe protocols. After that, the paper surveys these protocols based on their main characteristics, as well as the main performance issues, including latency, energy consumption and network throughput. These findings are thereafter used to place the protocols in each segment of the system (IoT, fog, cloud), thus opening up the discussion on their choice, interoperability and wider system integration. The survey is expected to be useful to system architects and protocol designers when choosing the communication protocols in an integrated IoT-to-fog-to-cloud system architecture.

Continuous innovations in hardware, software and connection solutions in the last decade have led to the expansion of the Internet of Things (IoT), with the number of connected devices growing by the day [1], [2]. The huge amount of data generated by these devices requires a proper system architecture able to both process and store all the data. While cloud-based architectures are currently used for that purpose, the new fog computing paradigm is envisioned to scale and optimize IoT infrastructures [3]. Examples of cloud-based IoT solutions have been proposed in [4], [5], [6], and a detailed analysis of properties for IoT cloud providers has been conducted in [7].
These studies have shown that cloud computing has the potential to satisfy many IoT requirements, such as monitoring of services, powerful processing of sensor data streams and visualization tasks. On the other hand, fog-based solutions are suited to address real-time processing, fast data response and latency issues, thus extending the cloud capabilities closer to the edge of the network [8]. Among the many factors that determine performance in a combined IoT, fog and cloud computing paradigm, the application layer communication, which in turn depends on the selected communication protocols, is one of the main ones. Despite the popularity and widespread usage of HTTP, the protocols currently used across the IoT, fog and cloud domains are de facto fragmented into many different solutions. This is due to the different requirements and areas that IoT needs to cover, combining the functionalities of sensors, actuators and computing power with security, connectivity and a myriad of other features. As a result, there is no common agreement on the reference architecture or adopted standards of co...
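The two protocol families the survey contrasts can be illustrated with a minimal in-memory sketch (all class, method and topic names here are illustrative, not taken from any specific protocol): request-reply requires the client to poll for every reading, while publish-subscribe lets a broker push updates to registered subscribers.

```python
class RequestReplyServer:
    """Request-reply style: the client must ask for every reading."""
    def __init__(self):
        self._readings = {}

    def put(self, sensor, value):
        self._readings[sensor] = value

    def request(self, sensor):
        return self._readings.get(sensor)  # one round trip per query


class Broker:
    """Publish-subscribe style: interested parties register once."""
    def __init__(self):
        self._subs = {}

    def subscribe(self, topic, callback):
        self._subs.setdefault(topic, []).append(callback)

    def publish(self, topic, value):
        for cb in self._subs.get(topic, []):
            cb(value)  # pushed to every subscriber, no polling


# Request-reply usage: the client explicitly asks.
srv = RequestReplyServer()
srv.put("temp", 21.5)
print(srv.request("temp"))

# Publish-subscribe usage: the subscriber is notified on publish.
broker = Broker()
received = []
broker.subscribe("room/temp", received.append)
broker.publish("room/temp", 21.5)
print(received)
```

The polling cost of request-reply versus the push model of publish-subscribe is one root of the latency and energy differences the survey examines.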
Network Function Virtualization (NFV) is a new paradigm enabling service innovation through the virtualization of traditional network functions, which can be located flexibly in the network in the form of Virtual Network Functions (VNFs). Since VNFs can only be placed onto servers located in networked data centers, a salient feature of NFV, the traffic directed to these data center areas has a significant impact on network load balancing. Network load balancing can be even more critical for an ordered sequence of VNFs, also known as a Service Function Chain (SFC), a common cloud and network service approach today. To balance the network load, VNFs can be placed in a smaller cluster of servers in the network, thus minimizing the distance to the data center. Optimizing the placement of these clusters is a challenge, as other factors also need to be considered, such as resource utilization. To address this issue, we study the problem of VNF placement with replications, and especially the potential of VNF replications to help load balance the network. We design and compare three optimization methods for the allocation and replication of VNFs: a Linear Programming (LP) model, a Genetic Algorithm (GA) and a Random Fit Placement Algorithm (RFPA). Our results show that optimum placement and replication can significantly improve load balancing, for which we also propose a GA heuristic applicable to larger networks.
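A random-fit baseline in the spirit of the RFPA named above can be sketched as follows (the paper's exact algorithm, capacities and data model may differ; this is only an illustrative assumption): each VNF of a chain is placed on a randomly chosen server that still has enough free CPU.

```python
import random

def random_fit_place(chain_demands, server_capacity, rng=None):
    """Place each VNF of a chain on a random feasible server.

    chain_demands:   CPU demand of each VNF in the chain, in order.
    server_capacity: free CPU units per server.
    Returns a list with one server index per VNF, or None if infeasible.
    """
    rng = rng or random.Random(0)
    free = list(server_capacity)
    placement = []
    for demand in chain_demands:
        # Candidate servers with enough remaining capacity.
        candidates = [s for s, cap in enumerate(free) if cap >= demand]
        if not candidates:
            return None  # no feasible server for this VNF
        s = rng.choice(candidates)
        free[s] -= demand
        placement.append(s)
    return placement

# A 3-VNF chain placed over three servers with 4 CPU units each.
placement = random_fit_place([2, 1, 3], [4, 4, 4])
print(placement)  # one server index per VNF in the chain
```

Randomized baselines like this are cheap to run and serve as a lower bound against which LP and GA placements can be compared.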
Abstract: Network Function Virtualization (NFV) is enabling the softwarization of traditional network services, commonly deployed on dedicated hardware, into generic hardware in the form of Virtual Network Functions (VNFs), which can be located flexibly in the network. However, network load balancing can be critical for an ordered sequence of VNFs, also known as a Service Function Chain (SFC), a common cloud and network service approach today. The placement of these chained functions increases the ping-pong traffic between VNFs, directly affecting the efficiency of bandwidth utilization. Optimizing the placement of these VNFs is a challenge, as other factors also need to be considered, such as resource utilization. To address this issue, we study the problem of VNF placement with replications, and especially the potential of VNF replications to help load balance the network while minimizing server utilization. In this paper we present a Linear Programming (LP) model for the optimum placement of functions, finding a trade-off between the minimization of two objectives: link utilization and CPU resource usage. The results show how the model balances the utilization of all links in the network while using minimum resources.
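The bi-objective trade-off can be illustrated with a toy brute-force version of the problem (the topology, costs and weights below are invented for the example; the paper formulates this as a full LP instead): choose a placement of two chained VNFs on three servers that minimizes a weighted sum of link utilization and CPU cost.

```python
from itertools import product

SERVERS = [0, 1, 2]
# Line topology: link utilization grows with the hop distance
# between the servers hosting consecutive VNFs of the chain.
HOPS = {(a, b): abs(a - b) for a in SERVERS for b in SERVERS}
CPU_COST = {0: 1.0, 1: 2.0, 2: 1.5}  # cost of activating each server
ALPHA, BETA = 0.6, 0.4               # trade-off weights (assumed)

def objective(placement):
    # Link term: hops traversed between consecutive VNFs of the chain.
    link = sum(HOPS[a, b] for a, b in zip(placement, placement[1:]))
    # CPU term: cost of every distinct server the chain activates.
    cpu = sum(CPU_COST[s] for s in set(placement))
    return ALPHA * link + BETA * cpu

# Exhaustively score every placement of a 2-VNF chain.
best = min(product(SERVERS, repeat=2), key=objective)
print(best, round(objective(best), 2))
```

Here co-locating both VNFs on the cheapest server wins because it zeroes the link term; shifting the weights toward BETA would instead spread load away from expensive servers, which is the trade-off the LP model explores at scale.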
Abstract: In current Data Center Networks (DCNs), Equal-Cost MultiPath (ECMP) is used as the de facto routing protocol. However, ECMP does not differentiate between short and long flows, the two main categories of flows depending on their duration (lifetime). This issue causes hot spots in the network, negatively affecting the Flow Completion Time (FCT) and the throughput, the two key performance metrics in data center networks. Previous work on load balancing proposed solutions such as splitting long flows into short flows, using per-packet forwarding approaches, and isolating the paths of short and long flows. We propose DiffFlow, a new load balancing solution which detects long flows and forwards their packets using Random Packet Spraying (RPS) with the help of SDN, whereas flows of short duration are forwarded with ECMP by default. The use of ECMP for short flows is reasonable, as it does not create the out-of-order problem; at the same time, RPS for long flows can efficiently help to load balance the entire network, given that long flows represent most of the traffic in DCNs. The results show that our DiffFlow solution outperforms the individual usage of either RPS or ECMP, while the overall throughput achieved is maintained at a level comparable to RPS.
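The forwarding decision at the heart of DiffFlow can be sketched as follows (a simplified toy model of the idea in the abstract; the byte threshold, path names and classification logic are assumptions, not the paper's implementation): short flows keep ECMP's hash-based single path, so their packets never reorder, while long flows spray packets randomly across all equal-cost paths.

```python
import random
import zlib

PATHS = ["path-A", "path-B", "path-C", "path-D"]
LONG_FLOW_BYTES = 100_000  # illustrative elephant-flow threshold

def ecmp_path(flow_id):
    """ECMP-style forwarding: hash the flow identifier so that
    every packet of the flow deterministically uses one path."""
    return PATHS[zlib.crc32(flow_id.encode()) % len(PATHS)]

def forward(flow_id, bytes_sent, rng=random):
    """DiffFlow-style decision: spray long flows, pin short ones."""
    if bytes_sent >= LONG_FLOW_BYTES:
        return rng.choice(PATHS)  # RPS: per-packet random spraying
    return ecmp_path(flow_id)     # ECMP: stable path, no reordering

# A short flow always maps to the same path across packets:
flow = "10.0.0.1:5000->10.0.0.2:80"
print(forward(flow, 4_000))
print(forward(flow, 4_000))
```

The key property is visible in the sketch: only the flows that carry most of the bytes pay the reordering risk of spraying, which is why the combination can outperform either mechanism alone.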
The Network Function Virtualization (NFV) paradigm is enabling flexibility, programmability and the implementation of traditional network functions on generic hardware, in the form of so-called Virtual Network Functions (VNFs). Today, cloud service providers use Virtual Machines (VMs) for the instantiation of VNFs in data center (DC) networks. To instantiate multiple VNFs in a typical scenario of Service Function Chains (SFCs), many important objectives need to be met simultaneously, such as server load balancing, energy efficiency and service execution time. The well-known VNF placement problem requires solutions that often consider the migration of VMs to meet these objectives. Ongoing efforts, for instance, make a strong case for migrations to minimize energy consumption, while showing that attention needs to be paid to Quality of Service (QoS) due to the service interruptions caused by migrations. To balance server allocation strategies and QoS, we propose using replications of VNFs to reduce migrations in DC networks. We propose a Linear Programming (LP) model to study the trade-off between replications, which, while beneficial to QoS, require additional server resources, and migrations, which, while beneficial to server load management, can adversely impact QoS. The results show that, for a given objective, replications can reduce the number of migrations and can also enable better server and data center network load balancing.
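The replication-versus-migration trade-off can be made concrete with a deliberately simple cost comparison (the linear cost model and all numbers below are assumptions for illustration only, not the paper's LP formulation): a migration interrupts service and so carries a QoS penalty, while a replica consumes extra server resources for as long as it runs.

```python
MIGRATION_QOS_PENALTY = 5.0  # assumed QoS cost per migration (downtime units)
REPLICA_CPU_COST = 2.0       # assumed resource cost per running replica

def total_cost(n_migrations, n_replicas, qos_weight=1.0, cpu_weight=1.0):
    """Weighted sum of QoS penalties (migrations) and resource
    costs (replicas); the weights steer the trade-off."""
    return (qos_weight * MIGRATION_QOS_PENALTY * n_migrations
            + cpu_weight * REPLICA_CPU_COST * n_replicas)

# Two plans for serving shifting demand:
# either migrate the VNF twice, or keep one extra replica running.
print(total_cost(n_migrations=2, n_replicas=0))  # migration-only plan
print(total_cost(n_migrations=0, n_replicas=1))  # replication plan
```

Under these assumed weights the replication plan is cheaper; raising `cpu_weight` (scarce server resources) tips the balance back toward migrations, which is exactly the trade-off space the LP model explores.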
Thanks to the latest advances in containerization, the serverless edge computing model is becoming close to reality. Serverless at the edge is expected to enable low-latency applications with fast autoscaling mechanisms, all running on heterogeneous and resource-constrained devices. In this work, we engineer and experimentally benchmark a serverless edge computing system architecture. We deploy a decentralized edge computing platform for serverless applications providing processing, storage and communication capabilities using only open-source software, running over heterogeneous resources (e.g., virtual machines, Raspberry Pis, or bare-metal servers). To achieve that, we provision an overlay network based on the network-agnostic Nebula technology, running over private or public networks, and use K3s to provide hardware abstraction. We benchmark the system in terms of response time, throughput and scalability using different hardware devices connected through the public Internet. The results show that, while serverless is feasible on heterogeneous devices and shows good performance on constrained devices such as Raspberry Pis, the lack of support for determining computational power and network characterization leaves much room for improvement in edge environments.
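The response-time and throughput measurements described above follow a pattern that can be sketched with a minimal benchmark harness (run here against a stand-in function rather than a deployed serverless endpoint; the study itself measures real functions over the public Internet, and `fake_invoke` is purely illustrative):

```python
import statistics
import time

def fake_invoke():
    """Stand-in for one serverless function invocation."""
    time.sleep(0.001)

def benchmark(invoke, n=50):
    """Invoke `n` times, recording per-call latency and overall rate."""
    latencies = []
    start = time.perf_counter()
    for _ in range(n):
        t0 = time.perf_counter()
        invoke()
        latencies.append(time.perf_counter() - t0)
    wall = time.perf_counter() - start
    return {
        "p50_ms": statistics.median(latencies) * 1000,  # median latency
        "throughput_rps": n / wall,                     # requests/second
    }

print(benchmark(fake_invoke))
```

Swapping `fake_invoke` for an HTTP call to a deployed function gives end-to-end numbers that fold in the overlay-network and scheduling overheads the study quantifies.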