Driven by the increasing demand for cloud computing, cloud providers are constantly seeking configuration mechanisms that make installing a reliable, easy-to-manage cloud architecture as simple as installing an operating system on a computer from a thumb drive. Accordingly, cloud software components can be packaged into lightweight, portable containers and then easily deployed and managed through orchestration tools such as Kubernetes. Similarly, load balancers can be deployed in containerized cloud environments and managed as containers, simplifying the process of scaling in or out according to network status or the volume of incoming traffic. In this study, we implemented a containerized high-performance load balancer that distributes traffic using eBPF/XDP within the Linux kernel and can easily be managed via Kubernetes. We compared the performance of the proposed load balancer with iptables DNAT and loopback based on the RFC 2544 performance standard, and also performed tests simulating real-world traffic patterns using IMIX traffic streams. Our experimental results indicate that the throughput of the proposed load balancer is considerably higher than that of iptables DNAT, and the performance gap widened as packet size decreased. The difference in performance between loopback (representing the theoretical maximum performance limit) and the proposed load balancer was minimal.
“…[24]. The key problem in managing a StatefulSet is the absence of mechanisms for live migration and for automatically creating a replacement for a failed StatefulSet [25,26]. Therefore, solving the problem of state preservation when using container technology is an active area of research.…”
Digital twins of processes and devices use information from sensors to synchronize their state with entities in the physical world. The concept of stream computing enables effective processing of the events generated by such sensors. However, the need to track the state of each object instance makes it impossible to organize digital-twin instances as stateless services. Another feature of digital twins is that several tasks built on them must be able to respond to incoming events at near-real-time speed. In this case, the use of cloud computing becomes unacceptable due to high latency. Fog computing addresses this problem by moving some computational tasks closer to the data sources. One recent approach to building loosely coupled distributed systems is the microservice approach, which organizes a distributed system as a set of coherent, independent services interacting with each other through messages. Microservices are most often isolated using containers to avoid the high overhead of virtual machines. The main problem is that microservices and containers are stateless by nature. Container technology still does not fully support live container migration between physical hosts without data loss, which makes it challenging to ensure the uninterrupted operation of services in fog computing environments. Thus, an essential challenge is to create a containerized, stateful, stream-processing-based microservice to support digital twins in the fog computing environment. Within the scope of this article, we study live stateful stream-processing migration and how to redistribute computational activity across cloud and fog nodes using Kafka middleware and its Streams DSL API.
“…In Reference [20], a method was proposed to improve the availability of stateful services through common storage for the active and standby pods, but node-level fault handling was not considered.…”
Section: State of the Art: Fault Detection and Recovery Mechanisms (mentioning)
confidence: 99%
“…Abdollahi et al. [20] proposed a method of ensuring availability based on appropriate storage management. The service was configured with a redundancy model, and an architecture was proposed to share data via a Persistent Volume (PV).…”
Section: Introduction (mentioning)
confidence: 99%
“…A state controller was proposed on top of the existing architecture to configure two pods as an active/standby model sharing one PV for data. From the perspective of availability, the proposal in Reference [20] is also relevant, but further research is needed on how to guarantee availability in the presence of node faults so as to reduce the service outage they cause.…”
The container-based cloud is used in various service infrastructures because it is lighter and more portable than a virtual machine (VM)-based infrastructure and can be configured in both bare-metal and VM environments. The Internet-of-Things (IoT) cloud-computing infrastructure is also evolving from a VM-based to a container-based infrastructure. In IoT clouds, the service availability of the cloud infrastructure is more important for mission-critical IoT services, such as real-time health monitoring, vehicle-to-vehicle (V2V) communication, and industrial IoT, than for general computing services. However, in a container environment running on a VM, the current fault-detection method considers only the container's infrastructure, limiting the level of availability achievable for mission-critical IoT cloud services. Therefore, in such an environment, fault-detection and recovery methods that consider both the VM and container levels are necessary. In this study, we analyzed the fault-detection architecture of a container environment and designed and implemented a Fast Fault Detection Manager (FFDM) architecture using OpenStack and Kubernetes to realize fast fault detection. Through performance measurements, we verified that the FFDM can improve the fault-detection time by more than a factor of three over the existing method.