After a disaster, effective communication and information sharing among emergency response team members play a crucial role in a successful disaster response phase. With dedicated roles and missions assigned to responders, role-based communication is a pivotal feature that an emergency communication network needs to support. Previous works have shown that Named Data Networking (NDN) has many advantages over traditional IP-based networks in providing this feature; however, these studies are simulation-based only. To apply NDN in disaster scenarios, a real implementation of a deployment architecture over the infrastructure available during the disaster should be considered. Beyond ensuring efficient emergency communication, the architecture should handle other disaster-related challenges such as responder mobility, intermittent network connectivity, and the possibility of node replacement due to disaster damage. In this paper, we design and implement an NDN-based disaster response support system over Edge Computing infrastructure, with KubeEdge as the chosen edge platform, to address these issues. Evaluation of our proof-of-concept system shows that the architecture achieves efficient role-based communication, short mobility handover duration, fast network convergence in case of node replacement, and loss-free information exchange between responders and the management center on the cloud.
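The abstract does not spell out the namespace design, but role-based communication in NDN typically rests on hierarchical names routed by longest-prefix match rather than host addresses. The following is a minimal self-contained sketch under that assumption; the names (e.g. /disaster/medic) and face labels are hypothetical, not the paper's actual scheme.

```python
# Toy Forwarding Information Base (FIB): role-based NDN names resolved by
# longest-prefix match on name components. Illustrative only.

class Fib:
    def __init__(self):
        self.routes = {}  # name prefix (tuple of components) -> next-hop face

    def register(self, prefix: str, face: str):
        self.routes[tuple(prefix.strip("/").split("/"))] = face

    def lookup(self, name: str):
        components = tuple(name.strip("/").split("/"))
        # Try progressively shorter prefixes until one matches.
        for length in range(len(components), 0, -1):
            face = self.routes.get(components[:length])
            if face is not None:
                return face
        return None

fib = Fib()
# Each responder role registers its own prefix, so an Interest addressed to a
# role reaches whichever node currently serves that role, not a fixed host.
fib.register("/disaster/medic", "face-to-medic-team")
fib.register("/disaster/fire", "face-to-fire-team")
fib.register("/disaster/hq", "face-to-cloud-center")

print(fib.lookup("/disaster/medic/area-3/status"))  # -> face-to-medic-team
print(fib.lookup("/disaster/hq/reports/summary"))   # -> face-to-cloud-center
```

Because forwarding is keyed on the role prefix, replacing a damaged node only requires re-registering the prefix on its successor, which is consistent with the fast convergence the abstract reports.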
In edge computing, scheduling heterogeneous workloads with diverse resource requirements is challenging. Beyond limited resources, servers may be overwhelmed with computational tasks, resulting in long task queues and congestion caused by unusual network traffic patterns. Additionally, Internet of Things (IoT)/edge applications have distinct characteristics and performance requirements, which determine whether edge applications can satisfy both deadlines and each user's Quality of Service (QoS) requirements. This study addresses these restrictions by proposing a mechanism that improves cluster resource utilization and QoS, measured as service time, in an edge cloud cluster. Containerization can improve the performance of the IoT-edge cloud by accounting for task dependencies and heterogeneous application resource demands. In this paper, we propose STaSA, a service time aware scheduler for the edge environment. The algorithm automatically assigns requests to different processing nodes and then schedules their execution under real-time constraints, thus minimizing the number of QoS violations. We demonstrate the effectiveness of our scheduling model by implementing it on KubeEdge, a container orchestration platform based on Kubernetes. Experimental results show significantly fewer QoS violations during scheduling and improved performance compared to the state of the art.
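The abstract does not reproduce STaSA's algorithm, so the sketch below only illustrates the general idea of service-time-aware, deadline-driven placement: estimate each node's completion time for a request, pick the fastest, and count a QoS violation when no node can meet the deadline. All classes, capacities, and workloads here are invented for illustration.

```python
# Hypothetical service-time-aware placement heuristic (not the actual STaSA
# algorithm or its KubeEdge integration).

import math
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    capacity: float          # abstract "work units per second"
    queued_work: float = 0.0 # work units already waiting on this node

    def estimated_service_time(self, work: float) -> float:
        # Queueing delay plus processing time for this request.
        return (self.queued_work + work) / self.capacity

@dataclass
class Request:
    name: str
    work: float      # work units required
    deadline: float  # seconds allowed end to end

def schedule(requests: list[Request], nodes: list[Node]) -> int:
    """Greedy assignment; returns the number of deadline (QoS) violations."""
    violations = 0
    for req in sorted(requests, key=lambda r: r.deadline):  # earliest deadline first
        best = min(nodes, key=lambda n: n.estimated_service_time(req.work))
        if best.estimated_service_time(req.work) > req.deadline:
            violations += 1
        best.queued_work += req.work
    return violations

nodes = [Node("edge-1", capacity=4.0), Node("edge-2", capacity=2.0)]
reqs = [Request("r1", work=2.0, deadline=1.0),
        Request("r2", work=4.0, deadline=1.5),
        Request("r3", work=1.0, deadline=0.5)]
print("QoS violations:", schedule(reqs, nodes))  # -> 1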
One of the main challenges in deploying container services is providing the scalability to satisfy service performance requirements while avoiding resource wastage. To address this challenge, Kubernetes provides two scaling modes: vertical and horizontal. Several existing autoscaling methods attempt to improve the default autoscalers in Kubernetes; however, most of these works focus on only one scaling mode at a time, which leads to limitations. Horizontal scaling alone may result in low container utilization due to the fixed amount of resources per instance, especially in low-request periods. In contrast, vertical scaling alone may not meet quality of service (QoS) requirements under bursty workloads once the upper resource limit is reached. Besides, auto-scalers also need burst identification to guarantee service performance. This paper proposes a hybrid autoscaling method with burst awareness for containerized applications. The new approach combines vertical and horizontal scaling to satisfy the QoS requirement while optimizing container utilization. Our proposal uses a machine-learning-based predictive method to forecast the application's future demand and combines it with a burst identification module, making scaling decisions more effective. Experimental results show an improvement in keeping the response time below the QoS constraint while maintaining higher utilization of the deployment compared with existing baseline methods in a single scaling mode.

INDEX TERMS Cloud computing, Kubernetes, autoscaling, machine learning, workload forecasting.
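The abstract gives the policy only at a high level, so the following is a minimal sketch of one plausible reading: scale vertically (resize pods) while there is headroom, and switch to horizontal scaling when the burst detector fires or the per-pod limit is exhausted. The predictor output, thresholds, and CPU limits are all illustrative assumptions, not the paper's actual model.

```python
# Hypothetical hybrid (vertical + horizontal) scaling decision with burst
# awareness. Inputs would come from a workload forecaster and a burst
# identification module, neither of which is reproduced here.

import math

def decide(predicted_load, burst, replicas, cpu_per_pod,
           cpu_limit=2.0, cpu_min=0.25, target_util=0.7):
    """Return (replicas, cpu_per_pod) for the next control interval."""
    needed_cpu = predicted_load / target_util  # total CPU to keep utilization on target
    if burst or needed_cpu > replicas * cpu_limit:
        # Horizontal: bursty or beyond vertical headroom -> change replica count.
        replicas = max(1, math.ceil(needed_cpu / cpu_per_pod))
    else:
        # Vertical: resize pods within [cpu_min, cpu_limit] to avoid waste.
        cpu_per_pod = min(cpu_limit, max(cpu_min, needed_cpu / replicas))
    return replicas, cpu_per_pod

# Quiet period: shrink pods instead of over-provisioning replicas.
print(decide(predicted_load=0.5, burst=False, replicas=2, cpu_per_pod=1.0))
# Burst detected: jump straight to more replicas so the QoS target holds.
print(decide(predicted_load=6.0, burst=True, replicas=2, cpu_per_pod=1.0))
```

This split mirrors the motivation above: vertical resizing keeps per-container utilization high in low-request periods, while horizontal scale-out provides the headroom that vertical scaling cannot once a burst pushes demand past the per-pod limit.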