The need for effective and fair resource allocation in cloud computing has long been identified in both the literature and industrial contexts. Cloud computing, seen as a promising technology, offers usage-based payment and scalable, on-demand computing resources. However, over the past decade, the growing complexity of the IT world has made Quality of Service (QoS) in the cloud a challenging subject and an NP-hard problem. In particular, the fair allocation of resources in the cloud becomes especially interesting when many users submit multiple tasks that each require several resources. Research in this area has been growing since 2012, when the Dominant Resource Fairness (DRF) algorithm was introduced as an initial attempt to solve the fair resource allocation problem in the cloud. Although DRF satisfies several desirable fairness properties, it has been shown to be inefficient under certain conditions. Notably, DRF and the works extending it are not intuitively fair in all cases: those implementations have been unable to utilize all the resources in the system, leaving it imbalanced with respect to individual resource types. To address these issues, we propose in this paper a novel algorithm, the Fully Fair Multi-Resource Allocation Algorithm in Cloud Environments (FFMRA), which allocates resources in a fully fair way by considering both dominant and non-dominant shares. The results from experiments conducted in CloudSim show that FFMRA achieves approximately 100% resource utilization and distributes resources fairly among users while meeting desirable fairness properties.
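The dominant-share mechanism that DRF introduced, and that FFMRA extends, can be illustrated with a minimal sketch. The cluster capacities and per-task demands below are hypothetical illustration values (the classic two-user setting), and the code shows plain DRF progressive filling, not FFMRA itself:

```python
# Minimal sketch of DRF progressive filling (illustrative values, not FFMRA).
capacity = {"cpu": 9.0, "mem": 18.0}           # total cluster resources
demands = {                                     # per-task demand of each user
    "A": {"cpu": 1.0, "mem": 4.0},              # A's dominant resource: memory
    "B": {"cpu": 3.0, "mem": 1.0},              # B's dominant resource: CPU
}

used = {r: 0.0 for r in capacity}
allocated = {u: {r: 0.0 for r in capacity} for u in demands}

def dominant_share(user):
    # A user's dominant share: max fraction of any resource they hold.
    return max(allocated[user][r] / capacity[r] for r in capacity)

def fits(demand):
    return all(used[r] + demand[r] <= capacity[r] for r in capacity)

while True:
    # Allocate one task to the user with the smallest dominant share
    # whose next task still fits in the remaining capacity.
    candidates = [u for u in demands if fits(demands[u])]
    if not candidates:
        break
    u = min(candidates, key=dominant_share)
    for r in capacity:
        used[r] += demands[u][r]
        allocated[u][r] += demands[u][r]

print(allocated)
```

With these demands, progressive filling equalizes the two users' dominant shares (each ends at 2/3), but leaves some CPU and memory idle, which is the under-utilization that motivates considering non-dominant shares as well.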
Cloud computing is a novel paradigm that provides on-demand, scalable, pay-as-you-use computing resources in virtualized form. With cloud computing, users can access large pools of resources from anywhere, without limitation. To use cloud facilities efficiently, resource management must be considered from several aspects; among them, resource allocation has received much attention. Given that the cloud is heterogeneous, resource allocation has to become correspondingly more sophisticated. As a first promising work on this problem, Dominant Resource Fairness (DRF) was proposed, which takes users' dominant shares into account. Although DRF satisfies several desirable fairness properties, it has limitations that have already been identified in the literature. Unfortunately, DRF and its recent developments are not intuitively fair with respect to varying resource demands. In this paper, we propose the Multi-level Fair Dominant Resource Scheduling (MLF-DRS) algorithm, a new allocation model inspired by max-min fairness and proportionality. Unlike other works that equalize dominant shares across resource types, which can starve some users by preventing them from maximizing their allocations, our algorithm guarantees that each user receives the resources they desire based on dominant shares. As the mathematical proofs show, MLF-DRS achieves full utilization of resources, meets several desirable fair-allocation properties, and admits a natural extension to settings with multiple servers.
Containerization has become an approach that facilitates application deployment and delivers scalability, productivity, security, and portability. As a first promising platform, Docker was introduced in 2013 to automate the deployment of applications, and it offers many advantages for delivering cloud-native services. However, its widespread use has revealed problems such as performance overhead. To deal with those problems, Kubernetes was introduced in 2015 as a container orchestration platform that simplifies the management of containers. Kubernetes makes it easy to manage large numbers of Docker containers; however, fairness, which has been applied in other platforms such as Apache Hadoop YARN and Mesos, remains a missing feature in Kubernetes. Assigning resource limits fairly among pods in Kubernetes is challenging, as some applications may require intensive resources, such as CPU and memory, that should be maximized to satisfy them. To that end, in this paper we present a novel way to assign resource limits fairly among pods in a Kubernetes environment.
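To make the problem concrete, one naive baseline is to split a node's capacity among pods in proportion to their requests. The node capacity, pod names, and request values below are hypothetical, and this simple weighted split is only a baseline illustration, not the fairness mechanism the paper proposes:

```python
# Hypothetical baseline: divide a node's capacity among pods in proportion
# to their requests (illustration only, not the paper's mechanism).
node = {"cpu": 4000, "mem": 8192}            # millicores, MiB (example values)

pods = {                                      # hypothetical pod requests
    "web":   {"cpu": 500,  "mem": 1024},
    "batch": {"cpu": 2000, "mem": 1024},
    "cache": {"cpu": 500,  "mem": 4096},
}

def proportional_limits(node, pods):
    # For each resource, scale every pod's request so the limits sum
    # exactly to the node's capacity for that resource.
    limits = {p: {} for p in pods}
    for r, cap in node.items():
        total = sum(req[r] for req in pods.values())
        for p, req in pods.items():
            limits[p][r] = cap * req[r] / total
    return limits

limits = proportional_limits(node, pods)
```

A split like this uses all capacity but can over-reward pods with inflated requests; a fairness-aware scheme must instead balance CPU-intensive and memory-intensive pods against each other, which is the gap the abstract describes.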
Cloud computing is a paradigm that has become popular over the past decade. Flexibility, scalability, elasticity, and inexpensive, virtually unlimited use of resources have made the cloud an efficient and valuable infrastructure for many organizations' computational operations. At the same time, the elasticity feature of cloud computing increases the complexity of this technology. Considering the emergence of new technologies and growing user demands, existing solutions cannot satisfy the huge volume of data and user requirements. Moreover, quality requirements that must be met for efficient resource provisioning, such as Quality of Service (QoS), are an obstacle to scalability. Hence, autonomic computing has emerged as a highly dynamic solution for complex administration issues, going beyond simple automation toward self-learning, highly adaptable systems. The combination of cloud computing and autonomics, known as Autonomic Cloud Computing (ACC), therefore seems a natural progression for both areas. This paper is an overview of the latest research conducted in ACC and the corresponding software engineering techniques. Additionally, existing autonomic applications, methods, and their use cases in cloud computing environments are investigated.
Task scheduling in cloud computing is a significant issue that has attracted much attention over the last decade. In cloud environments, users show considerable interest in submitting tasks over multiple resource types. Consequently, finding an optimal and efficient server to host users' tasks is a fundamental concern. Several studies have proposed algorithms employing swarm optimization and heuristic methods to solve the scheduling issues associated with the cloud from a multi-resource perspective. However, these approaches have not considered equalizing the number of dominant resources on each specific resource type. This substantial gap leads to unfair allocation, SLA degradation, and resource contention. To deal with this problem, in this paper we propose a novel task scheduling mechanism called MRFS. MRFS employs Lagrangian multipliers to place tasks on suitable servers with respect to the number of dominant resources and maximum resource availability. To evaluate MRFS, we conduct time-series experiments in CloudSim driven by randomly generated workloads. The results show that MRFS improves the per-user utility function by 15-20% in FFMRA compared to FFMRA in the absence of MRFS. Furthermore, the mathematical proofs confirm that the sharing-incentive and Pareto-efficiency properties are improved under MRFS.
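The placement goal described above, matching each task to a server according to its dominant resource and the server's remaining capacity, can be sketched greedily. The server capacities and task demands below are hypothetical, and this greedy rule only illustrates the objective; it is not MRFS's Lagrangian-multiplier method:

```python
# Hypothetical sketch of dominant-resource-aware placement: put each task on
# the feasible server with the most remaining capacity on the task's dominant
# resource (illustration of the goal, not MRFS's actual optimization).
servers = {
    "s1": {"cpu": 8.0, "mem": 16.0},
    "s2": {"cpu": 16.0, "mem": 8.0},
}
tasks = [
    {"cpu": 4.0, "mem": 1.0},    # CPU-heavy task
    {"cpu": 1.0, "mem": 6.0},    # memory-heavy task
]

def dominant(task, capacity):
    # The task's dominant resource: largest demand-to-capacity ratio.
    return max(task, key=lambda r: task[r] / capacity[r])

placement = []
for task in tasks:
    # Current aggregate capacity across servers, per resource.
    total = {r: sum(s[r] for s in servers.values()) for r in task}
    dom = dominant(task, total)
    # Among servers that can host the task, prefer the one with the most
    # remaining capacity on the task's dominant resource.
    feasible = [n for n, s in servers.items()
                if all(s[r] >= task[r] for r in task)]
    best = max(feasible, key=lambda n: servers[n][dom])
    for r in task:
        servers[best][r] -= task[r]
    placement.append(best)

print(placement)
```

Here the CPU-heavy task lands on the CPU-rich server and the memory-heavy task on the memory-rich one, keeping both resource types in play, which is the contention-avoidance effect the abstract attributes to dominant-resource-aware placement.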
The use of mature, reliable, and validated solutions can save significant time and cost when introducing new technologies to companies. Reference Architectures represent such best-practice techniques and have the potential to increase the speed and reliability of the development process in many application domains. One area where Reference Architectures are increasingly utilized is cloud-based systems. Exploiting the high-performance computing capability offered by clouds, while keeping sovereignty and governance over proprietary information assets, can be challenging. This paper explores how Reference Architectures can be applied to overcome this challenge when developing cloud-based applications. The presented approach was developed within the DIGITbrain European project, which aims at supporting small and medium-sized enterprises (SMEs) and mid-caps in realizing smart business models called Manufacturing as a Service, via the efficient utilization of Digital Twins. In this paper, an overview of Reference Architecture concepts, as well as their classification, specialization, and particular application possibilities, is presented. Various data management and potentially spatially detached data processing configurations are discussed, with special attention to machine learning techniques, which are of high interest within various sectors, including manufacturing. A framework that enables the deployment and orchestration of such overall data analytics Reference Architectures on cloud resources is also presented, followed by a demonstrative application example in which the applicability of the introduced techniques and solutions is showcased in practice.
The fair and efficient allocation of multiple types of resources has become a substantial concern in state-of-the-art computing systems. Accordingly, the rapid growth of cloud computing has highlighted the importance of resource management as a complicated, NP-hard problem. Unlike in traditional frameworks, incoming jobs in modern data centers pose demand profiles that include diverse sets of resources, such as CPU, memory, and bandwidth, across multiple servers. The fair distribution of resources respecting such heterogeneity therefore appears to be a challenging issue. Furthermore, efficient resource use and fairness establish a trade-off whose resolution yields a higher degree of satisfaction for both users and providers. Dominant Resource Fairness (DRF) was introduced as an initial attempt to address fair resource allocation in multi-resource cloud computing infrastructures, and dozens of approaches have since been proposed to overcome its shortcomings. Although these developments satisfy several desirable fairness properties, substantial gaps remain. First, it is not clear how to measure the fair allocation of resources among users. Second, no existing trade-off considers non-dominant resources in allocation decisions. Third, the resulting allocations are not intuitively fair, as some users are unable to maximize their allocations. In particular, recent approaches have not considered aggregate resource demands with respect to dominant and non-dominant resources across multiple servers. These issues lead to an uneven allocation of resources over numerous servers, which obstructs utility maximization for some users with dominant resources. Correspondingly, in this paper, a resource allocation algorithm called H-FFMRA is proposed to distribute resources fairly across servers and users, considering both dominant and non-dominant resources.
The experiments show that H-FFMRA achieves approximately 20% improvement in fairness, as well as full utilization of resources, compared to DRF in multi-server settings.