Directed Acyclic Graph (DAG) scheduling in a heterogeneous environment assigns jobs arriving on the fly to a cluster of heterogeneous computing executors so as to minimize the makespan while meeting all scheduling requirements. The problem has attracted more attention than ever with the rapid development of heterogeneous cloud computing. Even a small reduction in the makespan of DAG scheduling can bring substantial profits to service providers and improve the quality of service for users. Although DAG scheduling plays an important role in cloud computing industries, existing solutions still leave considerable room for improvement, especially in exploiting the topological dependencies between jobs. In this paper, we propose a task-duplication-based learning algorithm, called Lachesis, for the distributed DAG scheduling problem. Lachesis first perceives the topological dependencies between jobs using a specially designed graph convolutional network (GCN) to select the task most likely to be executed next. The task is then assigned to a specific executor, with consideration given to duplicating all of its precedent tasks according to a sophisticated heuristic method. We have conducted extensive experiments on standard workload data to evaluate our solution. The experimental results suggest that the proposed algorithm achieves up to a 26.7% reduction in makespan and a 35.2% improvement in speedup ratio over seven strong baseline algorithms, including state-of-the-art heuristic methods and a variety of deep reinforcement learning based algorithms.
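To make the makespan objective concrete, the following is a minimal sketch (not the paper's algorithm) of how the makespan of a given DAG schedule can be computed: tasks run in topological order, each starting once all its predecessors have finished and its assigned executor is free. The data layout (`durations`, `deps`, `assignment`) is a hypothetical illustration, and communication costs and task duplication are ignored.

```python
from collections import defaultdict, deque

def makespan(durations, deps, assignment):
    """Compute the makespan of a DAG schedule (illustrative sketch).

    durations:  {task: execution time on its assigned executor}
    deps:       {task: list of predecessor tasks}
    assignment: {task: executor id}; tasks on one executor run serially.
    """
    # Kahn's algorithm for topological order.
    indeg = {t: len(deps.get(t, [])) for t in durations}
    succ = defaultdict(list)
    for t, ps in deps.items():
        for p in ps:
            succ[p].append(t)
    queue = deque(t for t, d in indeg.items() if d == 0)

    finish = {}               # earliest finish time of each task
    free = defaultdict(float) # time at which each executor becomes free
    while queue:
        t = queue.popleft()
        ready = max((finish[p] for p in deps.get(t, [])), default=0.0)
        start = max(ready, free[assignment[t]])
        finish[t] = start + durations[t]
        free[assignment[t]] = finish[t]
        for s in succ[t]:
            indeg[s] -= 1
            if indeg[s] == 0:
                queue.append(s)
    # Makespan is the latest finish time over all tasks.
    return max(finish.values())
```

For example, with tasks a → {b, c}, placing c on a second executor lets it overlap with b and shortens the makespan; this is the quantity a scheduler such as Lachesis seeks to minimize.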
Statistical heterogeneity is a root cause of the tension among accuracy, fairness, and robustness in federated learning (FL), and addressing it is key to paving a path forward. Personalized federated learning (PFL) aims to reduce the impact of statistical heterogeneity by developing personalized models for individual users, while also inherently providing benefits in terms of fairness and robustness. However, existing PFL frameworks focus on improving the performance of personalized models while neglecting the global model. This leaves PFL with lower solution accuracy when clients hold different kinds of heterogeneous data. Moreover, these frameworks typically achieve only sublinear convergence rates and rely on strong assumptions. In this paper, we employ the Moreau envelope as a regularized loss function and propose FLAME, an optimization framework that utilizes the alternating direction method of multipliers (ADMM) to train personalized and global models. Owing to the gradient-free nature of ADMM, FLAME alleviates the need to tune the learning rate when training the global model. We demonstrate that FLAME generalizes existing PFL and FL frameworks. Moreover, we propose a model selection strategy to improve performance in situations where clients have different types of heterogeneous data. Our theoretical analysis establishes global convergence and two kinds of convergence rates for FLAME under mild assumptions. Specifically, under the assumption of gradient Lipschitz continuity, we obtain a sublinear convergence rate. Further assuming the loss function is lower semicontinuous, coercive, and either real analytic or semialgebraic, we obtain constant, linear, and sublinear convergence rates under different conditions. We also show theoretically that FLAME is more robust and fair than state-of-the-art methods on a class of linear problems. We conduct thorough experiments using six schemes to partition non-i.i.d. data, enabling a performance comparison with state-of-the-art methods. Our experimental findings show that FLAME outperforms state-of-the-art methods in convergence and accuracy, achieves higher test accuracy under various attacks, and performs more uniformly across clients in terms of robustness and fairness.
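The Moreau-envelope regularization underlying FLAME can be sketched as follows (a standard formulation; the symbols here are illustrative rather than taken from the paper): each client $i$ optimizes a personalized model $\theta_i$ regularized toward the global model $w$,

```latex
\tilde{f}_i(w) \;=\; \min_{\theta_i \in \mathbb{R}^d}
\Bigl\{\, f_i(\theta_i) \;+\; \tfrac{\lambda}{2}\,\lVert \theta_i - w \rVert^2 \,\Bigr\},
```

where $f_i$ is client $i$'s local loss and $\lambda > 0$ controls how tightly personalized models are coupled to the global one: larger $\lambda$ recovers conventional FL behavior, while smaller $\lambda$ allows more personalization.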