We develop a new distributed algorithm to solve the ridge regression problem with feature partitioning of the observation matrix. The proposed algorithm, named D-Ridge, is based on the alternating direction method of multipliers (ADMM) and estimates the parameters when the observation matrix is distributed among different agents with feature (or vertical) partitioning. We formulate the associated ridge regression problem as a distributed convex optimization problem and utilize the ADMM to obtain an iterative solution. Numerical results demonstrate that D-Ridge converges faster than its diffusion-based counterpart.
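As a point of reference for the problem D-Ridge solves, the following is a minimal centralized sketch of ridge regression with feature (vertical) partitioning; the dimensions, penalty value, and two-agent column split are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Illustrative sketch: the centralized ridge regression problem that a
# feature-partitioned distributed solver would reproduce.
rng = np.random.default_rng(0)
n, d = 50, 8
A = rng.standard_normal((n, d))                  # observation matrix
y = A @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)
lam = 1.0                                        # ridge penalty (assumed value)

# Closed-form centralized solution: x* = (A^T A + lam I)^{-1} A^T y
x_star = np.linalg.solve(A.T @ A + lam * np.eye(d), A.T @ y)

# Feature (vertical) partitioning splits the COLUMNS of A across agents;
# with two agents, A = [A1 A2] and each agent holds one block of features.
A1, A2 = A[:, :4], A[:, 4:]
assert np.allclose(np.hstack([A1, A2]), A)
```

A distributed scheme in this setting would let each agent update its own block of coefficients while ADMM enforces consistency of the shared residual.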
We develop a privacy-preserving distributed algorithm to minimize a regularized empirical risk function when the first-order information is not available and data are distributed over a multi-agent network. We employ a zeroth-order method to minimize the associated augmented Lagrangian function in the primal domain using the alternating direction method of multipliers (ADMM). We show that the proposed algorithm, named distributed zeroth-order ADMM (D-ZOA), has intrinsic privacy-preserving properties. Most existing privacy-preserving distributed optimization/estimation algorithms exploit some perturbation mechanism to preserve privacy, which comes at the cost of reduced accuracy. Contrarily, by analyzing the inherent randomness due to the use of a zeroth-order method, we show that D-ZOA is intrinsically endowed with (ε, δ)-differential privacy. In addition, we employ the moments accountant method to show that the total privacy leakage of D-ZOA grows sublinearly with the number of ADMM iterations. D-ZOA outperforms the existing differentially-private approaches in terms of accuracy while yielding a similar privacy guarantee. We prove that D-ZOA reaches a neighborhood of the optimal solution whose size depends on the privacy parameter. The convergence analysis also reveals a practically important trade-off between privacy and accuracy. Simulation results verify the desirable privacy-preserving properties of D-ZOA and its superiority over the state-of-the-art algorithms as well as its network-wide convergence.
We propose a new distributed algorithm to solve the total least-squares (TLS) problem when data are distributed over a multi-agent network. To develop the proposed algorithm, named distributed ADMM TLS (DA-TLS), we reformulate the TLS problem as a parametric semidefinite program and solve it using the alternating direction method of multipliers (ADMM). Unlike the existing consensus-based approaches to distributed TLS estimation, DA-TLS does not require careful tuning of any design parameter. Numerical experiments demonstrate that DA-TLS converges to the centralized solution significantly faster than the existing consensus-based TLS algorithms.
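For context, the classical centralized TLS solution, which a distributed scheme such as DA-TLS would aim to match, can be read off the SVD of the augmented matrix [A b]; the problem sizes and noise level below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Illustrative sketch: centralized total least squares via the SVD.
# TLS accounts for errors in BOTH the observation matrix A and the
# measurement vector b (errors-in-variables model).
rng = np.random.default_rng(1)
n, d = 40, 3
A = rng.standard_normal((n, d))
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true

# Perturb both A and b
A_noisy = A + 0.01 * rng.standard_normal((n, d))
b_noisy = b + 0.01 * rng.standard_normal(n)

# The TLS estimate comes from the right singular vector of [A b]
# associated with the smallest singular value.
_, _, Vt = np.linalg.svd(np.column_stack([A_noisy, b_noisy]))
v = Vt[-1]
x_tls = -v[:d] / v[d]
```

With small perturbations on both sides, `x_tls` recovers the underlying parameter vector closely.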
We develop a new distributed algorithm to solve a learning problem with non-smooth objective functions when data are distributed over a multi-agent network. We employ a zeroth-order method to minimize the associated augmented Lagrangian in the primal domain using the alternating direction method of multipliers (ADMM) to develop the proposed algorithm, named distributed zeroth-order based ADMM (D-ZOA). Unlike most existing algorithms for non-smooth optimization, which rely on calculating subgradients or proximal operators, D-ZOA only requires function values to approximate gradients of the objective function. Convergence of D-ZOA to the centralized solution is confirmed via theoretical analysis and simulation results.
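A minimal sketch of a two-point zeroth-order gradient estimator of the kind the abstract describes, using only function values; the smoothing parameter, random-direction scheme, and test function are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def zo_gradient(f, x, mu=1e-4, rng=None):
    """Two-point zeroth-order gradient estimate along a random direction.

    Uses only function evaluations, so it applies even when subgradients
    or proximal operators of f are unavailable.
    """
    if rng is None:
        rng = np.random.default_rng()
    u = rng.standard_normal(x.shape)              # random Gaussian direction
    return (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u

# Check on a smooth quadratic f(x) = ||x||^2 / 2, whose gradient is x.
f = lambda x: 0.5 * np.dot(x, x)
x = np.array([1.0, -2.0])
g_hat = zo_gradient(f, x, rng=np.random.default_rng(0))

# A single estimate is noisy; averaging many estimates recovers the
# true gradient in expectation (E[u u^T] = I for Gaussian u).
g_avg = np.mean([zo_gradient(f, x, rng=np.random.default_rng(k))
                 for k in range(2000)], axis=0)
```

The estimator's randomness comes from the direction `u`, which is the kind of inherent perturbation the D-ZOA papers analyze.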
We develop a privacy-preserving distributed algorithm to minimize a regularized empirical risk function when the first-order information is not available and data are distributed over a multi-agent network. We employ a zeroth-order method to minimize the associated augmented Lagrangian function in the primal domain using the alternating direction method of multipliers (ADMM). We show that the proposed algorithm, named distributed zeroth-order ADMM (D-ZOA), has intrinsic privacy-preserving properties. Unlike the existing privacy-preserving methods based on the ADMM where the primal or the dual variables are perturbed with noise, the inherent randomness due to the use of a zeroth-order method endows D-ZOA with intrinsic differential privacy. By analyzing the perturbation of the primal variable, we show that the privacy leakage of the proposed D-ZOA algorithm is bounded. In addition, we employ the moments accountant method to show that the total privacy leakage grows sublinearly with the number of ADMM iterations. D-ZOA outperforms the existing differentially private approaches in terms of accuracy while yielding the same privacy guarantee. We prove that D-ZOA converges to the optimal solution at a rate of O(1/M), where M is the number of ADMM iterations. The convergence analysis also reveals a practically important trade-off between privacy and accuracy. Simulation results verify the desirable privacy-preserving properties of D-ZOA and its superiority over a state-of-the-art algorithm as well as its network-wide convergence to the optimal solution.
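To illustrate why sublinear growth of the total privacy loss over iterations matters, the sketch below compares basic composition (linear in the number of iterations T) with the advanced composition theorem (roughly proportional to √T for small ε). This is a generic differential-privacy calculation, not the paper's moments-accountant analysis, and the parameter values are illustrative assumptions.

```python
import math

# Basic composition: T runs of an eps-DP mechanism are (T*eps)-DP.
def basic_composition(eps, T):
    return T * eps

# Advanced composition theorem: T runs of an eps-DP mechanism are
# (eps', T*delta + delta')-DP with
#   eps' = eps * sqrt(2 T ln(1/delta')) + T * eps * (exp(eps) - 1),
# which grows like sqrt(T) when eps is small.
def advanced_composition(eps, T, delta_prime=1e-5):
    return (eps * math.sqrt(2 * T * math.log(1 / delta_prime))
            + T * eps * (math.exp(eps) - 1))

eps, T = 0.01, 1000
loss_basic = basic_composition(eps, T)        # grows linearly in T
loss_advanced = advanced_composition(eps, T)  # grows roughly like sqrt(T)
```

Tighter accounting methods such as the moments accountant refine this further, which is why the total leakage in the abstracts above grows only sublinearly with the number of ADMM iterations.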