2020
DOI: 10.1109/tifs.2019.2947867
Recycled ADMM: Improving the Privacy and Accuracy of Distributed Algorithms

Abstract: The alternating direction method of multipliers (ADMM) is a powerful method for solving decentralized convex optimization problems. In distributed settings, each node performs computation with its local data, and the local results are exchanged among neighboring nodes in an iterative fashion. During this iterative process, data privacy leakage arises and can accumulate significantly over many iterations, making it difficult to balance the privacy-accuracy tradeoff. We propose Recycled ADMM (R-ADMM), where a linea…

Cited by 51 publications (101 citation statements). References 27 publications.
“…Therefore, distributed machine learning helps to reduce the computational burden and improves both the robustness and scalability of data processing. As pointed out in recent studies [1], [2], existing approaches to decentralizing an optimization problem mainly consist of subgradient-based algorithms [3], [4], alternating direction method of multipliers (ADMM) based algorithms [5]–[8], and composites of sub-gradient descent and ADMM [9]. It has been shown that ADMM-based algorithms can converge at the rate of O(1/t) while subgradient-based algorithms typically converge at the rate of O(1/√t), where t is the number of iterations [10].…”
Section: Introduction (mentioning)
confidence: 59%
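The practical gap between the two quoted rates is easiest to see by asking how many iterations each implies for a target accuracy. The snippet below is a constant-free, purely illustrative calculation; the function name and the choice of target 1e-3 are ours, not taken from the cited works.

```python
def iterations_needed(target_error, rate):
    """Iterations t at which the constant-free rate bound reaches target_error."""
    if rate == "1/t":          # ADMM-style rate: error ~ 1/t        -> t ~ 1/error
        return 1.0 / target_error
    if rate == "1/sqrt(t)":    # subgradient rate: error ~ 1/sqrt(t) -> t ~ 1/error**2
        return 1.0 / target_error ** 2
    raise ValueError(rate)

eps = 1e-3
print(iterations_needed(eps, "1/t"))        # ~1e3 iterations
print(iterations_needed(eps, "1/sqrt(t)"))  # ~1e6 iterations
```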
“…D. Private ADMM [26] & Private M-ADMM [27]: In private ADMM [26], the noise is added either to the updated primal variable before broadcasting it to the node's neighbors (primal variable perturbation), or to the dual variable before the primal variable is updated using (8) (dual variable perturbation). The privacy property is only evaluated for a single node and a single iteration; neither method balances the privacy-utility tradeoff well once the total privacy loss is considered.…”
Section: Conventional ADMM (mentioning)
confidence: 99%
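As a rough illustration of the primal-variable-perturbation scheme described in this excerpt (noise added to the updated primal variable before it is broadcast), here is a minimal sketch. The local subproblem solver, the Laplace noise, and all constants are placeholder assumptions for illustration, not the calibrated mechanism of [26] or [27].

```python
import numpy as np

rng = np.random.default_rng(0)

def local_primal_update(x, dual, neighbor_xs, rho):
    # Placeholder for the node's local subproblem: a real implementation would
    # minimize the local loss plus the augmented-Lagrangian terms.
    return x - 0.1 * (dual + rho * (x - np.mean(neighbor_xs, axis=0)))

def perturbed_primal_broadcast(x, dual, neighbor_xs, rho, noise_scale):
    """Update the primal variable locally, then perturb it before sharing."""
    x_new = local_primal_update(x, dual, neighbor_xs, rho)
    noise = rng.laplace(scale=noise_scale, size=x_new.shape)
    return x_new + noise  # only this noisy iterate leaves the node

# Example: a 5-dimensional model on a node with two neighbors (assumed shapes).
x = np.zeros(5)
dual = np.zeros(5)
neighbors = [np.ones(5), -np.ones(5)]
print(perturbed_primal_broadcast(x, dual, neighbors, rho=1.0, noise_scale=0.1))
```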
“…Following the same pre-processing steps as in [27], the final data includes 45,223 individuals, each represented as a 105-dimensional vector of norm at most 1. We use the logistic loss L(z) = log(1 + exp(−z)) as the loss function, with |L'| ≤ 1 and L'' ≤ c1 = 1/4.…”
Section: Numerical Experiments (mentioning)
confidence: 99%
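The two bounds quoted for the logistic loss follow from its derivatives; a quick check (our notation, primes for derivatives): L'(z) = −1/(1 + exp(z)), so |L'(z)| ≤ 1 for all z; and L''(z) = exp(z)/(1 + exp(z))² = σ(z)(1 − σ(z)) ≤ 1/4, with the maximum attained at z = 0, which matches c1 = 1/4.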
“…In addition to its original version, many variations have been presented, such as [23,24] and [12]. Several ADMM-based differentially private algorithms have been proposed: for example, [25] applied the objective perturbation technique to the original ADMM problem, [26] and [27] applied output and objective perturbation techniques, and [28] applied the gradient perturbation technique to ADMM-based algorithms in distributed settings.…”
Section: Related Work (mentioning)
confidence: 99%
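For contrast with the output- and objective-perturbation approaches listed above, a gradient-perturbation update (the technique the excerpt attributes to [28]) typically injects noise into each gradient before the descent step. The sketch below is a generic, assumed form with Gaussian noise and a fixed step size, not the exact algorithm of [28].

```python
import numpy as np

rng = np.random.default_rng(1)

def noisy_gradient_step(x, grad_fn, step_size=0.05, noise_std=0.1):
    """One gradient-perturbation update: add noise to the gradient, then step."""
    g = grad_fn(x)
    g_noisy = g + rng.normal(scale=noise_std, size=g.shape)
    return x - step_size * g_noisy

# Example with the quadratic objective 0.5 * ||x||^2 (its gradient is x itself).
x = np.ones(3)
for _ in range(100):
    x = noisy_gradient_step(x, grad_fn=lambda v: v)
print(x)  # close to zero, up to the injected noise
```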