2021
DOI: 10.48550/arxiv.2107.04370
Preprint

A Differential Private Method for Distributed Optimization in Directed Networks via State Decomposition

Abstract: In this paper, we study the problem of consensus-based distributed optimization, where a network of agents, abstracted as a directed graph, aims to collaboratively minimize the sum of all agents' cost functions. In existing distributed optimization approaches (Push-Pull/AB) for directed graphs, all agents exchange their states with neighbors to reach the optimal solution with a constant stepsize, which may lead to the disclosure of sensitive and private information. For privacy preservation, we propose a novel…
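The abstract refers to the Push-Pull/AB family of methods, in which each agent mixes estimates through a row-stochastic matrix, tracks the average gradient through a column-stochastic matrix, and uses a constant stepsize. As a point of reference only, here is a minimal NumPy sketch of that baseline update over a small directed ring; the graph, weight matrices, quadratic local costs, and stepsize are illustrative assumptions, and this is not the paper's proposed privacy-preserving method.

```python
# Minimal sketch of a Push-Pull/AB-style update over a directed graph
# (the baseline the abstract refers to, NOT the paper's private variant).
# Graph, weight matrices, local costs, and stepsize are illustrative assumptions.
import numpy as np

n = 4                                  # number of agents
rng = np.random.default_rng(0)

# Local quadratic costs f_i(x) = 0.5 * a_i * (x - b_i)^2 (scalar decision variable)
a = rng.uniform(1.0, 2.0, n)
b = rng.uniform(-1.0, 1.0, n)
grad = lambda x: a * (x - b)           # element-wise local gradients

# Directed ring: agent i receives from agent i-1 (mod n)
A = np.zeros((n, n))                   # row-stochastic "pull" matrix
B = np.zeros((n, n))                   # column-stochastic "push" matrix
for i in range(n):
    A[i, i] = A[i, (i - 1) % n] = 0.5
    B[i, i] = B[(i + 1) % n, i] = 0.5

alpha = 0.05                           # constant stepsize
x = rng.standard_normal(n)             # agents' estimates
y = grad(x)                            # gradient trackers, initialized at local gradients

for _ in range(2000):
    x_new = A @ x - alpha * y          # pull: mix estimates, descend along tracker
    y = B @ y + grad(x_new) - grad(x)  # push: track the average gradient
    x = x_new

print("agents' estimates:   ", x)
print("centralized optimum: ", np.sum(a * b) / np.sum(a))
```

Because every agent transmits its raw state x[i] to its out-neighbors at each iteration, an honest-but-curious neighbor can infer information about the local cost; this is the privacy leak that motivates the state-decomposition approach in the paper.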


Cited by 1 publication (5 citation statements) · References: 28 publications
“…• When the objective function is strongly convex and the information-sharing noise has bounded variance, all agents' estimates converge to the same optimal solution at the rate O(k^{−(1−ϵ)}) in the mean-square sense, where ϵ can be arbitrarily close to 0. To the best of our knowledge, this is the first convergence rate result for reaching the optimal solution itself, and it complements the convergence rate results for reaching a neighborhood of the optimal solution [17,23,25]. We verify our theoretical results with a numerical example of a ridge regression problem.…”
Section: Introduction (mentioning)
confidence: 65%
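The citation statement above mentions a ridge regression numerical example. As a hedged illustration of how such distributed experiments are commonly set up, the sketch below splits a global ridge regression objective into per-agent local costs whose gradients sum to the global gradient; the data, the even row split, and the regularization weight are assumptions, not details taken from either paper.

```python
# Illustrative split of a ridge regression objective into per-agent costs.
# Data sizes, the regularization weight, and the even split are assumptions.
import numpy as np

rng = np.random.default_rng(1)
n_agents, m, d, lam = 4, 40, 3, 0.1

X = rng.standard_normal((m, d))
y = X @ rng.standard_normal(d) + 0.1 * rng.standard_normal(m)

# Global objective: (1/2)||X w - y||^2 + (lam/2)||w||^2.
# Split the rows so that f(w) = sum_i f_i(w) with
# f_i(w) = (1/2)||X_i w - y_i||^2 + (lam/(2*n_agents))||w||^2.
X_parts = np.array_split(X, n_agents)
y_parts = np.array_split(y, n_agents)

def local_grad(i, w):
    """Gradient of agent i's local cost f_i at w."""
    Xi, yi = X_parts[i], y_parts[i]
    return Xi.T @ (Xi @ w - yi) + (lam / n_agents) * w

# Sanity check: local gradients sum to the global gradient.
w = rng.standard_normal(d)
global_grad = X.T @ (X @ w - y) + lam * w
assert np.allclose(sum(local_grad(i, w) for i in range(n_agents)), global_grad)

print("closed-form optimum:", np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y))
```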
“…In particular, setting η = β = 1 − (5/8)ϵ and α = 1 − 0.25ϵ, the convergence rate of VRA-GT is O(1/k^{1−ϵ}), where ϵ can be arbitrarily close to zero. Moreover, Theorem 3 may complement the convergence rate results for reaching a neighborhood of the optimal solution [17,23,25].…”
Section: Convergence Analysis of VRA-GT (mentioning)
confidence: 94%