2017
DOI: 10.1016/j.automatica.2017.03.016

Differentially private average consensus: Obstructions, trade-offs, and optimal algorithm design

Abstract: This paper studies the multi-agent average consensus problem under the requirement of differential privacy of the agents' initial states against an adversary that has access to all the messages. We first establish that a differentially private consensus algorithm cannot guarantee convergence of the agents' states to the exact average in distribution, which in turn implies the same impossibility for other stronger notions of convergence. This result motivates our design of a novel differentially private Laplaci…
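As a rough illustration of the kind of mechanism the abstract refers to, the sketch below simulates average consensus in which every agent broadcasts its state corrupted by zero-mean Laplace noise whose scale is set by a privacy parameter and decays over the iterations. The graph, step size, noise schedule, and all function and variable names are assumptions made for illustration; this is not the paper's optimal algorithm design.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_average_consensus(x0, A, eps=1.0, gamma=0.1, decay=0.95, steps=200):
    """Illustrative noisy average consensus (a sketch, not the paper's algorithm).

    Each agent perturbs the value it broadcasts with zero-mean Laplace noise,
    so neighbors never see the true state; the noise scale shrinks geometrically
    so the iteration settles, but only near (not exactly at) the true average.
    """
    x = np.array(x0, dtype=float)
    n = len(x)
    L = np.diag(A.sum(axis=1)) - A        # graph Laplacian of the weight matrix A
    scale = 1.0 / eps                     # weaker privacy (larger eps) -> less noise
    for k in range(steps):
        y = x + rng.laplace(0.0, scale * decay**k, size=n)   # perturbed messages
        x = x - gamma * (L @ y)           # Laplacian consensus update on noisy messages
    return x

# Toy example: 4 agents on a ring.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
x0 = [2.0, -1.0, 5.0, 0.0]
print(noisy_average_consensus(x0, A), "exact average:", np.mean(x0))
```

The gap between the printed result and the exact average illustrates the trade-off the abstract describes: under differential privacy, convergence to the exact average cannot be guaranteed.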

Cited by 232 publications (139 citation statements)
References 30 publications
“…The proof of this lemma is given in the appendix. Lemma 4.2 (Privacy preservation for i ∈ V via a concealed β_i): Consider the modified static average consensus algorithm (3) with a set of locally chosen admissible perturbation signals {f_j, g_j}_{j=1}^N over a strongly connected and weight-balanced digraph G. Let the knowledge set of the malicious agent 1 include the form of conditions (5) and (6), and also the parameter α that the agents agreed to use. Let agent 1 be an in-neighbor of agent i ∈ V and of all the out-neighbors of agent i, i.e., agent 1 knows {y_j(t)}_{j ∈ N_i^out ∪ {i}}, t ∈ R_{≥0}.…”
Section: B. Case 2 Knowledge Set
confidence: 99%
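The lemma quoted above concerns a consensus scheme in which each agent j locally chooses two perturbation signals: f_j, injected into its own dynamics, and g_j, added to the value it transmits. The snippet below is a minimal sketch of that general template under assumed decaying sinusoidal perturbations; it is not the cited algorithm (3), and it does not enforce the admissibility conditions (5) and (6) that the cited work uses to guarantee exact convergence and privacy.

```python
import numpy as np

def perturbed_consensus(x0, W, alpha=0.2, dt=0.01, T=40.0, seed=1):
    """Sketch of perturbation-based consensus over a weight-balanced digraph.

    Each agent transmits y_j = x_j + g_j(t) instead of its true state and adds
    f_j(t) to its own dynamics; here f_j and g_j are arbitrary decaying signals
    chosen locally, standing in for the admissible perturbations of the cited work.
    """
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    n = len(x)
    L = np.diag(W.sum(axis=1)) - W                 # Laplacian; W assumed weight-balanced
    phase = rng.uniform(0.0, 2.0 * np.pi, n)       # locally chosen perturbation parameters
    amp = rng.uniform(0.5, 2.0, n)
    for k in range(int(T / dt)):
        t = k * dt
        g = amp * np.exp(-0.3 * t) * np.sin(3.0 * t + phase)   # output perturbation g_j(t)
        f = -amp * np.exp(-0.3 * t) * np.cos(3.0 * t + phase)  # state perturbation f_j(t)
        y = x + g                                  # messages the neighbors actually receive
        x = x + dt * (-alpha * (L @ y) + f)        # Euler step of the perturbed dynamics
    return x

W = np.array([[0, 1, 0], [0, 0, 1], [1, 0, 0]], dtype=float)   # directed cycle (weight-balanced)
x0 = [4.0, -2.0, 1.0]
print(perturbed_consensus(x0, W), "exact average:", np.mean(x0))
```

As the lemma's title indicates, the privacy argument in the cited work rests on keeping a parameter β_i concealed from the malicious agent, even under the stated knowledge set.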
“…This way, the reference value of the agents is guaranteed to stay private, but the algorithm does not necessarily converge to the anticipated value. Similarly, in recent years, Nozari, Tallapragada and Cortes [6] have also relied on adding zero-mean noise to protect the privacy of the agents. However, they design their noise according to a framework based on the concept of differential privacy, which was initially developed in the data science literature [7]-[10].…”
Section: Introduction
confidence: 99%
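The differential-privacy framework referenced here, imported from the data-science literature, is built around mechanisms such as the standard Laplace mechanism sketched below: a query answer is released after adding Laplace noise scaled by the query's sensitivity divided by the privacy budget eps. The sensitivity and eps values used here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def laplace_mechanism(value, sensitivity, eps):
    """Standard eps-differentially-private Laplace mechanism: release the value
    plus Laplace(sensitivity / eps) noise.  Parameter values below are illustrative."""
    return value + rng.laplace(0.0, sensitivity / eps)

# Example: privatize one agent's initial state, assuming states lie in [0, 1]
# so the sensitivity of releasing a single state is 1.
print(laplace_mechanism(0.37, sensitivity=1.0, eps=0.5))
```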
“…Several proposals [6], [8], [9] achieve differential privacy by having agents obscure their intermediate states (or values) by adding locally generated noise in a particular synchronous distributed average consensus protocol. Adding such local noise induces a loss in accuracy [9], [16], and there is an inherent trade-off between privacy and the achievable accuracy (agents are only able to compute an approximation of the exact average value). The schemes in [8], [7] iteratively cancel the noise added over time to preserve the accuracy of the average of all inputs.…”
Section: Introduction
confidence: 99%
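The noise-cancellation idea mentioned in the quote above can be illustrated with a telescoping noise sequence: each agent's cumulative injected noise sums to (approximately) zero, so the exact average is asymptotically preserved even though every individual update is perturbed. The sketch below is a generic version of this idea under assumed parameters, not the specific schemes of the cited references.

```python
import numpy as np

rng = np.random.default_rng(7)

def noise_cancelling_consensus(x0, A, gamma=0.2, phi=0.8, steps=300):
    """Generic noise-cancellation sketch: agent i injects
    w_i(k) = phi**k * v_i(k) - phi**(k-1) * v_i(k-1), so its cumulative noise
    telescopes to phi**(K-1) * v_i(K-1) -> 0 and the exact average is recovered."""
    x = np.array(x0, dtype=float)
    n = len(x)
    L = np.diag(A.sum(axis=1)) - A
    v_prev = np.zeros(n)
    for k in range(steps):
        v = rng.normal(0.0, 1.0, n)
        w = phi**k * v - (phi**(k - 1) * v_prev if k > 0 else 0.0)
        # Since 1^T L = 0, the sum of the states drifts only by the cumulative
        # injected noise, which telescopes to (almost) zero.
        x = x - gamma * (L @ x) + w
        v_prev = v
    return x

A = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)   # complete graph on 3 agents
x0 = [3.0, 0.0, -1.5]
print(noise_cancelling_consensus(x0, A), "exact average:", np.mean(x0))
```

Cancellation preserves accuracy, which, in light of the impossibility result stated in the abstract above, means such schemes cannot simultaneously provide differential privacy of the initial states.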
“…We note that some of the above solutions [6], [7], [8], [9] require synchronous execution of the agents, whereas our privacy protocol is asynchronous (see Section III). Moreover, this is the first paper, to the best of the authors' knowledge, to propose a privacy protocol for distributed average consensus on bounded real-valued inputs whose bounds are a priori known.…”
Section: Introduction
confidence: 99%