2018
DOI: 10.1007/978-3-319-96983-1_32

Robust Decentralized Mean Estimation with Limited Communication

Abstract: Mean estimation, also known as average consensus, is an important computational primitive in decentralized systems. When the average of large vectors has to be computed, as in distributed data mining applications, reducing the communication cost becomes a key design goal. One way of reducing communication cost is to add dynamic stateful encoders and decoders to traditional mean estimation protocols. In this approach, each element of a vector message is encoded in a few bits (often only one bit) and d…
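The stateful one-bit encoding idea sketched in the abstract can be illustrated with a toy delta-modulation codec. This is a generic sketch under assumed parameters (the initial step of 2.0 and the 0.9 decay are arbitrary choices), not the codec proposed in the paper: the sender transmits only a sign bit, and both endpoints apply the same state update so their estimates stay mirrored.

```python
class OneBitCodec:
    """Toy stateful 1-bit codec (illustrative sketch, not the paper's
    actual design). Encoder and decoder keep mirrored state; each
    message is a single sign bit, and both sides move their state by
    a shrinking step in that direction (delta modulation)."""

    def __init__(self, step=2.0, decay=0.9):
        self.state = 0.0
        self.step = step
        self.decay = decay

    def encode(self, value):
        # Transmit only whether the true value lies above or below
        # the receiver's current estimate.
        bit = 1 if value >= self.state else -1
        self._apply(bit)
        return bit

    def decode(self, bit):
        # The receiver replays the same update, so states stay mirrored.
        return self._apply(bit)

    def _apply(self, bit):
        self.state += bit * self.step
        self.step *= self.decay  # Monotone shrink suffices for a constant
                                 # target; a real codec adapts dynamically.
        return self.state


# One codec instance per endpoint; 1 bit per update tracks a scalar.
enc, dec = OneBitCodec(), OneBitCodec()
for _ in range(50):
    estimate = dec.decode(enc.encode(3.7))
```

Because the decoder replays exactly the updates the encoder applied, the two states stay bit-identical, which is what lets a single sign bit carry enough information over time.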


Cited by 5 publications (10 citation statements)
References 19 publications
“…As for future work, the most promising direction is the design and evaluation of more sophisticated compression techniques [5] for both federated and gossip learning. Also, in both cases, there is a lot of opportunity to optimize the communication pattern by introducing asynchrony to federated learning, or adding flow control to gossip learning [6].…”
Section: Discussion
confidence: 99%
“…Although the implementation of such a protocol is relatively simple, it is implemented differently in different systems. Some well-known examples of efficient gossiping protocols include the Push-Sum protocol [69], the Push-Flow algorithm [70], and different versions of the Push-Pull averaging protocol [71]. Furthermore, we found its application in FoBSim useful when the PoW CA is used in a multiprocessing scenario with a relatively low puzzle difficulty.…”
Section: Gossip Protocol
confidence: 90%
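For readers unfamiliar with the Push-Sum protocol named in the quote above, a minimal synchronous simulation conveys the idea (an illustrative sketch, not the implementation used in any of the cited systems): each node maintains a sum–weight pair, pushes half of it to a random peer every round, and its local ratio converges to the global mean.

```python
import random

def push_sum(values, rounds=300, seed=0):
    """Minimal synchronous simulation of the Push-Sum gossip protocol.

    Each node i holds a pair (s_i, w_i), initialized to (value_i, 1).
    Every round it keeps half of its pair and pushes the other half to
    a uniformly random node; the local estimate s_i / w_i converges to
    the global mean, and sum(s) is conserved after every round.
    """
    rng = random.Random(seed)
    n = len(values)
    s = list(map(float, values))  # running sums
    w = [1.0] * n                 # running weights
    for _ in range(rounds):
        inbox_s = [0.0] * n
        inbox_w = [0.0] * n
        for i in range(n):
            target = rng.randrange(n)  # random peer (possibly itself)
            s[i] /= 2.0                # keep half ...
            w[i] /= 2.0
            inbox_s[target] += s[i]    # ... push the other half
            inbox_w[target] += w[i]
        for i in range(n):             # deliver all messages at once
            s[i] += inbox_s[i]
            w[i] += inbox_w[i]
    return [s[i] / w[i] for i in range(n)]
```

The weight component is what keeps the estimate unbiased even though mass moves around randomly: only the ratio s/w is meaningful, while the invariant sum(s) = sum(values) holds throughout.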
“…Our compressed push-pull learning algorithm is based on a compressed push-pull averaging protocol that compresses communication using codecs [7]. The nodes periodically train their model on the local data, as well as perform distributed averaging of the models.…”
Section: Compressed Push-Pull Learning
confidence: 99%
“…The techniques used for compression and ensuring sum preservation are largely unchanged from [7] and we do not go into great detail concerning these. One difference worth mentioning, though, is that we omitted the flow compensation component, because it had a negative influence on the machine learning performance.…”
Section: Compressed Push-Pull Learning
confidence: 99%
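The sum-preservation property these excerpts refer to can be demonstrated with a simple sketch (again an assumption-laden toy with an arbitrary uniform quantizer, not the codec of [7]): pairwise push-pull averaging in which each node transmits a coarsely quantized value and folds its own quantization error back into its local state, so every exchange leaves the pairwise, and hence global, sum unchanged.

```python
import random

def quantize(value, step=0.25):
    """Stand-in for a codec: coarse uniform quantization."""
    return round(value / step) * step

def compressed_push_pull(values, rounds=2000, step=0.25, seed=0):
    """Toy sum-preserving compressed push-pull averaging (illustrative;
    the quantizer and parameters are assumptions, not those of [7]).

    A random pair exchanges quantized values and averages them; each
    node keeps its own quantization error locally, so x[i] + x[j] --
    and therefore the global sum -- is preserved by every exchange.
    """
    rng = random.Random(seed)
    x = list(map(float, values))
    for _ in range(rounds):
        i, j = rng.sample(range(len(x)), 2)
        qi, qj = quantize(x[i], step), quantize(x[j], step)
        avg = (qi + qj) / 2.0
        x[i] = avg + (x[i] - qi)  # local residual keeps mass exact
        x[j] = avg + (x[j] - qj)
    return x
```

Writing out one exchange shows why the sum is exact: the new pair sums to (qi + qj) + (x[i] - qi) + (x[j] - qj) = x[i] + x[j], so compression noise only perturbs how mass is split between the two nodes, never how much there is in total.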