2021
DOI: 10.1109/jiot.2021.3058116

Coded Stochastic ADMM for Decentralized Consensus Optimization With Edge Computing

Abstract: Big data, including applications with high security requirements, are often collected and stored on multiple heterogeneous devices, such as mobile devices, drones, and vehicles. Due to the limitations of communication costs and security requirements, it is of paramount importance to analyze information in a decentralized manner instead of aggregating data to a fusion center. To train large-scale machine learning models, edge/fog computing is often leveraged as an alternative to centralized learning. We conside…

Cited by 14 publications (10 citation statements)
References: 34 publications

“…Meanwhile, it is inefficient (sometimes even infeasible) to transmit all data to a central node for analysis. For this reason, distributed machine learning (DML), which stores and processes all or parts of the data on different nodes, has attracted significant research interest and applications [1, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16]. There are two main approaches to implementing DML: primal methods (e.g., distributed gradient descent [4, 7], federated learning [5, 6]) and primal–dual methods (e.g., the alternating direction method of multipliers (ADMM)) [16].…”
Section: Background and Motivations
confidence: 99%
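As context for the primal–dual route named in the statement above, the block below recalls the textbook global-consensus ADMM iteration (local variables x_i, global variable z, scaled duals u_i, penalty ρ). This is the standard form from the ADMM literature, not necessarily the exact variant used in this paper or in the citing works.

```latex
% Standard global-consensus ADMM (textbook form; notation is illustrative).
% Problem: minimize \sum_i f_i(x_i)  subject to  x_i = z for all i.
\begin{aligned}
x_i^{k+1} &= \arg\min_{x_i}\; f_i(x_i) + \tfrac{\rho}{2}\,\bigl\|x_i - z^{k} + u_i^{k}\bigr\|_2^2
  && \text{(local update at node } i\text{)}\\
z^{k+1}   &= \tfrac{1}{N}\sum_{i=1}^{N}\bigl(x_i^{k+1} + u_i^{k}\bigr)
  && \text{(consensus / averaging step)}\\
u_i^{k+1} &= u_i^{k} + x_i^{k+1} - z^{k+1}
  && \text{(dual update at node } i\text{)}
\end{aligned}
```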
“…For the global consensus, network coding can be used to reduce the communication load and increase reliability. In [15], we preliminarily investigated how coding (MDS codes) can be used in the local optimization (step (a)). A more detailed introduction is given as follows.…”
Section: Coding for ADMM
confidence: 99%
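As a concrete illustration of the coding idea in the statement above, the sketch below applies an MDS (Vandermonde) code to the simplest local primitive, a matrix–vector product: the result is recoverable from any k of the n worker outputs, so up to n - k stragglers can be ignored. The dimensions, the encoder, and the set of responding workers are illustrative assumptions, not the construction of [15].

```python
import numpy as np

# Minimal sketch of MDS-coded matrix-vector multiplication: the master
# recovers A @ x from ANY k of the n worker results.  All parameters
# here (n, k, Vandermonde encoder) are illustrative, not the paper's.

rng = np.random.default_rng(0)
n, k = 5, 3                       # n workers, any k results suffice
rows, dim = 30, 4                 # rows must be divisible by k here
A = rng.standard_normal((rows, dim))
x = rng.standard_normal(dim)

# 1) Split A into k row blocks and encode them with an n x k Vandermonde
#    generator (any k of its rows are invertible, i.e. the code is MDS).
blocks = np.split(A, k)
G = np.vander(np.arange(1, n + 1), k, increasing=True)
coded = [sum(G[i, j] * blocks[j] for j in range(k)) for i in range(n)]

# 2) Each worker i computes its coded partial product.
worker_out = [Ci @ x for Ci in coded]

# 3) Master decodes from the first k workers that respond (here: 0, 2, 4);
#    the two remaining workers can be arbitrarily slow or fail entirely.
fast = [0, 2, 4]
Gk_inv = np.linalg.inv(G[fast, :])
decoded_blocks = [
    sum(Gk_inv[j, m] * worker_out[fast[m]] for m in range(k))
    for j in range(k)
]
assert np.allclose(np.concatenate(decoded_blocks), A @ x)
```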
“…The global variable $z^{k+1}$ and the gradient estimate $\mu^{k+1}$ are updated at agent $i_k$ and passed as tokens to its neighbour $i_{k+1}$ along a Hamiltonian cycle. When $\{\eta_k = 0 \mid k = 1, 2, \dots\}$, the algorithm reduces to the vanilla stochastic incremental ADMM (sI-ADMM) as in [45]. Compared with sI-ADMM, asI-ADMM constructs the stochastic gradient $\mu^{k+1}$ from both $G_{i_k}(\boldsymbol{\theta}^{k}_{i_k}; \boldsymbol{\zeta}^{k}_{i_k})$ and $\mu^{k}$, whereas sI-ADMM only considers the current mini-batch gradient.…”
Section: Contributions
confidence: 99%
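A minimal sketch of the gradient-token step described above, under the assumption that $\mu^{k+1}$ is formed as a convex combination of the fresh mini-batch gradient and the previous token, $\mu^{k+1} = (1 - \eta_k)\, G_{i_k}(\theta^k; \zeta^k) + \eta_k\, \mu^k$. The quoted statement only says that both quantities are used, so this exact rule, the least-squares cost, and all names below are illustrative.

```python
import numpy as np

# ASSUMED update rule (illustrative, not taken from the paper):
#     mu_{k+1} = (1 - eta_k) * G_{i_k}(theta_k; zeta_k) + eta_k * mu_k
# With eta_k = 0 this reduces to the plain mini-batch gradient (sI-ADMM).

def minibatch_gradient(theta, data_batch):
    """Least-squares mini-batch gradient; stands in for G_{i_k}(.; zeta)."""
    X, y = data_batch
    return X.T @ (X @ theta - y) / len(y)

def asi_admm_gradient_token(mu_prev, theta, data_batch, eta):
    """Updated gradient token mu_{k+1}, passed to the next agent on the cycle."""
    g = minibatch_gradient(theta, data_batch)
    return (1.0 - eta) * g + eta * mu_prev

# Toy usage: pass the token once around a 4-agent Hamiltonian cycle.
rng = np.random.default_rng(1)
dim, batch = 3, 8
theta = np.zeros(dim)
mu = np.zeros(dim)
for agent in range(4):                     # visiting order i_0, i_1, ...
    X = rng.standard_normal((batch, dim))
    y = rng.standard_normal(batch)
    mu = asi_admm_gradient_token(mu, theta, (X, y), eta=0.5)
```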
“…Distributed optimization algorithms fall into two main classes: distributed primal algorithms [4]-[7] and distributed primal-dual algorithms [8]-[12]. In [4], the authors proposed fast distributed gradient algorithms to minimize the sum of individual cost functions.…”
Section: Introduction
confidence: 99%
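For the primal class mentioned above, the sketch below shows the basic decentralized gradient iteration that methods such as [4] refine: each agent mixes its neighbours' iterates with a doubly stochastic matrix W and takes a local gradient step, $x_i^{k+1} = \sum_j W_{ij} x_j^k - \alpha \nabla f_i(x_i^k)$. The quadratic costs, ring topology, and step size are illustrative choices, not those of [4].

```python
import numpy as np

# Decentralized gradient descent on f_i(x) = 0.5 * ||x - t_i||^2:
# mix with neighbours via a doubly stochastic W, then step along -grad f_i.

rng = np.random.default_rng(2)
num_agents, dim, alpha = 4, 2, 0.1
targets = rng.standard_normal((num_agents, dim))   # each agent's private t_i

# Doubly stochastic mixing matrix for a 4-agent ring (self + two neighbours).
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])

x = np.zeros((num_agents, dim))
for _ in range(200):
    grads = x - targets                  # grad f_i(x_i) = x_i - t_i
    x = W @ x - alpha * grads            # mix with neighbours, then step

# With a constant step size the agents reach consensus only up to an
# O(alpha) error around the average target (the minimizer of sum_i f_i).
print("spread of agent iterates:", np.ptp(x, axis=0))
print("mean target:            ", targets.mean(axis=0))
```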
“…A variant ADMM algorithm was proposed in [11], which has lower communication overhead but the same convergence rate as standard ADMM. To further reduce the communication overhead, the authors in [12] investigated coding for the stochastic incremental distributed primal-dual algorithm. However, the distributed primal-dual works [8]-[12] above all ignored the effect of wireless factors (such as transmission errors) when implementing distributed primal-dual algorithms over wireless communications.…”
Section: Introduction
confidence: 99%