2022
DOI: 10.3390/e24091284
Coding for Large-Scale Distributed Machine Learning

Abstract: This article aims to give a comprehensive and rigorous review of the principles and recent developments of coding for large-scale distributed machine learning (DML). With increasing data volumes and the pervasive deployment of sensors and computing machines, machine learning has become more distributed. Moreover, the number of computing nodes and the volume of data involved in learning tasks have increased significantly. For large-scale distributed learning systems, significant challenges have appeared in terms of delay,…
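To make the idea of coding for DML concrete, the sketch below shows one representative technique from this literature: an (n, k) MDS-coded matrix-vector multiplication, in which a master encodes the data matrix so that the results from any k of the n workers suffice to recover the full product, masking up to n - k stragglers. This is an illustrative example only, not the article's own scheme; the function names (encode_blocks, decode) and the choice of a Vandermonde generator matrix are assumptions made for this sketch.

```python
# Minimal sketch (illustrative, not from the article): (n, k) MDS-coded
# matrix-vector multiplication for straggler tolerance, assuming real-valued
# data and a Vandermonde generator. Any k of the n worker results suffice.
import numpy as np

def encode_blocks(A, n, k, nodes):
    """Split A row-wise into k blocks and form n coded blocks."""
    blocks = np.split(A, k, axis=0)               # k equal row-blocks
    G = np.vander(nodes, k, increasing=True)      # n x k Vandermonde generator
    coded = [sum(G[i, j] * blocks[j] for j in range(k)) for i in range(n)]
    return coded, G

def decode(results, indices, G):
    """Recover the k uncoded block products from any k worker results."""
    Gs = G[indices, :]                            # k x k, invertible (distinct nodes)
    Y = np.stack(results)                         # one coded product per row
    return np.linalg.solve(Gs, Y)                 # rows are the uncoded block products

# Example: n = 5 workers, tolerating up to n - k = 2 stragglers.
n, k = 5, 3
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))                   # row count divisible by k
x = rng.standard_normal(4)
nodes = np.arange(1, n + 1, dtype=float)          # distinct evaluation points

coded, G = encode_blocks(A, n, k, nodes)
worker_outputs = [C @ x for C in coded]           # each worker's coded product

# Suppose only workers 0, 2, and 4 respond (workers 1 and 3 are stragglers).
fast = [0, 2, 4]
recovered = decode([worker_outputs[i] for i in fast], fast, G)
y_hat = recovered.reshape(-1)
assert np.allclose(y_hat, A @ x)                  # full product recovered from k results
```

Because any k rows of a Vandermonde matrix with distinct nodes form an invertible matrix, the master can decode as soon as the k fastest workers return, which is the basic delay-reduction mechanism the review surveys.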

Cited by 4 publications (2 citation statements). References 38 publications.
“…The distributed models process all or part of the data at different nodes [ 3 , 4 ]. A solution in which all the data are simultaneously aggregated and stored in a single set is both inefficient and often impossible to apply [ 5 ]. Therefore, most research papers have proposed a collaborative solution without data aggregation.…”
Section: Introduction (mentioning)
confidence: 99%
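The citation statement above refers to collaborative learning without data aggregation. The sketch below illustrates one common instance of that idea, federated averaging, where each node trains on its own shard and only model parameters are exchanged; it is an illustrative example under assumed names (local_step, fed_avg), not the method of the cited papers.

```python
# Minimal sketch (illustrative): federated averaging as one "collaborative
# solution without data aggregation". Each node fits a local linear model on
# its private data; only parameters are averaged, raw data never leave a node.
import numpy as np

def local_step(w, X, y, lr=0.1, epochs=5):
    """A few local gradient steps of least-squares regression on one node's data."""
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def fed_avg(node_data, rounds=20, dim=3):
    w = np.zeros(dim)                                     # shared global model
    for _ in range(rounds):
        # each node starts from the current global model and trains locally
        local_models = [local_step(w.copy(), X, y) for X, y in node_data]
        # weight each node by its sample count when averaging parameters
        sizes = np.array([len(y) for _, y in node_data], dtype=float)
        w = np.average(local_models, axis=0, weights=sizes)
    return w

# Example: three nodes, each holding a private shard generated from one model.
rng = np.random.default_rng(1)
w_true = np.array([1.0, -2.0, 0.5])
node_data = []
for n_samples in (40, 60, 80):
    X = rng.standard_normal((n_samples, 3))
    node_data.append((X, X @ w_true + 0.01 * rng.standard_normal(n_samples)))

print(fed_avg(node_data))   # approaches w_true without pooling any raw data
```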
“…The topics touched upon include a multi-layer grant-free transmission method [ 1 ], a direct transform-coding approach that maps the delay-Doppler domain to the time domain [ 2 ], degree-of-freedom bounds for multi-antenna, multi-user, and frequency-selective interference channels with an instantaneous relay with or without coordination [ 3 ], new coded caching methods to reduce latency with user cooperation and simultaneous transmission [ 4 ], and a low-resolution downlink precoding method for multi-input single-output channels with orthogonal frequency-division multiplexing [ 5 ]. Furthermore, machine learning methods are discussed in the context of knowledge graphs for semantic communications [ 6 ] and in a review of the state-of-the-art coding methods for large-scale distributed machine learning [ 7 ]. Focusing on coding theory over rings, a new weight that extends the traditional Hamming weight used for algebraic structures is proposed and its properties are analyzed in [ 8 ].…”
(mentioning)
confidence: 99%