2020
DOI: 10.1109/tpami.2019.2906207

Learning of Gaussian Processes in Distributed and Communication Limited Systems

Abstract: It is of fundamental importance to find algorithms obtaining optimal performance for learning of statistical models in distributed and communication limited systems. Aiming at characterizing the optimal strategies, we consider learning of Gaussian Processes (GPs) in distributed systems as a pivotal example. We first address a very basic problem: how many bits are required to estimate the inner-products of Gaussian vectors across distributed machines? Using information theoretic bounds, we obtain an optimal solution […]
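To make the abstract's basic question concrete, here is a minimal numerical sketch, assuming a naive per-coordinate uniform quantizer rather than the paper's information-theoretically optimal scheme: one machine quantizes its Gaussian vector to a fixed bit budget and the other estimates the inner product from the quantized copy. The dimension, bit budget, and clipping range are illustrative choices, not values from the paper.

```python
import numpy as np

# Toy setup: machine 1 holds x, machine 2 holds y, both Gaussian.
# Machine 1 may only transmit d * bits bits; machine 2 then estimates <x, y>.
rng = np.random.default_rng(0)
d, bits = 256, 4                       # dimension and per-coordinate bit budget (assumed)
x = rng.standard_normal(d)             # vector held by machine 1
y = rng.standard_normal(d)             # vector held by machine 2

def quantize(v, bits, clip=4.0):
    """Uniformly quantize each coordinate of v to `bits` bits on [-clip, clip]."""
    levels = 2 ** bits
    step = 2 * clip / levels
    idx = np.clip(np.floor((v + clip) / step), 0, levels - 1)
    return -clip + (idx + 0.5) * step  # reconstruct at bin centers

xq = quantize(x, bits)                 # what machine 1 transmits (d * bits bits in total)
inner_true = float(x @ y)
inner_est = float(xq @ y)              # machine 2's estimate of the inner product

print(f"true inner product            : {inner_true:.3f}")
print(f"estimate from {d * bits} bits : {inner_est:.3f}")
```

The gap between the two printed values shrinks as the bit budget grows; the paper's contribution is characterizing the best possible trade-off, which this naive quantizer does not achieve.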

Cited by 16 publications (8 citation statements). References 22 publications.
“…Some basic problems in machine learning such as classification, regression, hypothesis testing, etc. in distributed fashion are studied in [3], [33], [34]. Raginsky in [33] studied the classification and regression problem in distributed settings.…”
Section: Distributed Statistical Inference
confidence: 99%
“…Many learning algorithms can be modified to run distributively at several machines to perform a learning task. There are many papers that propose distributed (parallel) versions of various learning algorithms [1], [2], [3], [4]. However, some learning algorithms could not be efficiently parallelized on distributed data.…”
Section: Introduction
confidence: 99%
“…These methods speed up the training process and are able to scale to arbitrarily large datasets. Communication budget constraint is considered in [17] by reducing the dimensionality of transmitted data to approximate the whole dataset. Sparse approximation of full GPR is used in [17] to further relieve the communication overhead. Notice that the server-client architecture requires each client being well-connected with the server, and is not robust to the failure of the server.…”
Section: Introduction
confidence: 99%
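For context on the cited approach, below is a minimal sketch of one standard sparse-GP idea (an inducing-point, subset-of-regressors approximation) in which a client transmits only small summary statistics instead of its raw data. The RBF kernel, the inducing inputs `Z`, the noise level `sigma`, and the summaries `A`, `b` are illustrative assumptions, not necessarily the exact construction in [17].

```python
import numpy as np

rng = np.random.default_rng(1)

def rbf(A, B, ell=1.0):
    """Squared-exponential kernel matrix between row-vector sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

# --- client side: local data, never transmitted in full ---------------------
n, m, sigma = 500, 20, 0.1
X = rng.uniform(-3, 3, size=(n, 1))
y = np.sin(X[:, 0]) + sigma * rng.standard_normal(n)
Z = np.linspace(-3, 3, m)[:, None]        # shared inducing inputs (assumed known to all)

Kzx = rbf(Z, X)
A = Kzx @ Kzx.T                           # m x m summary of the local data
b = Kzx @ y                               # m-dimensional summary of the local targets
# The client sends only (A, b): O(m^2) numbers instead of O(n) raw samples.

# --- server side: subset-of-regressors predictive mean ----------------------
Kzz = rbf(Z, Z)
Xs = np.linspace(-3, 3, 5)[:, None]       # test inputs
mean = rbf(Xs, Z) @ np.linalg.solve(A + sigma**2 * Kzz, b)
print(np.round(mean, 3))
```

Because the transmitted summaries scale with the number of inducing points m rather than the local dataset size n, this kind of approximation trades predictive accuracy for a fixed communication budget, which is the trade-off the citing work attributes to [17].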