2017
DOI: 10.48550/arxiv.1705.02627
Preprint

Learning of Gaussian Processes in Distributed and Communication Limited Systems

Cited by 2 publications (4 citation statements) · References 0 publications

“…Some basic problems in machine learning, such as classification, regression, and hypothesis testing, have been studied in a distributed fashion in [3], [33], [34]. Raginsky [33] studied the classification and regression problems in distributed settings.…”
Section: Distributed Statistical Inference (mentioning)
Confidence: 99%

“…Many learning algorithms can be modified to run distributively on several machines to perform a learning task. Many papers propose distributed (parallel) versions of various learning algorithms [1], [2], [3], [4]. However, some learning algorithms cannot be efficiently parallelized on distributed data.…”
Section: Introduction (mentioning)
Confidence: 99%

“…Particularly, due to the product of experts, the aggregation scheme derives a factorized marginal likelihood for efficient training; it then combines the experts' posterior distributions according to a certain aggregation criterion. In comparison to sparse approximations, the aggregation models (i) operate directly on the full training data, (ii) require no additional inducing or variational parameters, and (iii) distribute the computations over individual experts for straightforward parallelization (Tavassolipour et al., 2017), thus scaling to arbitrarily large training data. In comparison to typical local GPs (Snelson & Ghahramani, 2007; Park et al., 2011), the aggregations smooth out the ugly discontinuity by the product of posterior distributions from GP experts.…”
Section: Introduction (mentioning)
Confidence: 99%
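
As a brief illustration of the product-of-experts aggregation mentioned in the statement above (a generic sketch, not a formula quoted from the cited work): assume the training set is split across M experts, and expert k returns a Gaussian prediction with mean \mu_k(x_*) and variance \sigma_k^2(x_*) at a test input x_*. The factorized marginal likelihood used for training and the fused predictive distribution are then

    p(\mathbf{y} \mid X) \approx \prod_{k=1}^{M} p_k(\mathbf{y}^{(k)} \mid X^{(k)}),
    \sigma_*^{-2}(x_*) = \sum_{k=1}^{M} \sigma_k^{-2}(x_*),
    \mu_*(x_*) = \sigma_*^{2}(x_*) \sum_{k=1}^{M} \sigma_k^{-2}(x_*)\, \mu_k(x_*),

so each expert trains only on its own shard of the data, and the combination step needs nothing beyond the experts' local predictive means and variances, which is what makes the scheme straightforward to parallelize.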