2017
DOI: 10.1109/tie.2016.2588463

Distributed Learning of Predictive Structures From Multiple Tasks Over Networks

Cited by 16 publications (8 citation statements)
References 27 publications
Citation types: 0 supporting, 8 mentioning, 0 contrasting
Citing publications span 2017–2021
“…For the Bayesian learning problem in wireless sensor networks, references [10,13,19] systematically studied how to solve the Bayesian learning problem with the variational Bayes method in a distributed environment. For the problem of Bayesian inference and estimation over a network, reference [10] proposed a general framework of distributed variational Bayesian algorithms for conjugate exponential family models.…”
Section: Distributed Bayesian Learning
Citation type: mentioning
Confidence: 99%
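The distributed variational Bayes framework described above can be illustrated with a minimal sketch. This is not the algorithm of [10]; it assumes the simplest conjugate exponential-family case, a Gaussian mean with known noise variance, where each node fits a local conjugate posterior and the nodes then average the natural parameters of those posteriors over an illustrative ring topology (the graph, prior, and all constants below are hypothetical).

```python
import numpy as np

rng = np.random.default_rng(0)

# Communication graph: a ring of 4 nodes (adjacency list). Illustrative only.
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
n_nodes = len(neighbors)

# Each node observes noisy samples of the same unknown mean (true value 2.0).
sigma2 = 0.5                     # known observation noise variance
data = [rng.normal(2.0, np.sqrt(sigma2), size=20) for _ in range(n_nodes)]

# Local conjugate update: Gaussian prior N(0, 10) -> Gaussian posterior,
# expressed in natural parameters (eta1 = mu/var, eta2 = -1/(2*var)).
def local_natural_params(x, prior_mu=0.0, prior_var=10.0):
    post_var = 1.0 / (1.0 / prior_var + len(x) / sigma2)
    post_mu = post_var * (prior_mu / prior_var + x.sum() / sigma2)
    return np.array([post_mu / post_var, -0.5 / post_var])

eta = np.array([local_natural_params(x) for x in data])

# Consensus: repeatedly average natural parameters with the neighbors.
for _ in range(50):
    eta = np.array([
        (eta[i] + sum(eta[j] for j in neighbors[i])) / (1 + len(neighbors[i]))
        for i in range(n_nodes)
    ])

# Recover the consensus posterior mean at node 0.
var0 = -0.5 / eta[0, 1]
print("consensus posterior mean:", var0 * eta[0, 0])
```

Natural parameters are the convenient consensus target here because conjugate updates are additive in them; richer models require the structured variational updates developed in the cited works.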
“…Intelligent nodes, also termed autonomous agents, mainly comprise sensors, processors, and actuators. With the aid of algebraic graph theory, these nodes not only communicate over the network topology but also exchange information within their neighborhoods and accomplish network-wide tasks through online learning [13]. IWSN nodes operating in this distributed manner therefore possess "network awareness", from which complex intelligent network systems are constructed.…”
Section: Introduction
Citation type: mentioning
Confidence: 99%
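The combination of neighborhood information exchange and online learning mentioned in this statement is often realized with diffusion adaptation (adapt-then-combine). Below is a minimal sketch assuming every node estimates the same linear model from its own data stream; the topology, step size, and noise level are illustrative, not taken from [13].

```python
import numpy as np

rng = np.random.default_rng(1)

# Neighborhoods include the node itself, for the combine step. Illustrative.
neighbors = {0: [0, 1, 2], 1: [0, 1, 3], 2: [0, 2], 3: [1, 3]}
w_true = np.array([1.0, -2.0, 0.5])      # common model all nodes estimate
w = {i: np.zeros(3) for i in neighbors}  # local estimates
mu = 0.05                                # LMS step size

for _ in range(500):
    # Adapt: each node takes a stochastic-gradient (LMS) step on its own stream.
    psi = {}
    for i in neighbors:
        u = rng.normal(size=3)                       # regressor
        d = u @ w_true + 0.1 * rng.normal()          # noisy measurement
        psi[i] = w[i] + mu * u * (d - u @ w[i])
    # Combine: average the intermediate estimates over each neighborhood.
    w = {i: sum(psi[j] for j in nbrs) / len(nbrs)
         for i, nbrs in neighbors.items()}

print(np.round(w[0], 2))   # close to w_true at every node
```

The combine step is what gives each node its "network awareness": local estimates are fused with neighbors' estimates at every iteration, so the whole network tracks the common model online.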
“…In [22], a communication-efficient estimator based on the debiased lasso is presented. Reference [23] learns a shared predictive structure across tasks by extending [12] to a distributed setting. Following the assumptions of MTFL in [4], two communication-efficient subspace pursuit algorithms are provided in [24].…”
Section: Introduction
Citation type: mentioning
Confidence: 99%
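Communication-efficient debiased-lasso estimation of the kind attributed to [22] typically follows a one-shot pattern: each machine fits a lasso locally, applies a debiasing correction, and a center averages the corrected estimates in a single communication round. The sketch below assumes scikit-learn is available and uses the simplified correction w_hat + X^T(y - X w_hat)/n, i.e. the identity as the precision-matrix surrogate, which is only reasonable for near-isotropic designs; it is an assumption-laden illustration, not the estimator of [22].

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(2)
p, n_local, n_machines = 20, 200, 5
w_true = np.zeros(p); w_true[:3] = [2.0, -1.5, 1.0]   # sparse signal

def debiased_lasso(X, y, alpha=0.1):
    w_hat = Lasso(alpha=alpha).fit(X, y).coef_
    # One-step debiasing; uses M = I, valid for (near-)isotropic designs.
    return w_hat + X.T @ (y - X @ w_hat) / len(y)

estimates = []
for _ in range(n_machines):
    X = rng.normal(size=(n_local, p))                 # local design
    y = X @ w_true + 0.5 * rng.normal(size=n_local)   # local responses
    estimates.append(debiased_lasso(X, y))            # one communication round

w_avg = np.mean(estimates, axis=0)    # center averages debiased estimates
print(np.round(w_avg[:5], 2))
```

Debiasing matters here because averaging raw lasso estimates would average their shrinkage bias as well; correcting first lets the averaging reduce variance without compounding bias.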
“…Different from the Jacobi-type method, in the Gauss-Seidel method the variables are optimized sequentially. In [23], an optimization method that integrates the block coordinate descent (BCD) method with the inexact ADMM is utilized for distributed learning, while a BCD method for regularized multi-convex optimization is considered in [39].…”
Section: Introduction
Citation type: mentioning
Confidence: 99%
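The Jacobi versus Gauss-Seidel distinction quoted above is easiest to see on a toy two-block problem. In the sketch below (an illustrative strictly convex quadratic, not the objective of [23]), the Jacobi update computes both block minimizers from the previous iterate, while Gauss-Seidel lets the second block immediately use the freshly updated first block.

```python
# f(x, y) = 0.5*x**2 + 0.5*y**2 + 0.4*x*y - x - y  (strictly convex)
# Exact block minimizers given the other block: argmin_x = 1 - 0.4*y,
# argmin_y = 1 - 0.4*x; the optimum is x = y = 1/1.4.

def jacobi(x, y, iters):
    for _ in range(iters):
        x, y = 1 - 0.4 * y, 1 - 0.4 * x   # both blocks use the old values
    return x, y

def gauss_seidel(x, y, iters):
    for _ in range(iters):
        x = 1 - 0.4 * y                   # update x first...
        y = 1 - 0.4 * x                   # ...y immediately sees the new x
    return x, y

print("Jacobi      :", jacobi(0.0, 0.0, 10))
print("Gauss-Seidel:", gauss_seidel(0.0, 0.0, 10))
print("optimum     :", (1 / 1.4, 1 / 1.4))
```

On this example Gauss-Seidel reaches the optimum in noticeably fewer iterations, which is the usual motivation for sequential block updates; the trade-off is that Jacobi-type updates are embarrassingly parallel across blocks.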
“…In [20], a communication-efficient estimator based on the debiased lasso is presented. Reference [21] learns a shared predictive structure across tasks by extending [12] to a distributed setting. Following the assumptions of MTFL in [4], two communication-efficient subspace pursuit algorithms are provided in [22].…”
Section: Introduction
Citation type: mentioning
Confidence: 99%