2015 IEEE 25th International Workshop on Machine Learning for Signal Processing (MLSP)
DOI: 10.1109/mlsp.2015.7324358
Adaptive regularized diffusion adaptation over multitask networks

Abstract: The focus of this paper is on multitask learning over adaptive networks where different clusters of nodes have different objectives. We propose an adaptive regularized diffusion strategy using Gaussian kernel regularization to enable the agents to learn about the objectives of their neighbors and to ignore misleading information. In this way, the nodes will be able to meet their objectives more accurately and improve the performance of the network. Simulation results are provided to illustrate the performance …


Cited by 13 publications (17 citation statements). References 17 publications.
“…Algorithms 1 and 2 employ the same aggregation step in (29) and (45). Node k combines the intermediate estimates of its neighbors in the common subspace Θ without affecting the local contribution in the complementary subspace Θ ⊥ .…”
Section: B Node-specific Subspace Constraints With Norm-bounded Projmentioning
confidence: 99%
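The aggregation step quoted above can be illustrated directly: combining neighbors' intermediate estimates only inside the common subspace Θ, while leaving node k's own component in the complementary subspace Θ⊥ untouched, amounts to applying the two orthogonal projectors separately. The subspace, weights, and estimates below are synthetic assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

M = 5
# Assumed common subspace Θ: spanned by the first two canonical basis vectors.
U = np.eye(M)[:, :2]
P = U @ U.T               # orthogonal projector onto Θ
P_perp = np.eye(M) - P    # projector onto the complementary subspace Θ⊥

# Intermediate estimates at node k and two neighbors (synthetic).
psi_k = rng.standard_normal(M)
psi_neighbors = [psi_k, rng.standard_normal(M), rng.standard_normal(M)]
a = np.array([0.4, 0.3, 0.3])   # combination weights summing to one (assumed)

# Aggregate: average the neighborhood inside Θ, keep node k's own Θ⊥ part.
w_k = P @ sum(al * psi for al, psi in zip(a, psi_neighbors)) + P_perp @ psi_k

# Cooperation leaves the Θ⊥ component of node k unchanged:
print(np.allclose(P_perp @ w_k, P_perp @ psi_k))  # True
```

Because P and P_perp are orthogonal projectors with P_perp @ P = 0, the combination acts only on the Θ-components, which is exactly the property the citing authors highlight.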
“…They derive a closed-form expression of the proximal operator, and introduce a strategy that also allows each node to automatically set its inter-cluster cooperation weights. The works in [29], [30] propose alternative node clustering strategies. In a second scenario, it is assumed that there are parameters of global interest to all nodes in the network, a collection of parameters of common interest within sub-groups of nodes, and a set of parameters of local interest at each node.…”
mentioning
confidence: 99%
“…Incorporating this symmetric distribution and considering the spatial correlation of P300 in the system motivate us to form a cooperative PF to enhance the accuracy of the tracking and estimation process. Cooperative approaches are of growing interest in problems with interconnected nodes due to their robustness, enhanced performance, and simplicity [39–42]. In these approaches, each node acquires its own observation (the P300 waveform here) and processes it to track the unknown underlying parameters.…”
Section: Cooperative Particle Filteringmentioning
confidence: 99%
“…[9–17] Solving cooperative learning problems that include multiple objectives is challenging because cooperation between agents with different objectives may lead to disastrous results. Nevertheless, there are important situations where agents in the network are interested in multiple objectives that are different from each other.…”
Section: Introductionmentioning
confidence: 99%
“…Nevertheless, there are important situations where agents in the network are interested in multiple objectives that are different from each other [9–17]. Solving cooperative learning problems that include multiple objectives is challenging because cooperation between agents with different objectives may lead to disastrous results [16, 17]. One useful way to extract similarities among objectives is to formulate optimization problems based on information theoretic learning cost functions.…”
Section: Introductionmentioning
confidence: 99%