2014
DOI: 10.1109/tsp.2013.2296271

Distributed Decision-Making Over Adaptive Networks

Abstract: In distributed processing, agents generally collect data generated by the same underlying unknown model (represented by a vector of parameters) and then solve an estimation or inference task cooperatively. In this paper, we consider the situation in which the data observed by the agents may have arisen from two different models. Agents do not know beforehand which model accounts for their data and the data of their neighbors. The objective for the network is for all agents to reach agreement on which model to tr…
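To make the setting concrete, here is a minimal simulation sketch of the scenario the abstract describes. It is not the algorithm from the paper: the ring topology, Gaussian data model, step weight, and the simple rule of diffusing accumulated log-likelihood evidence are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two candidate models (scalar parameters for simplicity; hypothetical values).
W = np.array([1.0, -1.0])           # model 0 and model 1
N = 10                              # number of agents
true_model = rng.integers(0, 2, N)  # each agent's data comes from one of the two models
sigma = 0.5                         # observation noise std (assumed)

# Ring topology with self-loops; row-stochastic averaging matrix.
A = np.zeros((N, N))
for k in range(N):
    for nbr in (k - 1, k, (k + 1) % N):
        A[k, nbr] = 1.0
A /= A.sum(axis=1, keepdims=True)

# Each agent accumulates log-likelihood evidence for model 0 vs model 1,
# then diffuses (averages) that evidence with its neighbors.
belief = np.zeros(N)  # >0 favors model 0, <0 favors model 1
for _ in range(200):
    d = W[true_model] + sigma * rng.standard_normal(N)   # new observations
    local_llr = (d - W[1])**2 - (d - W[0])**2            # log-likelihood ratio (up to scale)
    belief = A @ (belief + 0.05 * local_llr)             # combine with neighbors

decision = np.where(belief > 0, 0, 1)
print("agreed model per agent:", decision)
```

Because every agent keeps averaging its evidence with its neighbors, the decisions typically become unanimous, with the winning model determined by the data of the majority of agents.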

Cited by 43 publications (37 citation statements, published 2015-2024) · References 60 publications
“…Although cluster information is in general not available beforehand, groups within each cluster are available according to Assumption 1. Using this prior information, agents can instead focus on solving the following problem based on partitioning by groups rather than by clusters: (19) with one parameter vector for each group. In the extreme case when prior clustering information is totally absent, groups will collapse into singletons and problem (19) will reduce to the individual non-cooperative case with each agent running its own stochastic-gradient algorithm to minimize its cost function.…”
Section: Proposed Algorithm and Main Results (mentioning)
confidence: 99%
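The limiting case mentioned in this excerpt, where every agent independently minimizes its own cost by a stochastic-gradient recursion, is easy to sketch. The quadratic (LMS-type) cost, step size, and data model below are assumptions for illustration; equation (19) of the cited work is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

N, M = 5, 3                      # agents, parameter dimension (assumed)
w_true = rng.standard_normal(M)  # common unknown model, for illustration
mu = 0.01                        # step size (assumed)

# Non-cooperative case: every agent runs its own LMS-type stochastic-gradient
# recursion on its local cost J_k(w) = E|d_k(i) - u_{k,i} w|^2, with no combining.
w = np.zeros((N, M))
for i in range(2000):
    u = rng.standard_normal((N, M))                # regressors at time i
    d = u @ w_true + 0.1 * rng.standard_normal(N)  # noisy measurements
    err = d - np.einsum('km,km->k', u, w)          # per-agent prediction error
    w += mu * err[:, None] * u                     # stochastic-gradient update

print("per-agent estimation error:", np.linalg.norm(w - w_true, axis=1))
```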
“…This early investigation dealt only with the case of two separate clusters in the network, with each cluster interested in one parameter vector. One useful application of this formulation in the context of biological networks was considered in [19], where each agent was assumed to collect data arising from one of two models (e.g., the location of two separate food sources). The agents did not know which model generated their observations and, yet, they needed to reach agreement about which model to follow (i.e., which food source to move towards).…”
Section: Introduction (mentioning)
confidence: 99%
“…We therefore focus on the implementation of diffusion strategies in this article. In particular, we examine networks where different clusters of agents may be interested in different objectives [10]-[15]. In this case, it is important to develop algorithms that enable the agents to continuously learn which of their neighbors belong to the same cluster and which ones are from different clusters.…”
Section: Introduction (mentioning)
confidence: 99%
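A minimal sketch of the idea in this excerpt: agents continuously infer which neighbors share their objective and combine estimates only with those. The hard distance threshold and the use of auxiliary non-cooperative estimates for the membership test are illustrative simplifications, not the specific adaptive rules from the cited works.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two clusters pursue different objectives; membership is unknown to the agents.
N, M, mu = 6, 2, 0.05
w_star = np.where(np.arange(N)[:, None] < 3,
                  np.array([1.0, 1.0]), np.array([-1.0, 1.0]))

w_loc = np.zeros((N, M))   # purely local (non-cooperative) estimates, used for clustering
w = np.zeros((N, M))       # cooperative estimates
for i in range(3000):
    u = rng.standard_normal((N, M))
    d = np.einsum('km,km->k', u, w_star) + 0.1 * rng.standard_normal(N)

    # Each agent maintains a non-cooperative LMS estimate of its own objective...
    w_loc += mu * (d - np.einsum('km,km->k', u, w_loc))[:, None] * u

    # ...and infers cluster membership by comparing these local estimates
    # (hard threshold here; an illustrative stand-in for adaptive clustering rules).
    dist = np.linalg.norm(w_loc[:, None, :] - w_loc[None, :, :], axis=2)
    trust = (dist < 0.5).astype(float)
    trust /= trust.sum(axis=1, keepdims=True)

    # Cooperative step: adapt locally, then combine only over trusted neighbors.
    psi = w + mu * (d - np.einsum('km,km->k', u, w))[:, None] * u
    w = trust @ psi

print(np.round(w, 2))  # two groups of estimates, one near each objective
```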
“…This motivates looking for in-network classification algorithms where a minimum amount of information is exchanged among single-hop neighbors. Although several methods have been proposed in recent years that deal with distributed data clustering and classification [2,3,4,5,6,7,8,9,10,11], most of them still assume the presence of a fusion center [5,7], are hardly capable of real-time operation [3], or require a set of prelabelled training data beforehand [6,11]. Several distributed adaptive strategies, such as incremental, consensus, and diffusion algorithms, have been developed in the last few years.…”
Section: Introduction (mentioning)
confidence: 99%
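Of the three strategy families named in this excerpt, the diffusion type is the one the surrounding article focuses on. Below is a minimal sketch of diffusion LMS in its common adapt-then-combine (ATC) form, with an assumed ring topology and step size; it illustrates the general mechanism rather than any specific cited algorithm.

```python
import numpy as np

rng = np.random.default_rng(3)

N, M, mu = 8, 4, 0.02
w_true = rng.standard_normal(M)  # common model shared by all agents (assumed)

# Ring topology with self-loops; uniform combination weights.
A = np.zeros((N, N))
for k in range(N):
    for nbr in (k - 1, k, (k + 1) % N):
        A[k, nbr] = 1.0 / 3.0

# Adapt-then-combine (ATC) diffusion LMS: each agent takes a local stochastic-
# gradient step, then combines the intermediate estimates of its neighbors.
w = np.zeros((N, M))
for i in range(3000):
    u = rng.standard_normal((N, M))
    d = u @ w_true + 0.1 * rng.standard_normal(N)
    psi = w + mu * (d - np.einsum('km,km->k', u, w))[:, None] * u  # adapt
    w = A @ psi                                                    # combine

print("max error across agents:", np.abs(w - w_true).max())
```

The combine step lets each agent benefit from its neighbors' data, which is what distinguishes diffusion strategies from the purely non-cooperative recursion sketched earlier.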