Algorithms and dynamics over networks often involve randomization, and randomization may result in oscillating dynamics which fail to converge in a deterministic sense. In this paper, we observe this undesired feature in three applications, in which the dynamics is the randomized asynchronous counterpart of a well-behaved synchronous one. These three applications are network localization, PageRank computation, and opinion dynamics. Motivated by their formal similarity, we show the following general fact, under the assumptions of independence across time and linearity of the updates: if the expected dynamics is stable and converges to the same limit as the original synchronous dynamics, then the oscillations are ergodic and the desired limit can be locally recovered via time-averaging.
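The general fact can be illustrated with a minimal numerical sketch. The specific matrices below are illustrative assumptions, not taken from the paper: a randomized affine iteration x(k+1) = A(k) x(k) + b, with A(k) drawn i.i.d. over time, whose trajectories oscillate forever, while the running time-average recovers the fixed point of the expected dynamics x* = (I − E[A])⁻¹ b.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy randomized affine dynamics (illustrative assumption):
# at each step, A(k) is A1 or A2 with equal probability,
# independently across time, and b is fixed.
n = 3
A1 = np.array([[0.0, 0.9, 0.0],
               [0.0, 0.0, 0.9],
               [0.9, 0.0, 0.0]])
A2 = np.zeros((n, n))
b = np.ones(n)

x = np.zeros(n)
running_sum = np.zeros(n)
T = 200_000
for k in range(T):
    A = A1 if rng.random() < 0.5 else A2   # i.i.d. choice of update
    x = A @ x + b                          # state keeps oscillating
    running_sum += x
time_avg = running_sum / T                 # ergodic time-average

# Limit of the expected (synchronous) dynamics:
# x* solves x* = E[A] x* + b, with E[A] stable (spectral radius < 1).
E_A = 0.5 * (A1 + A2)
x_star = np.linalg.solve(np.eye(n) - E_A, b)
```

Individual iterates x(k) never settle, but `time_avg` approaches `x_star`, matching the ergodicity statement above under the stated independence and linearity assumptions.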
In this paper we study a novel model of opinion dynamics in social networks, which has two main features. First, agents asynchronously interact in pairs, and these pairs are chosen according to a random process. We refer to this communication model as "gossiping". Second, agents are not completely open-minded, but instead take into account their initial opinions, which may be thought of as their "prejudices". In the literature, such agents are often called "stubborn". We show that the opinions of the agents fail to converge, but persistently undergo ergodic oscillations, which asymptotically concentrate around a mean distribution of opinions. This mean value is exactly the limit of the synchronous dynamics of the expected opinions.
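A hedged sketch of such a gossip model with stubborn agents follows; the specific update rule (pair average mixed with the agent's prejudice, weight `alpha`, complete interaction graph) is an illustrative assumption, not the paper's exact protocol. The opinions keep oscillating, while their time-average concentrates on the limit of the synchronous dynamics of the expected opinions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative gossip update (assumption): agent i contacts a random j
# and mixes the pair average with its prejudice u_i (initial opinion):
#   x_i <- alpha * u_i + (1 - alpha) * (x_i + x_j) / 2
n, alpha, T = 5, 0.3, 300_000
u = rng.uniform(-1, 1, n)        # prejudices = initial opinions
x = u.copy()

avg = np.zeros(n)
for k in range(T):
    i, j = rng.choice(n, size=2, replace=False)
    x[i] = alpha * u[i] + (1 - alpha) * 0.5 * (x[i] + x[j])
    avg += x
avg /= T                         # ergodic time-average of opinions

# Synchronous dynamics of the expected opinions:
# E[x(k+1)] = EA @ E[x(k)] + EB @ u, averaging over pair choices.
EA = np.zeros((n, n)); EB = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        if i == j:
            continue
        A = np.eye(n); B = np.zeros((n, n))
        A[i, i] = (1 - alpha) / 2; A[i, j] = (1 - alpha) / 2
        B[i, i] = alpha
        EA += A; EB += B
EA /= n * (n - 1); EB /= n * (n - 1)
x_star = np.linalg.solve(np.eye(n) - EA, EB @ u)
```

Because `alpha > 0`, the expected update matrix `EA` is substochastic and stable, so `x_star` is well defined and `avg` concentrates around it even though the individual opinions `x` never converge.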
The ℓ0/ℓ1-regularized least-squares approach is used to deal with linear inverse problems under sparsity constraints, which arise in many mathematical and engineering fields. In particular, multiagent models have recently emerged in this context to describe diverse kinds of networked systems, ranging from medical databases to wireless sensor networks. In this paper, we study methods for solving ℓ0/ℓ1-regularized least-squares problems in such multiagent systems. We propose a novel class of distributed protocols based on iterative thresholding and input-driven consensus techniques, which are well suited to work in-network when communication to a central processing unit is not allowed. Estimation is performed by the agents themselves, which typically consist of devices with limited computational capabilities. This motivates us to develop low-complexity and low-memory algorithms that are feasible in real applications. Our main result is a rigorous proof of the convergence of these methods in regular networks. We introduce a suitable distributed, regularized least-squares functional, and we prove that our algorithms reach its minima using results from dynamical systems theory. Furthermore, we propose numerical comparisons with the alternating direction method of multipliers and distributed subgradient methods, in terms of performance, complexity, and memory usage. We conclude that our techniques are preferable for their good memory-accuracy tradeoff.

Index Terms—Distributed optimization, input-driven consensus algorithms, multi-agent systems, regularized linear inverse problems, sparse estimation.
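The thresholding building block can be sketched in its centralized form; the distributed protocols of the paper interleave such steps with consensus rounds, which this minimal sketch omits. The problem sizes, regularization weight, and step size below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def soft(z, t):
    """Soft-thresholding: the proximal map of t * ||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

# Centralized iterative soft thresholding for the ℓ1-regularized
# least-squares problem (sketch; sizes and lam are assumptions):
#   min_x  0.5 * ||y - A x||^2 + lam * ||x||_1
m, n = 40, 100
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
support = rng.choice(n, 5, replace=False)
x_true[support] = rng.choice([-1.0, 1.0], 5)      # sparse ground truth
y = A @ x_true + 0.01 * rng.standard_normal(m)

lam = 0.05
step = 1.0 / np.linalg.norm(A, 2) ** 2            # 1/L, L = Lipschitz const.
x = np.zeros(n)
for _ in range(2000):
    # gradient step on the quadratic term, then thresholding
    x = soft(x + step * A.T @ (y - A @ x), step * lam)
```

With step size at most 1/L the objective decreases monotonically, and the iterate recovers the sparse vector up to the usual shrinkage bias; the in-network variants replace the global matrix-vector products with local computations and input-driven consensus.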
In this paper, we address the problem of simultaneous classification and estimation of hidden parameters in a sensor network with communication constraints. In particular, we consider a network of noisy sensors which measure a common scalar unknown parameter. We assume that a fraction of the nodes represent faulty sensors, whose measurements are poorly reliable. The goal of each node is to simultaneously identify its class (faulty or non-faulty) and estimate the common parameter. We propose a novel cooperative iterative algorithm which copes with the communication constraints imposed by the network and shows remarkable performance. Our main result is a rigorous proof of the convergence of the algorithm and a characterization of its limit behavior. We also show that, in the limit when the number of sensors goes to infinity, the common unknown parameter is estimated with arbitrarily small error, while the classification error converges to that of the optimal centralized maximum-likelihood estimator. We also present numerical results that validate the theoretical analysis and support its possible generalization. We compare our strategy with the Expectation-Maximization algorithm and discuss trade-offs in terms of robustness, speed of convergence, and implementation simplicity.
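The flavor of the joint task can be conveyed with a simplified, centralized caricature; the fault model (a fixed offset `delta` on faulty readings), the thresholding rule, and all parameter values are assumptions for illustration, not the paper's distributed algorithm. It alternates an estimation step (average over sensors currently labeled non-faulty) with a classification step (relabel by residual).

```python
import numpy as np

rng = np.random.default_rng(3)

# Assumed fault model: faulty sensors add a fixed offset delta.
N, theta, delta, sigma = 2000, 2.0, 4.0, 0.5
faulty = rng.random(N) < 0.2                       # ~20% faulty nodes
y = theta + delta * faulty + sigma * rng.standard_normal(N)

good = np.ones(N, dtype=bool)                      # start: trust everyone
for _ in range(20):
    theta_hat = y[good].mean()                     # estimation step
    good = np.abs(y - theta_hat) < delta / 2       # classification step

err = abs(theta_hat - theta)                       # estimation error
misclass = np.mean(good == faulty)                 # fraction mislabeled
```

After a few alternations the faulty readings are excluded from the average, so the parameter estimate and the labels stabilize jointly; the paper's algorithm achieves this cooperatively, with each node exchanging information only over the network.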