We investigate how very large populations are able to reach a global consensus, starting from local "microscopic" interaction rules, in the framework of a recently introduced class of models of semiotic dynamics, the so-called Naming Game. In particular, we compare the convergence mechanism for interacting agents embedded in a low-dimensional lattice with the mean-field case. We highlight that in low dimensions consensus is reached through a coarsening process which requires less cognitive effort from the agents than the mean-field case does, but takes longer to complete. In one dimension, the dynamics of the boundaries is mapped onto a truncated Markov process from which we analytically compute the diffusion coefficient. More generally, we show that for a population of N agents the convergence process requires a memory per agent scaling as N and lasts a time scaling as N^{1+2/d} in dimension d ≤ 4 (the upper critical dimension), while in mean field both memory and time scale as N^{3/2}. We present analytical and numerical evidence supporting this picture.

The past decade has seen an important development of the so-called Semiotic Dynamics, a new field which studies how conventions (or semiotic relations) originate, spread and evolve over time in populations. This has occurred mainly through the definition of language interaction games [1,2] in which a population of agents is seen as a complex adaptive system which self-organizes [3] as a result of simple local interactions (games). The interest of physicists in Language Games comes from the fact that they can easily be formulated as non-equilibrium statistical mechanics models of interacting agents: at each time step, an agent updates its state (among a certain set of possible states) through an interaction with its neighbors. An interesting question concerns the possibility of convergence towards a common state for all agents, emerging without external global coordination and from purely local interaction rules [4,5,6]. In this Letter, we focus on the so-called Naming Game, introduced to describe the emergence of conventions and shared lexicons in a population of individuals interacting with each other through negotiation rules, and study how the embedding of the agents on a low-dimensional lattice influences the emergence of consensus, which we show to be reached through a coarsening process. The original model [7] was inspired by a well-known artificial intelligence experiment called Talking Heads [8], in which embodied software agents develop their vocabulary by observing objects through digital cameras, assigning them randomly chosen names and negotiating these names with other agents. Recently a new minimal version of the Naming Game with simplified interaction rules [9] has been introduced, which reproduces the phenomenology of the experiments and is amenable to analytical treatment. In this model, N individuals (or agents) observe the same object and try to communicate its name to one another.
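As a concrete illustration of the minimal interaction rules just described, here is a short Python sketch of one possible implementation. The mean-field pairing, the encoding of names as integers, and the periodic consensus check are illustrative choices, not part of the original model specification:

```python
import random

def naming_game(N=100, max_steps=10**6, rng=random.Random(0)):
    """Minimal Naming Game sketch: agents negotiate a name for one object."""
    inventories = [set() for _ in range(N)]  # each agent's word inventory
    next_word = 0                            # fresh integer names on invention
    for step in range(max_steps):
        speaker, hearer = rng.sample(range(N), 2)
        if not inventories[speaker]:         # empty inventory: invent a name
            inventories[speaker].add(next_word)
            next_word += 1
        word = rng.choice(tuple(inventories[speaker]))
        if word in inventories[hearer]:      # success: both collapse to the word
            inventories[speaker] = {word}
            inventories[hearer] = {word}
        else:                                # failure: hearer learns the word
            inventories[hearer].add(word)
        # consensus: every agent holds the same single word
        if all(len(inv) == 1 for inv in inventories) \
                and len(set().union(*inventories)) == 1:
            return step + 1
    return None
```

A success collapses both inventories to the spoken word, while a failure enlarges the hearer's inventory; the growth of these inventories is the source of the memory cost discussed above.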
The Naming Game is a model of non-equilibrium dynamics for the self-organized emergence of a linguistic convention or a communication system in a population of agents with pairwise local interactions. We present an extensive study of its dynamics on complex networks, which can be considered the most natural topological embedding for agents involved in language games and opinion dynamics. Except for some community-structured networks on which metastable phases can be observed, agents playing the Naming Game always manage to reach a global consensus. This convergence is obtained after a time generically scaling with the population size N as t_conv ∼ N^{1.4±0.1}, i.e. much faster than for agents embedded on regular lattices. Moreover, the memory capacity required by the system scales only linearly with its size. Particular attention is given to heterogeneous networks, in which the dynamical activity pattern of a node depends on its degree. High-degree nodes play a fundamental role, but require larger memory capacity: they govern the dynamics, acting as spreaders of (linguistic) conventions. The effects of other properties, such as the average degree and the clustering, are also discussed.
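A sketch of the same minimal rules on an arbitrary graph, which could be used to estimate the reported scaling of t_conv by averaging over many runs. The Erdős–Rényi test graph, the restriction to its giant component, and all parameter values below are illustrative assumptions:

```python
import random
import networkx as nx

def naming_game_on_graph(G, rng=random.Random(0)):
    """Minimal Naming Game on graph G; returns the convergence time."""
    inv = {v: set() for v in G}
    new_word, t = 0, 0
    nodes = list(G)
    while True:
        t += 1
        speaker = rng.choice(nodes)
        hearer = rng.choice(list(G[speaker]))   # a random neighbor
        if not inv[speaker]:
            inv[speaker].add(new_word)
            new_word += 1
        word = rng.choice(tuple(inv[speaker]))
        if word in inv[hearer]:                 # success: collapse both
            inv[speaker] = {word}
            inv[hearer] = {word}
        else:                                   # failure: hearer learns it
            inv[hearer].add(word)
        if t % len(nodes) == 0:                 # periodic consensus check
            words = set().union(*inv.values())
            if len(words) == 1 and all(len(s) == 1 for s in inv.values()):
                return t

# rough scaling experiment on Erdős–Rényi giant components:
for N in (100, 200, 400):
    G = nx.erdos_renyi_graph(N, 10 / N, seed=1)
    G = G.subgraph(max(nx.connected_components(G), key=len)).copy()
    print(N, naming_game_on_graph(G))
```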
We study the network dismantling problem, which consists of determining a minimal set of vertices whose removal leaves the network broken into connected components of subextensive size. For a large class of random graphs, this problem is tightly connected to the decycling problem (the removal of vertices leaving the graph acyclic). Exploiting this connection and recent works on epidemic spreading, we present precise predictions for the minimal size of a dismantling set in a large random graph with a prescribed (light-tailed) degree distribution. Building on the statistical mechanics perspective, we propose a three-stage Min-Sum algorithm for efficiently dismantling networks, including heavy-tailed ones for which the dismantling and decycling problems are not equivalent. We also provide additional insights into the dismantling problem, concluding that it is an intrinsically collective problem and that optimal dismantling sets cannot be viewed as a collection of individually well-performing nodes.

graph fragmentation | message passing | percolation | random graphs | influence maximization

A network (a graph G in the language of discrete mathematics) is a set V of N entities called nodes (or vertices), along with a set E of edges connecting some pairs of nodes. In a simplified way, networks are used to describe numerous systems in very diverse fields, ranging from social sciences to information technology or biological systems (reviews are in refs. 1 and 2). Several crucial questions in the context of network studies concern the modifications of the properties of a graph when a subset S of its nodes is selected and treated in a specific way. For instance, how much does the size of the largest connected component of the graph decrease if the vertices in S (along with their adjacent edges) are removed? Do the cycles survive this removal? What is the outcome of epidemic spreading if the vertices in S are initially contaminated, constituting the seed of the epidemic? Conversely, what is the influence of a vaccination of the nodes in S, preventing them from transmitting the epidemic? It is relatively easy to answer these questions when the set S is chosen randomly, with each vertex selected independently with some probability. Classical percolation theory is nothing but the study of the connected components of a graph in which some vertices have been removed in this way. A much more interesting case is when the set S can be chosen in some optimal way. Indeed, in all applications sketched above, it is reasonable to assign some cost to the inclusion of a vertex in S: vaccination has a socioeconomic price, incentives must be paid to customers to convince them to adopt a new product in a viral marketing campaign, and incapacitating a computer during a cyber attack requires resources. Thus, one faces a combinatorial optimization problem: the minimization of the cost of S under a constraint on its effect on the graph. These problems thus exhibit both static and dynamic features, the former referring to the combinatori...
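The three-stage Min-Sum algorithm itself is involved; as a simple point of comparison (explicitly not the authors' method), the following sketch implements an adaptive highest-degree baseline that removes the currently highest-degree vertex until the largest component falls below a target fraction. The graph model and parameters are illustrative:

```python
import networkx as nx

def greedy_dismantle(G, target_fraction=0.01):
    """Adaptive highest-degree dismantling: a crude baseline, not Min-Sum."""
    G = G.copy()
    N = G.number_of_nodes()
    removed = []
    def giant_size(H):
        return max((len(c) for c in nx.connected_components(H)), default=0)
    while giant_size(G) > target_fraction * N:
        v = max(G.degree, key=lambda kv: kv[1])[0]  # current top-degree node
        G.remove_node(v)
        removed.append(v)
    return removed

G = nx.erdos_renyi_graph(2000, 3.5 / 2000, seed=0)
S = greedy_dismantle(G)
print(f"removed {len(S)} of {G.number_of_nodes()} nodes")
```

Comparing the size of such greedily obtained sets with the statistical-mechanics predictions is one way to appreciate the collective nature of optimal dismantling.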
We consider the k-core decomposition of network models and Internet graphs at the autonomous system (AS) level. The k-core analysis allows one to characterize networks beyond the degree distribution and to uncover structural properties and hierarchies due to the specific architecture of the system. We compare the k-core structure obtained for AS graphs with those of several network models and discuss the differences and similarities with the real Internet architecture. The presence of biases and the incompleteness of the real maps are discussed, and their effect on the k-core analysis is assessed through numerical experiments simulating biased exploration on a wide range of network models. We find that the k-core analysis provides an interesting characterization of the fluctuations and incompleteness of maps, as well as information that helps to discriminate the original underlying structure.

2000 Mathematics Subject Classification: 68R10, 05C90, 68M07.
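For reference, the k-core decomposition is computed by the standard peeling construction: at level k, repeatedly strip all nodes of current degree less than k; a node removed during level k has core number k-1. A minimal Python sketch of this procedure:

```python
import networkx as nx

def core_numbers(G):
    """k-core peeling: returns the core number of every node of G."""
    G = G.copy()
    core = {}
    k = 0
    while G.number_of_nodes() > 0:
        k += 1
        while True:
            shell = [v for v, d in G.degree() if d < k]  # peelable at level k
            if not shell:
                break
            for v in shell:
                core[v] = k - 1   # peeled at level k => core number k-1
                G.remove_node(v)
    return core

# networkx ships the same computation as nx.core_number(G)
```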
We study several Bayesian inference problems for irreversible stochastic epidemic models on networks from a statistical physics viewpoint. We derive equations that allow us to accurately compute the posterior distribution of the time evolution of the state of each node given some observations. Unlike most existing methods, we allow very general observation models, including unobserved nodes, state observations made at different or unknown times, and observations of infection times, possibly mixed together. Our method, based on the Belief Propagation algorithm, is efficient, naturally distributed, and exact on trees. As a particular case, we consider the problem of finding the "zero patient" of an SIR or SI epidemic given a snapshot of the state of the network at a later, unknown time. Numerical simulations show that our method outperforms previous ones on both synthetic and real networks, often by a very large margin.
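The Belief Propagation equations are too lengthy to reproduce here. As a naive simulation-based baseline for the "zero patient" problem (explicitly not the paper's BP method, and likely much weaker), one can score each observed-infected node by the average overlap between outbreaks simulated from it and the observed snapshot. All parameters below (beta, T, samples) are illustrative assumptions:

```python
import random

def simulate_si(G, seed, beta, T, rng):
    """Discrete-time SI dynamics: each infected node infects each susceptible
    neighbor with probability beta per step; returns infected set after T steps."""
    infected = {seed}
    for _ in range(T):
        new = set()
        for u in infected:
            for v in G[u]:
                if v not in infected and rng.random() < beta:
                    new.add(v)
        infected |= new
    return infected

def rank_patient_zero(G, snapshot, beta=0.3, T=5, samples=200, seed=0):
    """Naive Monte Carlo baseline: score each observed-infected node by the
    mean Jaccard overlap of simulated outbreaks with the snapshot."""
    rng = random.Random(seed)
    scores = {}
    for cand in snapshot:
        sims = [simulate_si(G, cand, beta, T, rng) for _ in range(samples)]
        scores[cand] = sum(len(s & snapshot) / len(s | snapshot)
                           for s in sims) / samples
    return sorted(scores.items(), key=lambda kv: -kv[1])
```

Here `G` is any networkx-style adjacency mapping and `snapshot` a set of infected nodes; such brute-force scoring scales poorly, which is precisely the gap the distributed BP approach addresses.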
The problem of targeted network immunization can be defined as that of finding a subset of nodes in a network to immunize or vaccinate in order to minimize a tradeoff between the cost of vaccination and the final (stationary) expected infection under a given epidemic model. Although computing the expected infection is a hard computational problem, simple and efficient mean-field approximations have been put forward in the literature in recent years. The optimization problem can be recast into a constrained one in which the constraints enforce local mean-field equations describing the average stationary state of the epidemic process. For a wide class of epidemic models, including the susceptible-infected-removed and the susceptible-infected-susceptible models, we define a message-passing approach to network immunization that allows us to study the statistical properties of epidemic outbreaks in the presence of immunized nodes, as well as to find (nearly) optimal immunization sets for a given choice of parameters and costs. The algorithm scales linearly with the size of the graph, and it can be made efficient even on large networks. We compare its performance with topologically based heuristics, greedy methods, and simulated annealing on both random graphs and real-world networks.
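In the spirit of the greedy methods the abstract mentions as comparison points (and not the message-passing approach itself), the following sketch immunizes, one node at a time, whichever node most reduces a Monte Carlo estimate of the expected SIR outbreak size. Transmission probability, trial count, and budget are illustrative:

```python
import random

def expected_outbreak(G, immunized, beta=0.2, trials=100, rng=random.Random(0)):
    """Mean final size of an SIR-like cascade (transmission prob. beta per
    edge) from a random non-immunized seed; immunized nodes block spread."""
    nodes = [v for v in G if v not in immunized]
    total = 0
    for _ in range(trials):
        seed = rng.choice(nodes)
        infected, frontier = {seed}, [seed]
        while frontier:
            u = frontier.pop()
            for v in G[u]:
                if v not in infected and v not in immunized \
                        and rng.random() < beta:
                    infected.add(v)
                    frontier.append(v)
        total += len(infected)
    return total / trials

def greedy_immunize(G, budget, **kw):
    """Greedy baseline: repeatedly immunize the node whose immunization most
    reduces the estimated expected outbreak size."""
    immunized = set()
    for _ in range(budget):
        best = min((v for v in G if v not in immunized),
                   key=lambda v: expected_outbreak(G, immunized | {v}, **kw))
        immunized.add(best)
    return immunized
```

Each greedy step costs a full sweep of Monte Carlo estimates, which illustrates why a linear-scaling message-passing alternative is attractive on large graphs.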
We introduce a model of negotiation dynamics aimed at mimicking the mechanisms leading to opinion and convention formation in a population of individuals. The negotiation process, as opposed to "herding-like" or "bounded-confidence" driven processes, is based on a microscopic dynamics in which memory and feedback play a central role. Our model displays a nonequilibrium phase transition from an absorbing state, in which all agents reach a consensus, to an active stationary state characterized either by polarization or by fragmentation into clusters of agents with different opinions. We show the existence of at least two different universality classes, one for the case with two possible opinions and one for the case with an unlimited number of opinions. The phase transition is studied analytically and numerically for various topologies of the agents' interaction network. In both cases the universality classes do not seem to depend on the specific interaction topology, the only relevant feature being the total number of different opinions ever present in the system.
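A minimal sketch of the two-opinion case, under the assumption that the negotiation rule works as in the Naming Game except that a successful interaction makes both agents collapse to the spoken opinion only with probability beta (beta = 1 recovering the plain Naming Game); the exact published update rule may differ in details:

```python
import random

def negotiation(N=200, beta=0.5, max_steps=10**6, rng=random.Random(0)):
    """Two-opinion negotiation dynamics: each agent holds 'A', 'B' or 'AB'.
    Returns (consensus opinion, time) or (None, max_steps) if still active."""
    state = ['A'] * (N // 2) + ['B'] * (N - N // 2)
    for t in range(max_steps):
        s, h = rng.sample(range(N), 2)
        word = rng.choice(state[s])        # random opinion from speaker's set
        if word in state[h]:               # success
            if rng.random() < beta:        # collapse only with prob. beta
                state[s] = state[h] = word
                if state.count(word) == N: # absorbing consensus state
                    return word, t + 1
        elif state[h] != 'AB':             # failure: hearer stores both
            state[h] = 'AB'
    return None, max_steps                 # active: polarized/fragmented
```

Sweeping beta and recording how often consensus is reached would expose the absorbing-to-active transition described above.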
In real networks, complex topological features are often associated with a diversity of interactions as measured by the weights of the links. Moreover, spatial constraints may also play an important role, resulting in a complex interplay between topology, weight, and geography. To study the vulnerability of such networks to intentional attacks, these attributes must therefore be considered along with the topological quantities. To tackle this issue, we consider the case of the worldwide airport network, a weighted heterogeneous network whose evolution and structure are influenced by traffic and geographical constraints. We first characterize relevant topological and weighted centrality measures and then use these quantities as selection criteria for the removal of vertices. We consider different attack strategies and different measures of the damage achieved in the network. The analysis of weighted properties shows that centrality-driven attacks are capable of shattering the network's communication or transport properties even at very low levels of damage in the connectivity pattern. The inclusion of weight and traffic thus provides evidence for the extreme vulnerability of complex networks to any targeted strategy, and these attributes need to be considered as key features in the design and development of defensive strategies.
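A sketch of this kind of attack experiment on a toy weighted network: nodes are removed in order of a chosen centrality, and the damage is measured both topologically (relative giant-component size) and in terms of surviving traffic (fraction of total edge weight). The Barabási–Albert substrate and the synthetic weights are illustrative stand-ins for real airport data:

```python
import networkx as nx

def attack(G, centrality, fraction=0.05):
    """Remove the top `fraction` of nodes by `centrality`; return
    (relative giant-component size, fraction of edge weight surviving)."""
    H = G.copy()
    ranked = sorted(centrality(G).items(), key=lambda kv: -kv[1])
    for v, _ in ranked[:int(fraction * G.number_of_nodes())]:
        H.remove_node(v)
    giant = max(nx.connected_components(H), key=len) if H else set()
    return (len(giant) / G.number_of_nodes(),
            H.size(weight='weight') / G.size(weight='weight'))

# toy weighted network: degree-correlated "traffic" on a scale-free graph
G = nx.barabasi_albert_graph(500, 3, seed=0)
for u, v in G.edges():
    G[u][v]['weight'] = 1.0 + G.degree(u) * G.degree(v)

print('degree attack:  ', attack(G, nx.degree_centrality))
print('strength attack:', attack(G, lambda g: dict(g.degree(weight='weight'))))
```

Typically the traffic measure collapses well before the giant component does, which is the signature of vulnerability the abstract describes.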