We explore a new mechanism to explain polarization phenomena in opinion dynamics in which agents evaluate alternative views on the basis of the social feedback obtained on expressing them. High support for the favored opinion in the social environment is treated as positive feedback which reinforces the value associated with this opinion. In connected networks of sufficiently high modularity, different groups of agents can form strong convictions of competing opinions. Linking the social feedback process to standard equilibrium concepts, we analytically characterize sufficient conditions for the stability of bipolarization. While previous models have emphasized the polarization effects of deliberative, argument-based communication, our model highlights an affective, experience-based route to polarization, without assumptions about negative influence or bounded confidence.
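The feedback mechanism described above can be illustrated with a minimal sketch: each agent holds a value for each of two opinions, expresses the higher-valued one, and moves that value toward the feedback (+1 for agreement, -1 for disagreement) from a randomly chosen neighbor. All names, parameters, and the network construction here are hypothetical illustrations, not the paper's actual model specification.

```python
import random

def two_communities(m):
    """Two fully connected communities of size m joined by one bridge edge
    (a toy high-modularity network; construction chosen for illustration)."""
    adj = [[j for j in range(m) if j != i] for i in range(m)]
    adj += [[j for j in range(m, 2 * m) if j != i] for i in range(m, 2 * m)]
    adj[0].append(m)   # single bridge between the two communities
    adj[m].append(0)
    return adj

def run_feedback_model(adjacency, q_init, n_steps=20000, alpha=0.1, seed=0):
    """Opinion values reinforced through social feedback (sketch).

    Each agent i holds values q[i][o] for opinions o in {0, 1}, expresses
    the higher-valued opinion, and receives feedback +1 if a randomly
    chosen neighbor expresses the same opinion, -1 otherwise.
    """
    rng = random.Random(seed)
    q = [list(row) for row in q_init]          # copy initial values
    expressed = lambda i: 0 if q[i][0] >= q[i][1] else 1
    for _ in range(n_steps):
        i = rng.randrange(len(adjacency))      # agent who speaks
        o = expressed(i)
        j = rng.choice(adjacency[i])           # neighbor who responds
        feedback = 1.0 if expressed(j) == o else -1.0
        q[i][o] += alpha * (feedback - q[i][o])   # move value toward feedback
    return [expressed(i) for i in range(len(adjacency))]
```

Starting from slightly biased values (one community leaning toward each opinion), the dense within-community feedback reinforces each group's conviction while the single bridge is too weak to overturn it, so the bipolarized state persists.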
Abstract. This paper introduces a Markov chain approach that allows a rigorous analysis of agent-based opinion dynamics as well as other related agent-based models (ABMs). By viewing the ABM dynamics as a micro description of the process, we show how the corresponding macro description is obtained by a projection construction. Then, well-known conditions for lumpability make it possible to establish the cases where the macro model is still Markov. In this case we obtain a complete picture of the dynamics including the transient stage, the most interesting phase in applications. For this purpose a crucial role is played by the type of probability distribution used to implement the stochastic part of the model, which defines the updating rule and governs the dynamics. In addition, we show how restrictions in communication leading to the co-existence of different opinions follow from the emergence of new absorbing states. We describe our analysis in detail with some specific models of opinion dynamics. Generalizations concerning different opinion representations as well as opinion models with other interaction mechanisms are also discussed. We find that our method may be an attractive alternative to mean-field approaches and that it provides new perspectives on the modeling of opinion exchange dynamics, and more generally of other ABMs.
Acknowledgements

Science needs freedom. It needs free thought, it needs free time, it needs free talk. Philippe Blanchard is among those teachers who are deeply committed to such an understanding of science. I am very proud and grateful to be among his students.

I am also very grateful to Ricardo Lima. He is probably the person who engaged most in the details of this project and should really be an honorary member of the reading committee. Thank you for always inspiring discussions and the critical reading of all parts of this work.

Tanya Araújo has given me encouragement since the first day we met. She also read through all the thesis, and her advice (especially during the last turbulent months) helped a lot to finalize the writing.

I was once told that those people who are most busy (those that really are and do not just pretend to be) are also the people who always have time when you approach them with some question or meet them on the floor. I don't know whether this is a general rule, but it certainly applies to Dima Volchenkov. Thank you for an open ear whenever I knocked on your door.

A special thanks goes to Hanne Litschewsky, our great secretary in E5. Due to Hanne, I could experience how comfortable, how helpful, yet sometimes essential it is to be supported in all the bureaucratic aspects of science.

All of this would have been a lot more difficult without the unconditional support of my family. I am very thankful to my parents, my parents-in-law, and especially to my wife Nannette.

Financial support of the German Federal Ministry of Education and Research (BMBF) through the project Linguistic Networks is also gratefully acknowledged (http://project.linguistic-networks.net).

Abstract

This thesis introduces a Markov chain approach that allows a rigorous analysis of a class of agent-based models (ABMs).
It provides a general framework of aggregation in agent-based and related computational models by making use of Markov chain aggregation and lumpability theory in order to link the micro and the macro level of observation. The starting point is a microscopic Markov chain description of the dynamical process in complete correspondence with the dynamical behavior of the agent model, which is obtained by considering the set of all possible agent configurations as the state space of a huge Markov chain. This is referred to as the micro chain, and an explicit formal representation including microscopic transition rates can be derived for a class of models by using the random mapping representation of a Markov process. The explicit micro formulation enables the application of the theory of Markov chain aggregation, namely lumpability, in order to reduce the state space of the micro chain and relate microscopic descriptions to a macroscopic formulation of interest. Well-known conditions for lumpability make it possible to establish the cases where the macro model is still Markov, and in this case we obtain a complete picture of the dynamics including the transient stage, the most interesting phase in applications. ...
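The strong lumpability condition referred to above requires that, for any two micro states in the same macro block, the aggregated transition probabilities into each block coincide. A minimal sketch of this check, with a two-agent voter-model-like micro chain lumped by the number of agents holding opinion 1 (the function name and the numerical example are illustrative assumptions, not taken from the thesis):

```python
def lump(P, partition, tol=1e-12):
    """Check strong lumpability of transition matrix P w.r.t. a partition of
    state indices; return the lumped (macro) matrix, or None if not lumpable."""
    macro = []
    for block in partition:
        # aggregated transition probabilities from each state in the block
        rows = [[sum(P[s][t] for t in b) for b in partition] for s in block]
        # all states in the block must agree on these aggregated probabilities
        for r in rows[1:]:
            if any(abs(a - b) > tol for a, b in zip(rows[0], r)):
                return None
        macro.append(rows[0])
    return macro

# Micro chain for two agents who copy each other's binary opinion:
# states 0..3 = (0,0), (0,1), (1,0), (1,1); (0,0) and (1,1) are absorbing.
P = [
    [1.0, 0.0, 0.0, 0.0],
    [0.5, 0.0, 0.0, 0.5],
    [0.5, 0.0, 0.0, 0.5],
    [0.0, 0.0, 0.0, 1.0],
]
# Project onto the number of agents holding opinion 1.
partition = [[0], [1, 2], [3]]
print(lump(P, partition))   # [[1.0, 0.0, 0.0], [0.5, 0.0, 0.5], [0.0, 0.0, 1.0]]
```

The symmetric partition by opinion count is lumpable here, while an arbitrary grouping of micro states generally is not, which is exactly the distinction between macro observables that remain Markov and those that do not.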