Material punishment has been suggested to play a key role in sustaining human cooperation. Experimental findings, however, show that inflicting mere material costs does not always increase cooperation and may even have detrimental effects. Indeed, ethnographic evidence suggests that the most typical punishing strategies in human ecologies (e.g., gossip, derision, blame and criticism) naturally combine normative information with material punishment. Using laboratory experiments with humans, we show that combining norm communication with material punishment leads to higher and more stable cooperation, at a lower cost for the group, than either used separately. We argue, and provide experimental evidence, that successful human cooperation is the outcome of the interaction between instrumental decision-making and humans' norm psychology: the cognitive machinery for detecting and reasoning about norms, characterized by a salience mechanism that tracks how prominent a norm is within a group. We test our hypothesis both in the laboratory and with an agent-based model that incorporates fundamental aspects of norm psychology absent from previous work. Combining these methods allows us to explain the proximate mechanisms behind the observed cooperative behaviour. The consistency between the two sources of data supports our hypothesis that cooperation is a product of norm psychology solicited by norm-signalling and coercive devices.
Social conventions are useful self-sustaining protocols that let groups coordinate behavior without a centralized entity enforcing coordination. We perform an in-depth study of different network structures to compare and evaluate the effects of different network topologies on the success and rate of emergence of social conventions. While others have investigated memory in learning algorithms, the effects of memory, or the history of past activities, on the reward received by interacting agents have not been adequately investigated. We propose a reward metric that takes into consideration the past action choices of the interacting agents. The central research question is what effect the history-based reward function and the learning approach have on convergence time to conventions in different topologies. We experimentally investigate the effects of history size, agent population size and neighborhood size on the emergence of social conventions.
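The abstract does not specify the reward metric's functional form; a minimal sketch of one plausible history-based reward, in which the payoff for an action grows with the fraction of the partner's remembered past actions that match it (the function name, `maxlen` window and base payoff are all assumptions for illustration, not the authors' definition):

```python
from collections import deque

def history_reward(my_action, partner_history, base=1.0):
    """Hypothetical history-based reward: the payoff for playing
    my_action is the fraction of the partner's remembered past
    actions that match it, scaled by a base coordination payoff."""
    if not partner_history:
        return 0.0  # no history yet: no information, no reward
    matches = sum(1 for a in partner_history if a == my_action)
    return base * matches / len(partner_history)

# A bounded history (the "history size" varied in the experiments)
# can be kept per neighbour with a fixed-length deque:
partner_history = deque(["A", "A", "B", "A"], maxlen=4)
```

With this window, `history_reward("A", partner_history)` yields 0.75, so agents are pulled toward the action their neighbours have played most often, which is the intuition behind rewarding consistency with past activity.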
Internalization has long been studied in the social-behavioural sciences and moral philosophy; recently, the debate has been revived within the rationality approach to the study of cooperation and compliance, since internalization is a less costly and more reliable enforcement system than social control. But how does it work? So far, little attention has been paid to the mental underpinnings of internalization. This paper advocates a rich cognitive model of the different types, degrees and factors of internalization. To assess the individual and social effects of internalization, we adapted an existing agent architecture, EMIL-A, providing it with internalization capabilities and turning it into EMIL-I-A. Experiments have yielded satisfactory results with respect to the maintenance of cooperation in a proof-of-concept simulation.
Convention emergence solves the problem of choosing, in a decentralized way and from among all equally beneficial conventions, a single convention for the entire population, for its own benefit. Our previous work has shown that reaching 100% agreement is not as straightforward as previous researchers assumed; to save computational resources, earlier studies fixed the convergence criterion at 90% (measuring the time it takes for 90% of the population to coordinate on the same action). In this article we present the notion of social instruments: a set of mechanisms that facilitate and accelerate the emergence of norms from repeated interactions between members of a society, accessing only local and public information and thus preserving agents' privacy and anonymity. Specifically, we focus on two social instruments: rewiring and observation. Our main goal is to provide agents with tools that allow them to leverage their social network of interactions while effectively addressing coordination and learning problems, paying special attention to dissolving metastable subconventions. Initial experimental results show that, even with the proposed instruments, convergence is not accelerated, and sometimes not even obtained, in irregular networks. This result leads us to perform an exhaustive analysis of irregular networks, discovering what we define as Self-Reinforcing Structures (SRS): topological configurations of nodes that promote the establishment and persistence of subconventions by producing a continuous reinforcing effect on the frontier agents. Finally, we propose a more sophisticated composed social instrument (observation + rewiring) for robust resolution of subconventions, which works by dissolving the stable frontiers caused by the SRS within the social network.
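The rewiring instrument is described only at the level of intent; a minimal sketch of one way a frontier agent might rewire using only local, public information (the majority threshold, the tie-breaking, and the choice of a random non-neighbour are assumptions for illustration, not the paper's algorithm):

```python
import random

def rewire_if_minority(graph, agent, actions, rng=random):
    """Hypothetical 'rewiring' social instrument: if most of an
    agent's neighbours play a different action (the agent sits on a
    subconvention frontier), drop one disagreeing link and connect
    to a random non-neighbour. graph maps agent -> set of
    neighbours; actions maps agent -> current action."""
    neighbours = graph[agent]
    disagree = [n for n in neighbours if actions[n] != actions[agent]]
    if len(disagree) <= len(neighbours) / 2:
        return  # local majority agrees: not on a frontier, keep links
    # Sever one disagreeing (frontier) edge, symmetrically.
    dropped = rng.choice(disagree)
    graph[agent].remove(dropped)
    graph[dropped].remove(agent)
    # Rewire to a random agent we are not yet linked to.
    candidates = [a for a in graph if a != agent and a not in graph[agent]]
    if candidates:
        new = rng.choice(candidates)
        graph[agent].add(new)
        graph[new].add(agent)
```

Because severed edges are replaced one-for-one, the total number of links is conserved, so the instrument reshapes the topology (eroding SRS frontiers) rather than densifying it.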
In an environment in which free-riders are better off than cooperators, social control is required to foster and maintain cooperation. There are two main paths through which social control can be applied: punishment and reputation. Our experiments explore the efficacy of punishment and reputation on cooperation rates, both in isolation and in combination. Using a Public Goods Game, we assess how cooperation rates change when agents can play one of two reactive strategies: they can pay a cost to reduce the payoff of free-riders, or they can learn others' reputation and then either play Defect against free-riders or refuse to interact with them. Cooperation is maintained at a high level through punishment, and reputation-based partner selection also proves effective in maintaining cooperation. However, when agents are informed about free-riders' reputation and play Defect, cooperation decreases. Finally, a combination of punishment and reputation-based partner selection leads to the highest cooperation rates.
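The payoff structure of a Public Goods Game with costly punishment can be sketched as follows; the endowment, multiplier, punishment cost and fine below are conventional illustrative values, not the parameters used in these experiments:

```python
def pgg_round(contributions, endowment=20, multiplier=1.6,
              punish=None, punish_cost=1, punish_fine=3):
    """One Public Goods Game round with optional costly punishment
    (illustrative parameters, not the paper's). Each agent keeps its
    unspent endowment plus an equal share of the multiplied pot;
    each (punisher, target) pair then costs the punisher
    punish_cost and fines the target punish_fine."""
    n = len(contributions)
    pot = sum(contributions) * multiplier
    payoffs = [endowment - c + pot / n for c in contributions]
    for punisher, target in (punish or []):
        payoffs[punisher] -= punish_cost
        payoffs[target] -= punish_fine
    return payoffs

# Why free-riders are better off without social control: a full
# contributor earns 16 while the free-rider pockets 36 ...
print(pgg_round([20, 0]))             # [16.0, 36.0]
# ... but punishment narrows the gap at a cost to the punisher.
print(pgg_round([20, 0], punish=[(0, 1)]))  # [15.0, 33.0]
```

With a fine-to-cost ratio above 1 (here 3:1), repeated punishment can make free-riding unprofitable, which is the mechanism the experiments probe.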