Humans are impressive social learners. Researchers of cultural evolution have studied the many biases that enable solutions and behaviors to spread socially from one human to the next, shaping from whom we copy and what we copy. In a digital society, algorithmic and human agents both contribute to the transmission of knowledge. One hypothesis is that machines may influence the patterns of social transmission not only by providing a means for spreading human behavior but also by providing novel behaviors themselves. We propose that certain algorithms might exhibit (either by learning or by design) behaviors, biases, and problem-solving abilities that differ from those of their human counterparts. This may in turn foster better decisions in environments where diversity in problem-solving strategies is beneficial. In this study, we ask whether machines with biases complementary to humans' could boost cultural evolution in a lab-based planning task in which humans show suboptimal biases. We conducted a large behavioral study and an agent-based simulation to test the performance of transmission chains with human and machine players. In half of the chains, an algorithmic bot replaced a human participant. We show that the bot boosts the performance of the participants immediately following it in the chain, but that this gain is lost for participants further down the chain. Our findings suggest that machines can improve performance, yet human bias can prevent machine solutions from being preserved, especially under conditions of uncertainty or high cognitive load. The conditions for hybrid social learning and cultural evolution may thus be limited by the task environment and by human biases.
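To make the chain design concrete, here is a minimal toy simulation of a transmission chain with one algorithmic player. It is a sketch in the spirit of the agent-based simulation described above, not the study's actual model: the bias ceilings, copying retention rate, and search noise are all illustrative assumptions.

```python
import random

# Toy transmission chain: each agent partially copies its predecessor's
# solution and also searches individually under a biased ceiling. One
# position may be occupied by a bot with a complementary (higher) ceiling.
# All numbers below are illustrative assumptions, not study parameters.

HUMAN_CEILING = 70   # assumed cap imposed by the suboptimal human bias
BOT_CEILING = 95     # assumed cap for the bot's complementary bias
RETENTION = 0.8      # assumed fraction of a solution preserved when copied
NOISE = 5.0          # assumed noise in individual search

def agent_score(predecessor_score, ceiling):
    """Take the better of partial social copying and biased individual search."""
    social = RETENTION * predecessor_score
    individual = min(ceiling, random.gauss(ceiling, NOISE))
    return max(social, individual)

def run_chain(length=8, bot_position=2):
    scores, prev = [], 0.0
    for i in range(length):
        ceiling = BOT_CEILING if i == bot_position else HUMAN_CEILING
        prev = agent_score(prev, ceiling)
        scores.append(prev)
    return scores

if __name__ == "__main__":
    random.seed(0)
    print("with bot:   ", [round(s) for s in run_chain(bot_position=2)])
    print("humans only:", [round(s) for s in run_chain(bot_position=None)])
```

In this toy model, the participant right after the bot inherits part of its better solution, but imperfect copying erodes the gain within a few positions, mirroring the boost-then-decay pattern reported above.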
Interactions between humans and bots are increasingly common online, prompting some legislators to pass laws that require bots to disclose their identity. The Turing test is a classic thought experiment probing humans' ability to distinguish a bot impostor from a real human through exchanged text messages. In the current study, we propose a minimal Turing test that avoids natural language, allowing us to study the foundations of human communication. In particular, we investigate the relative roles of conventions and reciprocal interaction in determining successful communication. Participants in our task could communicate only by moving an abstract shape in a 2D space, and were asked to categorize their online social interaction as being with a human partner or a bot impostor. The main hypotheses were that access to the interaction history of a pair would make a bot impostor more deceptive and would interrupt the formation of novel conventions between the human participants: by copying a pair's previous interactions, the bot prevents humans from communicating successfully through the repetition of what has already worked before. By comparing bots that imitate behavior from the same or a different dyad, we find that impostors are harder to detect when they copy the participants' own partners, leading to less conventional interactions. We also show that reciprocity is beneficial for communicative success when the bot impostor prevents conventionality. We conclude that machine impostors can avoid detection and interrupt the formation of stable conventions by imitating past interactions, and that both reciprocity and conventionality are adaptive strategies under the right circumstances. Our results provide new insights into the emergence of communication and suggest that online bots mining personal information, for example on social media, might more easily become indistinguishable from humans.
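The imitation mechanism can be illustrated with a short sketch contrasting the two impostor conditions: a bot that replays movement traces recorded from the participants' own past rounds versus traces from an unrelated dyad. The trace format and lookup rule below are illustrative assumptions, not the experiment's actual protocol.

```python
import random

# Toy version of the two impostor conditions: replay a 2D movement trace
# either from the same dyad's history or from a different dyad's history.
# Data structures here are illustrative assumptions, not the study's code.

def impostor_trajectory(history, dyad_id, same_dyad):
    """Pick a past trajectory to replay as the bot's 'message'."""
    if same_dyad:
        pool = history[dyad_id]                       # the pair's own past rounds
    else:
        other = random.choice([d for d in history if d != dyad_id])
        pool = history[other]                         # an unrelated pair's rounds
    return random.choice(pool)

# Example: the same-dyad bot reuses a convention this pair already formed,
# so "repeat what worked before" no longer identifies a human partner.
history = {
    "dyad_A": [[(0, 0), (1, 1), (2, 1)], [(0, 0), (0, 2)]],
    "dyad_B": [[(2, 2), (1, 0)]],
}
print(impostor_trajectory(history, "dyad_A", same_dyad=True))
print(impostor_trajectory(history, "dyad_A", same_dyad=False))
```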