Humans are impressive social learners. Researchers of cultural evolution have studied the many biases that shape cultural transmission by selecting from whom we copy and what we copy. One hypothesis is that, with the advent of superhuman algorithms, a hybrid type of cultural transmission, namely from algorithms to humans, may have long-lasting effects on human culture. We suggest that algorithms might show (either by learning or by design) different behaviours, biases and problem-solving abilities than their human counterparts. In turn, algorithmic-human hybrid problem solving could foster better decisions in environments where diversity in problem-solving strategies is beneficial. This study asks whether algorithms with complementary biases to humans can boost performance in a carefully controlled planning task, and whether humans further transmit algorithmic behaviours to other humans. We conducted a large behavioural study and an agent-based simulation to test the performance of transmission chains with human and algorithmic players. We show that the algorithm boosts the performance of immediately following participants, but this gain is quickly lost for participants further down the chain. Our findings suggest that algorithms can improve performance, but human bias may hinder algorithmic solutions from being preserved. This article is part of the theme issue ‘Emergent phenomena in complex physical and socio-technical systems: from cells to societies’.
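The transmission-chain design described above can be illustrated with a minimal agent-based sketch. This is not the authors' actual model: the suboptimal attractor (0.5), the optimum (1.0), the bias strength and the noise level are invented parameters chosen only to show the qualitative pattern reported in the abstract, namely a performance boost immediately after the bot that decays further down the chain.

```python
import random

def human_step(prev_solution, bias=0.3, attractor=0.5, noise=0.05):
    """A human player copies the previous solution but drifts toward a
    suboptimal attractor with strength `bias`, plus Gaussian noise."""
    copied = (1 - bias) * prev_solution + bias * attractor
    return copied + random.gauss(0, noise)

def run_chain(length=8, bot_position=None, seed=0):
    """Simulate one transmission chain of solution qualities in [0, 1].

    Each player observes the previous player's solution; optionally,
    the player at `bot_position` is replaced by a bot that injects
    the optimal solution (1.0) instead of copying with human bias.
    """
    random.seed(seed)
    solution = 0.5  # chains start from a naive, biased solution
    scores = []
    for i in range(length):
        if i == bot_position:
            solution = 1.0  # bot injects the optimal solution
        else:
            solution = human_step(solution)
        scores.append(solution)
    return scores

# Chain with a bot at position 3 versus an all-human baseline chain.
scores = run_chain(bot_position=3)
baseline = run_chain(bot_position=None)
```

Under these assumptions, the participant right after the bot scores well above the baseline, while later participants drift back toward the biased attractor, mirroring the decay of the bot's contribution described in the abstract.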