Although more individuals are relying on information provided by nonhuman agents, such as artificial intelligence and robots, little research has examined how persuasion attempts made by nonhuman agents differ from those made by human agents. Drawing on construal-level theory, we posited that individuals would perceive artificial agents at a low level of construal because of the agents’ lack of autonomous goals and intentions, which directs individuals’ focus toward how these agents implement actions to serve humans rather than why they do so. Across multiple studies (total N = 1,668), we showed that these construal-based differences affect compliance with persuasive messages delivered by artificial agents: such messages are judged more appropriate and are more effective when they represent low-level rather than high-level construal features. These effects were moderated by the extent to which an artificial agent could independently learn from its environment, given that such learning defies people’s lay theories about artificial agents.
The present research demonstrates how consumer responses to negative and positive offers are influenced by whether the administering marketing agent is an Artificial Intelligence (AI) or a human. For a product or service offer that is worse than expected, consumers respond more favorably to an AI agent, showing greater purchase likelihood and satisfaction. In contrast, for a better-than-expected offer, consumers respond more positively to a human agent. We demonstrate that AI agents, compared with human agents, are perceived to have weaker intentions when administering offers, and that this perception accounts for the effect: consumers infer that AI agents lack selfish intentions when an offer favors the agent and lack benevolent intentions when an offer favors the customer, thereby dampening the extremity of consumer responses. Moreover, we demonstrate a moderating effect whereby marketers may anthropomorphize AI agents to strengthen perceived intentions, providing an avenue to receive due credit from consumers for a better offer and to mitigate blame for a worse offer. Potential ethical concerns with the use of AI to bypass consumer resistance to negative offers are discussed.
The use of Artificial Intelligence (AI) has grown rapidly in the service industry, and AI’s emotional capabilities have become an important feature of customer interactions. The current research examines personal disclosures that occur during consumer interactions with AI and human agents in service settings. We found that consumers’ lay beliefs about AI (i.e., a perceived lack of social-judgment capability) lead to greater disclosure of sensitive personal information to AI (vs. humans). We identify boundaries for this effect: consumers prefer disclosing to humans over AI (i) in contexts where social support (rather than social judgment) is expected and (ii) in contexts where sensitive information will be curated by the agent for social dissemination. In addition, we reveal the underlying psychological processes: the motivation to avoid negative social judgment favors disclosing to AI, whereas seeking emotional support favors disclosing to humans. Moreover, we show that adding humanlike features to AI can increase consumer fear of social judgment (reducing disclosure in contexts of social risk) while simultaneously increasing perceived AI capacity for empathy (increasing disclosure in contexts of social support). Taken together, these findings provide theoretical and practical insights into the tradeoffs between utilizing AI and human agents in service contexts.