“…In particular, they highlight that using language is one such "cue". In support of this view, speakers have been shown to vocally align their speech with voice-AI interlocutors much as they do with human interlocutors (Cohn, Predeck, et al., 2021; Zellou, Cohn, & Ferenc Segedin, 2021), and a growing body of work has shown that people perceive social attributes of voice-AI, including gender, age, race/ethnicity, and emotion (Cohn et al., 2019; Ernst & Herm-Stapelberg, 2020; Gessinger et al., 2022; Holliday, 2023; Zellou, Cohn, & Ferenc Segedin, 2021). In the present study, finding similar prosodic focus marking would suggest that the acoustic realization of information structure is part of this application of human-human social rules to voice-AI, implying that equivalence supersedes adaptations for a less-than-rational listener.…”