“…Note, however, that our study was adequately powered, and reanalysis of Brouwer’s data leads to the same conclusions. Another limitation concerns the procedure we used, namely the use of a text-to-speech application (Amazon Polly), which could be perceived as somewhat different from a normal human voice (Cambre et al, 2020) and could thus affect moral judgements differently. As we mentioned earlier, we used Amazon Polly to avoid problems with accented speech (e.g., Crowther et al, 2016), but this could create other artefacts.…”
Section: Discussion (mentioning, confidence: 99%)
“…We used Amazon Polly (a text-to-speech system) to avoid potential problems with accented speech: Such speech is not only more difficult to comprehend (Crowther et al, 2016), but an accented speaker is also judged as less credible (Lev-Ari & Keysar, 2010). Previous research utilising Amazon Polly has reported promising results for its use in behavioural research, suggesting that artificial voices sound quite natural (Jeong et al, 2019) and are rated similarly to a real human speaker (Cambre et al, 2020). Despite these promising results, some degree of caution has to be adopted.…”
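For context, Amazon Polly is accessed programmatically, and the snippet below is a minimal sketch of how spoken stimuli of this kind can be synthesised with Polly through the AWS boto3 SDK in Python. The region, voice, stimulus text, and file name are illustrative assumptions, not the study's actual materials or pipeline.

    # Minimal sketch: synthesising one audio stimulus with Amazon Polly.
    # Region, VoiceId, text, and file name are hypothetical placeholders.
    import boto3

    polly = boto3.client("polly", region_name="us-east-1")

    dilemma_text = "A runaway trolley is heading towards five people..."  # placeholder stimulus

    response = polly.synthesize_speech(
        Text=dilemma_text,
        OutputFormat="mp3",   # audio format for the experimental stimuli
        VoiceId="Joanna",     # assumed English voice; Polly also offers non-English voices
    )

    # The synthesised audio is returned as a binary stream.
    with open("dilemma_en.mp3", "wb") as f:
        f.write(response["AudioStream"].read())

In a two-language design, one would presumably select one Polly voice per language condition while keeping the synthesis parameters constant across conditions, which is what makes text-to-speech attractive for avoiding accent confounds in the first place.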
People’s judgements and decisions often change when made in their foreign language. Existing research testing this foreign language effect has predominantly used text-based stimuli, with little work examining how listening to audio stimuli affects it. The only existing study on this topic found shifts in people’s moral decisions only in the audio modality. First, by reanalysing the data from this previous study and by collecting data in an additional experiment, we found no consistent effects of using a foreign language on moral judgements. Second, in both data sets, we found no significant language by modality interaction. Overall, our results highlight the need for more robust testing of the foreign language effect and its boundary conditions. However, modality of presentation does not appear to be a candidate for explaining its variability. Data and materials for this experiment are available at https://osf.io/qbjxn/.
“…In a comparison study, Baird et al [10] found that different humanoid voices were rated at different levels of humanlikeness; the German male voice was rated the least humanlike. Cambre et al [20] similarly discovered that while non-TTS voices were preferred, the relative humanlikeness of the voice could not be used to determine the best voice for long-form content. They concluded that "the variation.…”
Section: Anthropomorphism, Humanlikeness, and Natural vs. Synthetic Voices (mentioning)
Social robots, conversational agents, voice assistants, and other embodied AI are increasingly a feature of everyday life. What connects these various types of intelligent agents is their ability to interact with people through voice. Voice is becoming an essential modality of embodiment, communication, and interaction between computer-based agents and end-users. This survey presents a meta-synthesis on agent voice in the design and experience of agents from a human-centered perspective: voice-based human–agent interaction (vHAI). Findings emphasize the social role of voice in HAI as well as circumscribe a relationship between agent voice and body, corresponding to human models of social psychology and cognition. Additionally, changes in perceptions of and reactions to agent voice over time reveal a generational shift coinciding with the commercial proliferation of mobile voice assistants. The main contributions of this work are a vHAI classification framework for voice across various agent forms, contexts, and user groups, a critical analysis grounded in key theories, and an identification of future directions for the oncoming wave of vocal machines.
“…For example, HCI research on topics often gendered as masculine (e.g., boardgames [117] or e-sports [139]) or feminine (e.g., fertility [43] or makeup [138]) routinely describes the omission of LGBTQ+ people as a limitation of that work. Some papers mentioned using (and having to justify using) datasets or technologies that only include binary genders, such as voice technology [31,194] and video-game-character-creation tools [49,120,122]. A large body of research, while acknowledging the limitations of its methods, used a binary gender measure to explore gender biases or inequity among researchers themselves [35,68] and in algorithmic [7,17,227] and CSCW [64,74,212] systems.…”
Section: Becoming a Limitation or Future Work (mentioning)
LGBTQ+ people have received increased attention in HCI research, paralleling a greater emphasis on social justice in recent years. However, there has not been a systematic review of how LGBTQ+ people are researched or discussed in HCI. In this work, we review all research mentioning LGBTQ+ people across the HCI venues of CHI, CSCW, DIS, and TOCHI. Since 2014, we find a linear growth in the number of papers substantially about LGBTQ+ people and an exponential increase in the number of mentions. Research about LGBTQ+ people tends to center experiences of being politicized, outside the norm, stigmatized, or highly vulnerable. LGBTQ+ people are typically mentioned as a marginalized group or an area of future research. We identify gaps and opportunities for (1) research about and (2) the discussion of LGBTQ+ people in HCI and provide a dataset to facilitate future Queer HCI research.