Interspeech 2019
DOI: 10.21437/interspeech.2019-1368

Expressiveness Influences Human Vocal Alignment Toward voice-AI

Abstract: This study explores whether people align to expressive speech spoken by a voice-activated artificially intelligent device (voice-AI), specifically Amazon's Alexa. Participants shadowed words produced by the Alexa voice in two acoustically distinct conditions: "regular" and "expressive", containing more exaggerated pitch contours and longer word durations. Another group of participants rated the shadowed items, in an AXB perceptual similarity task, as an assessment of overall degree of vocal alignment. Results …

Cited by 9 publications (3 citation statements)
References: 39 publications
“…The Computers Are Social Actors (CASA) theory posits that despite the top-down knowledge that they are communicating with a computer, humans still treat computers as social actors, and behave similarly toward them as they would another person (Nass et al., 1994). A large body of research supports CASA and has shown that humans show similar alignment patterns toward computers as they do in human-human communication (Bell et al., 2003; Branigan et al., 2003; Cohn et al., 2019; Zellou et al., 2021b). These alignment patterns are motivated by linguistic differences and happen at various levels, including syntactically (Branigan et al., 2003; Pearson et al., 2004), phonetically (Cohn et al., 2019; Gessinger et al., 2021; Zellou et al., 2021b), lexically (Branigan et al., 2011; Cowan et al., 2015), and prosodically (Bell et al., 2003; Suzuki and Katagiri, 2007).…”
Section: Introduction
Citation type: mentioning (confidence: 99%)
“…These alignment patterns are motivated by linguistic differences and happen at various levels, including syntactically (Branigan et al., 2003; Pearson et al., 2004), phonetically (Cohn et al., 2019; Gessinger et al., 2021; Zellou et al., 2021b), lexically (Branigan et al., 2011; Cowan et al., 2015), and prosodically (Bell et al., 2003; Suzuki and Katagiri, 2007). Current research has further investigated these phenomena by assessing vocal alignment toward voice-enabled digital assistants (voice artificial intelligence or voice-AI), such as Amazon's Alexa and Apple's Siri (Cohn et al., 2019, 2021; Zellou and Cohn, 2020; Zellou et al., 2021b; Aoki et al., 2022), and has found evidence that social factors, such as gender (Cohn et al., 2019; Snyder et al., 2019) and conversational role (Zellou et al., 2021b), additionally affect human-computer alignment.…”
Section: Introduction
Citation type: mentioning (confidence: 99%)