Designing Interactive Systems Conference 2022
DOI: 10.1145/3532106.3533528

Understanding the Negative Aspects of User Experience in Human-likeness of Voice-based Conversational Agents

Abstract: With advances in artificial intelligence technology, Voice-based Conversational Agents (VCAs) can now imitate human abilities, sometimes almost indistinguishably from humans. However, concerns have been raised that too much perceived similarity can trigger threats and fears among users. This raises a question: Should VCAs be able to imitate humans perfectly? To address this, we explored what influences the negative aspects of user experience in human-like VCAs. We conducted a qualitative exploratory study to elicit…

Cited by 6 publications (1 citation statement)
References 53 publications (44 reference statements)
“…They appreciated the professionalism of the AI assistant since it was not pretending to be human (e.g., "I liked that the AI assistant didn't have a name or personality, that would feel unprofessional" -P5-T). Prior work showed that dialogues imitating humans outside of the expressed purpose of a conversational agent could lead to negative experiences [40], thus conversational agents should keep their dialogues focused on the specific tasks that they were designed for. Although participants in this study liked that the AI assistant admitted knowledge gaps, responses that only contain "I don't know" may lead to negative perceptions of its usability [47].…”
Section: Discussion
confidence: 99%