With advances in artificial intelligence technology, Voice-based Conversational Agents (VCAs) can now imitate human abilities, sometimes almost indistinguishably from humans. However, concerns have been raised that too much perceived similarity can trigger threats and fears among users. This raises a question: Should VCAs be able to imitate humans perfectly? To address this, we explored what influences the negative aspects of user experience in human-like VCAs. We conducted a qualitative exploratory study to elicit participants' perceptions and feelings about human-like VCAs through comparable video prototypes of human-agent and human-human conversations. We discovered that human-like dialogue beyond the expressed purpose of a VCA, as well as expressions pretending to come from a human identity, could lead to negative experiences with VCAs. Based on our findings, we discuss design directions for overcoming potential issues of human imitation.
CCS CONCEPTS
• Human-centered computing → Empirical studies in interaction design.