2019
DOI: 10.1016/j.chb.2019.03.033

The effect of conversational agent skill on user behavior during deception

Cited by 44 publications (21 citation statements) · References 56 publications
“…Using psychophysiological data, they conclude that the simpler the chatbot (simpler texts and animations), the less strange and negative the conversation feels. Schuetzler et al [57] reach similar conclusions, observing that the better the chatbot's conversational skills, the more counter‐indications users produce when it tries to uncover deception on the part of the interlocutor. Conversation between a student and a chatbot may lead to frustration [35] if it is not fluid enough or if the chatbot does not give the answers the student is looking for.…”
Section: Results (supporting)
confidence: 85%
“…Schuetzler et al [83] present a very interesting study that addresses how an agent's conversational skill elicits deception, comparing the behaviour of deceptive users with that of truthful ones. Gratch et al [54] performed a study with three settings: human-human, Wizard-of-Oz (where users believed they were interacting with an automatic agent but were in fact interacting with a human), and a conversational agent (where users knowingly interacted with an agent).…”
Section: Deception and Impression Management (mentioning)
confidence: 99%
“…A CA is a computer system designed to communicate with people in natural language [26][27][28]. Most CAs are built on similar technology, but their applications vary significantly according to their purposes [27].…”
Section: Conversational Agent (mentioning)
confidence: 99%
“…The nine items were each scored on a scale from 0 (not at all) to 3 (nearly every day), and the item scores were summed to obtain the final score. Counselees' degree of depression was classified by total score into five stages: none (0-4), mild (5-9), moderate (10-19), severe (20-27), and high risk of suicide (determined by the answer to the suicidality item). Those whose results fell in the severe or high-risk-of-suicide stages were excluded from the experiment and referred to professional counseling institutions.…”
Section: Assessment (mentioning)
confidence: 99%
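The scoring procedure quoted above (nine items scored 0-3, summed, then mapped to stages) can be sketched as a small function. This is a minimal illustration using only the cut-offs stated in the passage; the function name and structure are hypothetical, not the study's actual code, and the high-risk-of-suicide stage depends on the suicidality item answer rather than the total, so it is handled separately here.

```python
def classify_depression(item_scores, suicidality_flag=False):
    """Sum nine 0-3 item scores and map the total to a depression stage.

    Stage cut-offs follow the quoted passage: none (0-4), mild (5-9),
    moderate (10-19), severe (20-27). The high-risk-of-suicide stage is
    driven by the suicidality item answer, not the total score.
    """
    if len(item_scores) != 9 or any(s not in (0, 1, 2, 3) for s in item_scores):
        raise ValueError("expected nine item scores, each between 0 and 3")
    if suicidality_flag:
        return sum(item_scores), "high risk of suicide"
    total = sum(item_scores)
    if total <= 4:
        stage = "none"
    elif total <= 9:
        stage = "mild"
    elif total <= 19:
        stage = "moderate"
    else:  # 20-27
        stage = "severe"
    return total, stage
```

Per the exclusion rule in the passage, counselees returning "severe" or "high risk of suicide" would be screened out before the experiment.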