2017 IEEE Conference on Cognitive and Computational Aspects of Situation Management (CogSIMA)
DOI: 10.1109/cogsima.2017.7929605

Automation bias with a conversational interface: User confirmation of misparsed information

Abstract: We investigate automation bias for confirming erroneous information with a conversational interface. Participants in our studies used a conversational interface to report information in a simulated intelligence, surveillance, and reconnaissance (ISR) task. In the task, for flexibility and ease of use, participants reported information to the conversational agent in natural language. Then, the conversational agent interpreted the user's reports in a human- and machine-readable language. Next, participants could …
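The abstract describes a report-then-confirm workflow: the user states a report in natural language, the agent echoes back its machine-readable interpretation, and the user can then confirm or correct it; automation bias shows up when users confirm a misparsed report. As a rough illustration only (this is not the paper's implementation; parse_report, its keyword matching, and the prompt strings are all hypothetical), a minimal sketch of such a confirmation loop might look like:

    # Hypothetical sketch of the report -> parse -> confirm loop described in the abstract.
    # The toy parser and prompts are assumptions for illustration, not the study's software.
    def parse_report(text):
        # Toy "parser": pull a vehicle type and a count out of free-text input.
        words = text.lower().split()
        vehicle = next((w.rstrip("s") for w in words
                        if w.rstrip("s") in {"truck", "tank", "car"}), "unknown")
        count = next((int(w) for w in words if w.isdigit()), 1)
        return {"vehicle": vehicle, "count": count}

    def report_loop():
        text = input("Report> ")         # e.g. "3 trucks heading north"
        parsed = parse_report(text)      # agent's machine-readable interpretation
        print(f"I understood: {parsed}. Confirm? (y/n)")
        # Automation bias: users tend to answer "y" even when the parse is wrong.
        if input().strip().lower() != "y":
            print("Please restate your report.")

    if __name__ == "__main__":
        report_loop()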

Cited by 3 publications (3 citation statements)
References 6 publications

“…These strategies were preferred because they explicitly acknowledge a potential breakdown, manifest initiative from the chatbot, and are actionable to recover from the breakdown. Zaroukian et al (2017), instead, identify an automation bias that may prevent the conversation with a chatbot from being successful. This phenomenon occurs when the chatbot misunderstands the user's request and provides an incorrect answer, and the user nonetheless exhibits complacent behavior.…”
Section: Conversational Issues (mentioning, confidence: 99%)
“…Another issue for consideration is whether users with Low Veracity Exp suffer from automation bias [8,13,78,84,99]. Automation bias is often referred to as over-reliance on automated outputs [8], or over-trusting output from automated systems even when it is wrong [47,84].…”
Section: Results Interpretation (mentioning, confidence: 99%)
“…Experiences and developments on SA in the automotive and aviation sectors also hold important findings [73,74,75]. Case studies on SA in control rooms provide insight into workflows in the mentioned domains, as documented in [76,20,24,77,69,17,18,35,78,79].…”
Section: Features Of Modern Grid Management Systems (mentioning, confidence: 91%)