Proceedings of the Companion of the 2017 ACM/IEEE International Conference on Human-Robot Interaction
DOI: 10.1145/3029798.3038300

"It's not my Fault!"

Abstract: We investigated the effects of the deceptive behaviour of a robot, hypothesising that a lying robot would be perceived as more intelligent and human-like, but less trustworthy than a non-lying robot. The participants engaged in a collaborative task with the non-lying and lying humanoid robot NAO. Apart from subjective responses, a more objective measure of trust was provided by the trust game. Our results confirmed that the lying robot was perceived as less trustworthy. However, we have found no indication of …

Cited by 6 publications (2 citation statements)
References 8 publications
“…For example, finding excuses can be perceived to be deceptive, self-absorbed, and ineffectual (Schlenker et al., 2001), thus diminishing trustors' willingness to reconcile (Tomlinson et al., 2004). Since people do not like lying robots (Wijnen et al., 2017), external attributions can be counterproductive when trustors are not convinced of robots' innocence. Given that both studies found apology with internal attributions rehabilitates trust better for competence-based violations, similar findings are expected in HRI settings:…”
Section: Apology With Internal and External Attributions (mentioning; confidence: 99%)
“…The more powerful resource allocation to humans is, the higher people's trust in robots will be. The self-interested unfair behaviors of robots go against human interests, and robots that violate social norms are blamed by human beings and receive less human-robot trust (Wijnen et al., 2017).…”
Section: Discussion (mentioning; confidence: 99%)