2022
DOI: 10.1007/s43681-022-00165-5

Trusting social robots

Abstract: In this paper, I argue that we need a more robust account of our ability and willingness to trust social robots. I motivate my argument by demonstrating that existing accounts of trust and of trusting social robots are inadequate. I identify that it is the feature of a façade or deception inherent in our engagement with social robots that both facilitates, and is in danger of undermining, trust. Finally, I utilise the fictional dualism model of social robots to clarify that trust in social robots, unlike trust…

Cited by 7 publications (3 citation statements)
References 20 publications (19 reference statements)
“…The correlations for authenticity and professionalism were weak. This lack of correlation between trust in robots and AI ratings can be understood through the anthropomorphic form of commitment between the truster and the trustee (Sweeney, 2023). Holton (1994) suggests that trust involves a specific type of reliance, where betrayal is felt if expectations are not met, contrasting with the mere disappointment felt at machine failure.…”
Section: Discussion (mentioning)
confidence: 99%
“…Replika, My AI Friend, Luka Inc.: https://replika.ai/; beingAI: https://beingai.com/ (accessed on 31 August 2022). This raises ethical, legal, and social issues that designers and developers should be aware of before launching their applications [65]. Safety would be a concern, as social robots may be deployed without regard for a human's physical and psychological integrity.…”
Section: Discussion (mentioning)
confidence: 99%
“…dementia, autism) develop an attachment to the artificial being that is preferred over actual human contact, which some may deem 'inappropriate.' With vulnerable patients, the core value of dignity is at stake (e.g., [66]), along with the objectification of human friendship, deception by artificiality, a faked identity (of the robot), and trust [65]. If people indeed trust a robot more than they trust humans on social media, that could stimulate evasive behaviors, even escapism, rather than dealing with the issues humans have among each other.…”
Section: Discussion (mentioning)
confidence: 99%