2019
DOI: 10.1007/978-3-030-35888-4_7
Individual Differences in Attitude Toward Robots Predict Behavior in Human-Robot Interaction

Abstract: Humans are influenced by the presence of other social agents, sometimes performing better, sometimes performing worse than alone. Humans are also affected by how they perceive the social agent. The present study investigated whether individual differences in the attitude toward robots can predict human behavior in Human-Robot Interaction (HRI). Therefore, adult participants played a game with the Cozmo robot (Anki Inc., San Francisco), in which their task was to stop a balloon from exploding. In individual tri…

Cited by 22 publications (14 citation statements) · References 20 publications (22 reference statements)
“…From a purely anecdotal point of view, during the debriefing, a small group of participants reported that they were surprised seeing the human behaving "like a robot" (i.e., during the calibrating behavior). We claim that humans approach artificial agents and their conspecifics with different attitudes that could modulate the way they interpret behaviors (Hinz et al, 2019). Based on our results, we can also speculate that participants were expecting the robot to display a variety of behaviors (i.e., to behave like a human), but they were not expecting the human to behave in a repetitive, mechanistic way (i.e., to behave like a robot).…”
Section: Discussion - Experiments (supporting)
Confidence: 49%
“…In the same manner, studies have observed that an individual's attitude is of great importance to form consumer behavior toward technology-based products and services. For example, the significant association between attitude and behavior has been validated with the adoption of information systems [45], mobile applications [39], and robots [46]. For example, Hwang et al [22] examined the relationship between individual attitudes towards innovative technological services and behavioral intentions, and they confirmed a significant link.…”
Section: Attitude (mentioning)
Confidence: 99%
“…Planned comparisons showed that users who interacted with the iCub robot displaying a mechanical error reported a lower ISS in the post-interaction session compared to the pre-interaction session [43.49 vs. 39.10, respectively; t(30) = -2.80, p = .013, d' = .70], whereas no significant differences in the ISS before and after the interactive task were found for those users who interacted with the human-like erring robot [39.09 vs. 40.38, respectively; t(30) …]. The analysis of the effect of individual differences on the variation of the ISS before and after an interactive task revealed that the ΔISS was predicted by the "Warmth" subscale of the RoSAS [β = -4.67, t(28) = -3.08, p = .005].…”
Section: Results (mentioning)
Confidence: 99%
“…Our results are in line with previous works investigating the violation of expectations in non-verbal communication during HRI [28,29], showing that when people experience unexpected behavior in HRI, the value (positive or negative) of the violation affects the possibility to perceive the robot as a social partner. According to the expectancy violations theory [30], when co-agents violate our expectations, we react differently depending on the value (positive or negative) of the violation. This is also mediated by individual differences in the pre-interaction state [30,31].…”
Section: Discussion (mentioning)
Confidence: 99%