2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
DOI: 10.1109/iros40897.2019.8967924
Fast Adaptation with Meta-Reinforcement Learning for Trust Modelling in Human-Robot Interaction

Abstract: In socially assistive robotics, an important research area is the development of adaptation techniques and their effect on human-robot interaction. We present a meta-learning-based policy gradient method for addressing the problem of adaptation in human-robot interaction and also investigate its role as a mechanism for trust modelling. By building an escape room scenario in mixed reality with a robot, we test our hypothesis that bi-directional trust can be influenced by different adaptation algorithms. We found…
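The abstract describes a meta-learning-based policy gradient method; the paper's actual algorithm is not reproduced here, but a minimal stateless REINFORCE update on a toy bandit (an illustrative sketch, not Gao et al.'s implementation — the reward means and learning rate are assumptions) shows the policy-gradient ingredient:

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def reinforce_bandit(reward_means, steps=2000, lr=0.1, seed=0):
    """Stateless REINFORCE: theta += lr * r * grad log pi(a)."""
    rng = np.random.default_rng(seed)
    theta = np.zeros(len(reward_means))
    for _ in range(steps):
        probs = softmax(theta)
        a = rng.choice(len(theta), p=probs)
        r = reward_means[a] + rng.normal(0.0, 0.1)  # noisy reward sample
        grad_log = -probs                 # gradient of log softmax ...
        grad_log[a] += 1.0                # ... at the chosen action a
        theta += lr * r * grad_log
    return softmax(theta)

# after training, the policy should concentrate on the best arm (index 1)
probs = reinforce_bandit([0.2, 0.8, 0.5])
```

In the paper's setting the policy is conditioned on interaction state and meta-learned across users; this sketch keeps only the gradient update itself.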

Cited by 14 publications (4 citation statements). References 37 publications.
“…Further, it can be using different reinforcement learning algorithms to select the behaviours such as Exp3 v.s. policy gradient based solution for Exp3 problem, together with meta-learning (see Gao et al (2019)). Secondly, we can improve the method to compute the emotional state of the users by using a state of the art emotion recognition API and also rethinking the method to compute emotion on the given state in the game.…”
Section: Discussion
confidence: 99%
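The citing work above contrasts Exp3 with a policy-gradient solution to the same adversarial-bandit problem. As a rough illustration only (not the cited papers' implementation; the reward function, horizon, and exploration rate are assumptions), a minimal Exp3 sketch:

```python
import math
import random

def exp3(reward_fn, n_arms, horizon, gamma=0.1):
    """Exp3: exponential weights with gamma-uniform exploration.

    reward_fn(arm) must return a reward in [0, 1].
    """
    weights = [1.0] * n_arms
    for _ in range(horizon):
        total = sum(weights)
        probs = [(1 - gamma) * w / total + gamma / n_arms for w in weights]
        arm = random.choices(range(n_arms), weights=probs)[0]
        reward = reward_fn(arm)
        # importance-weighted estimate keeps the reward update unbiased
        est = reward / probs[arm]
        weights[arm] *= math.exp(gamma * est / n_arms)
    total = sum(weights)
    return [(1 - gamma) * w / total + gamma / n_arms for w in weights]

random.seed(0)
# toy check: arm 1 always pays 1, the others pay 0
final_probs = exp3(lambda a: 1.0 if a == 1 else 0.0, n_arms=3, horizon=500)
```

Exp3 makes no stochastic assumptions about rewards, which is why it is a natural baseline against policy-gradient methods in interactive settings.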
“…In particular, such extrinsic perspectives broadly include the explainability, privacy protection for sensitive individual information, ethics, and general human trust on trained RL agents or models. Different trustworthy RL algorithms have also been studied in the literature by considering such human-centric design [4,51,128,175] to bridge the trust with human.…”
Section: How to Achieve the Human-Centric Design for Trustworthy RL?
confidence: 99%
“…Robots can use speech to change the content of the conversation (Gamborino and Fu, 2018) or to answer a question about the surrounding environment (Bui and Chong, 2018). Robots can use dialogue to gather information during collaborative teleoperation (Fong et al, 2003), to engender trust in an escape room (Gao et al, 2019), or to facilitate collaboration between two targets of assistance (Strohkorb et al, 2016). Robots can also talk about themselves to influence a user's view of themselves.…”
Section: Human Brain
confidence: 99%