2015
DOI: 10.1007/978-3-319-25554-5_7
Adaptive Interaction of Persistent Robots to User Temporal Preferences

Abstract: We look at the problem of enabling a mobile service robot to autonomously adapt to user preferences over repeated interactions in a long-term time frame, where the user provides feedback on every interaction in the form of a rating. We assume that the robot has a discrete and finite set of interaction options from which it has to choose one at every encounter with a given user. We first present three models of users which span the spectrum of possible preference profiles and their dynamics, incorpora…
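The setting the abstract describes — a finite set of interaction options, one chosen per encounter, scored by a user rating — is a multi-armed-bandit-style selection problem. The following is a minimal epsilon-greedy sketch of that loop, not the paper's actual models (which additionally account for temporal preference dynamics); the class and option names are illustrative.

```python
import random

class InteractionSelector:
    """Epsilon-greedy selection over a finite set of interaction options.

    Keeps a running mean of the ratings observed for each option and,
    at every encounter, either explores a random option or exploits
    the best-rated one so far.
    """

    def __init__(self, options, epsilon=0.1):
        self.options = list(options)
        self.epsilon = epsilon
        self.counts = {o: 0 for o in self.options}
        self.means = {o: 0.0 for o in self.options}

    def choose(self):
        # Explore with probability epsilon, otherwise exploit the
        # option with the highest mean rating observed so far.
        if random.random() < self.epsilon:
            return random.choice(self.options)
        return max(self.options, key=lambda o: self.means[o])

    def update(self, option, rating):
        # Incremental running-mean update of the observed ratings.
        self.counts[option] += 1
        n = self.counts[option]
        self.means[option] += (rating - self.means[option]) / n

# Hypothetical usage: choose an option, observe the user's rating, update.
selector = InteractionSelector(["greet", "joke", "silent"])
option = selector.choose()
selector.update(option, rating=4.0)
```

A fixed epsilon keeps exploring forever, which is one crude way to track preferences that drift over time; the paper's contribution is precisely in modeling such drift rather than ignoring it.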

Cited by 11 publications (9 citation statements). References 16 publications.
“…Additionally, we are interested in the effects of the embodiment of the system. A recent literature survey on the effects of embodiment showed "that a co-present, physical robot performed better than a virtual agent simulated using computer graphics. These studies found a co-present robot to be more persuasive, receive more attention and be perceived more positively than a virtual agent even when the behavior of the robot was identical to the behavior of the virtual agent and when both agents had similar appearance" [19].…”

Overview table of adaptive HRI approaches (spilled into the quoted passage, reconstructed):

| Reference | Approach | Input | Adapted behavior |
| Tsiakas et al [7] | reinforcement learning | user performance, session state | adjust time of movement, move to next exercise, encourage user |
| Leite et al [5] | multi-armed bandit learning | user's detected valence | choose appropriate empathic behavior |
| Leyzberg et al [3] | Bayesian net | puzzle state | provide personalized tutoring sessions |
| Lim et al [8] | hybrid filtering | semantic knowledge, event episodic knowledge and emotion | enhance student's motivation to prevent negative emotions |
| Baraka et al [9] | multi-armed bandit learning | numerical reward provided by user | robot's light animation |
| Mitsunaga et al [6] | reinforcement learning | body signals | adjust interaction distance, gaze, motion speed and timing |
| Hemminghaus et al [10] | reinforcement learning (Q-Learning) | gaze behavior, speech, game state | memory game assistance |
| Chan et al [11] | hierarchical reinforcement learning | speech analysis, user state, activity state | giving instructions, empathy or help |
| Lee et al [12] | Wizard of Oz | snack choice patterns, usage patterns, robot's prior behavior | personalized speech topics |
Section: A Research Question (mentioning; confidence: 99%)
“…These approaches mostly utilize reinforcement learning with user feedback and sensor data (e.g. [1], [5]- [7], [9]). Other approaches create user models to adapt the robot's assistance and behavior ( [3], [20]) or rely on techniques from recommendation systems like collaborative filtering [8].…”
Section: Related Work (mentioning; confidence: 99%)
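The reinforcement-learning-with-user-feedback pattern that the quoted statement attributes to most of these approaches can be illustrated with a minimal tabular Q-learning update in which the user's feedback serves as the reward signal. This is a generic sketch, not taken from any of the cited systems; the state and action names are hypothetical.

```python
from collections import defaultdict

def q_learning_step(Q, state, action, reward, next_state, actions,
                    alpha=0.1, gamma=0.9):
    """One tabular Q-learning update.

    The robot's chosen action in the given state is scored by the user's
    feedback (reward) plus the discounted value of the best action
    available in the next state.
    """
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
    return Q

# Hypothetical usage: the user rates a greeting positively.
Q = defaultdict(float)
q_learning_step(Q, "idle", "greet", 1.0, "engaged", ["greet", "wait"])
```

Approaches based on collaborative filtering replace this per-user value update with predictions pooled across similar users, trading per-user learning speed for cross-user generalization.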
“…In education, it has been applied to Socially Assistive Robot (SAR) tutors that support the teaching task [5,10,12]. Baraka and Veloso [2] define three user models to adapt the luminous interactions between a robot and the user over time, learning the model parameters from user feedback. Personalized collaboration is shown in Fiore et al [8], where an object manipulation task is performed jointly by the robot and the user whose preferences are taken into account.…”
Section: Related Work (mentioning; confidence: 99%)
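The idea of learning user-model parameters from feedback, as attributed above to Baraka and Veloso [2], can be illustrated with a simple Beta-Bernoulli preference model: each positive or negative reaction updates a posterior over how much the user likes an option. This is a generic sketch, not the three user models actually defined in that paper.

```python
class BetaPreferenceModel:
    """Beta-Bernoulli model of whether a user likes an interaction option.

    Each positive/negative piece of feedback updates the Beta posterior;
    the posterior mean estimates the probability the user likes the option.
    """

    def __init__(self, alpha=1.0, beta=1.0):
        self.alpha = alpha  # prior pseudo-count of "liked" feedback
        self.beta = beta    # prior pseudo-count of "disliked" feedback

    def update(self, liked):
        # Bayesian conjugate update: increment the matching pseudo-count.
        if liked:
            self.alpha += 1
        else:
            self.beta += 1

    def mean(self):
        # Posterior mean of the Beta distribution.
        return self.alpha / (self.alpha + self.beta)
```

One such model per interaction option gives the robot a compact, feedback-driven user profile that sharpens as interactions accumulate.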
“…Consequently, it is assumed that the model to which a particular user belongs is known prior to the interaction. However, this approach is more appropriate for long-term HRI applications, where the agent adapts over repeated interactions with the users [4]. In this work, we focus on agents that are able to interactively reuse and modify a learned policy during a short-term interaction, without requiring an existing model of the specific user.…”
Section: Robot Learning and Adaptation (mentioning; confidence: 99%)
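The policy-reuse idea mentioned in the quoted statement — warm-starting from a previously learned policy and modifying it with per-user feedback during a short interaction — can be sketched as follows. This is an assumed, simplified rendering (action values blended toward observed ratings), not the mechanism of the citing paper; all names are hypothetical.

```python
def adapt_reused_policy(source_Q, feedback, alpha=0.5):
    """Sketch of interactive policy reuse.

    Start from a learned source policy's action values and shift them
    toward this user's observed ratings, so adaptation happens within
    a short interaction instead of from scratch.

    source_Q: dict mapping action -> learned value (the reused policy)
    feedback: dict mapping action -> rating observed from this user
    """
    Q = dict(source_Q)  # warm start: copy, leaving the source intact
    for action, rating in feedback.items():
        # Blend the prior value with the per-user observation.
        Q[action] += alpha * (rating - Q[action])
    return Q

# Hypothetical usage: one negative rating pulls "verbal" down from its prior.
adapted = adapt_reused_policy({"verbal": 1.0, "light": 0.0}, {"verbal": 0.0})
```

The warm start is what makes short-term adaptation feasible: actions the user never rates simply keep their transferred values.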