2020
DOI: 10.1007/978-3-030-49778-1_22

Multimodal Joke Generation and Paralinguistic Personalization for a Socially-Aware Robot

Abstract: Robot humor is typically scripted by the human. This work presents a socially-aware robot which generates multimodal jokes for use in real-time human-robot dialogs, including appropriate prosody and non-verbal behaviors. It personalizes the paralinguistic presentation strategy based on socially-aware reinforcement learning, which interprets human social signals and aims to maximize user amusement.
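The personalization described in the abstract can be pictured as a bandit-style learner choosing among presentation strategies and updating from an amusement reward. The following is a minimal illustrative sketch, not the paper's implementation; the strategy names, class, and epsilon-greedy scheme are assumptions for illustration.

```python
import random

# Hypothetical presentation strategies (arms): combinations of prosody
# and non-verbal behavior accompanying a joke's delivery.
STRATEGIES = ["neutral", "exaggerated_prosody", "gesture", "prosody_and_gesture"]

class PresentationBandit:
    """Epsilon-greedy bandit that personalizes the paralinguistic
    presentation strategy from a per-user amusement reward."""

    def __init__(self, strategies, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = {s: 0 for s in strategies}
        self.values = {s: 0.0 for s in strategies}  # running mean reward per arm

    def select(self):
        if random.random() < self.epsilon:
            return random.choice(list(self.counts))   # explore
        return max(self.values, key=self.values.get)  # exploit best estimate

    def update(self, strategy, reward):
        # Incremental mean update of the estimated amusement for this arm.
        self.counts[strategy] += 1
        n = self.counts[strategy]
        self.values[strategy] += (reward - self.values[strategy]) / n
```

After each joke, the robot would call `update` with the observed amusement signal, so strategies that amuse this particular user are selected more often over time.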

Cited by 8 publications (4 citation statements)
References 30 publications
“…However, it cannot capture human aspects, such as the user's behavior, personality, or mood. Thus, interaction distance, gaze and smile [Fournier et al 2017, Gordon et al 2016, Hemminghaus and Kopp 2017, Leite et al 2011], motion speed, timing [Mitsunaga et al 2008], gesture and posture [Najar et al 2016], and laughter [Hayashi et al 2008, Katevas et al 2015, Knight 2011, Ritschel et al 2020a, Weber et al 2018] are used in various contexts as feedback for social agents. Physiological feedback includes ECG [Liu et al 2008] or EEG [Tsiakas et al 2018] data.…”
Section: Reinforcement Learning
confidence: 99%
“…These signals are often aggregated and combined in order to build a user model and calculate reward, e.g. based on the human's estimated affect/emotions [Broekens and Chetouani 2019, Gordon et al 2016, Leite et al 2011], engagement [Mancini et al 2019, Ritschel 2018], curiosity [Fournier et al 2017], amusement [Ritschel et al 2020a, Weber et al 2018] and more.…”
Section: Reinforcement Learning
confidence: 99%
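The aggregation described in this statement — combining several detected social signals into a single scalar reward — could be sketched as a weighted sum. The signal names, weights, and clamping below are illustrative assumptions, not the reward formula of any cited system.

```python
# Hypothetical aggregation of social signals into a scalar RL reward
# for an amusement-based user model. Each detector output is assumed
# to be a probability in [0, 1]; the weights are illustrative.
def amusement_reward(smile_prob, laughter_prob, gaze_on_robot,
                     w_smile=0.4, w_laugh=0.5, w_gaze=0.1):
    """Combine per-frame detector outputs into a single reward in [0, 1]."""
    r = w_smile * smile_prob + w_laugh * laughter_prob + w_gaze * gaze_on_robot
    return max(0.0, min(1.0, r))  # clamp to [0, 1]
```

In practice such a reward would be computed over a time window after the robot's utterance, so that delayed reactions like laughter still count toward the behavior that caused them.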
“…However, non-verbal expression is key to understanding sociability [18,19]. Some authors working with virtual agents and computer graphics have obtained impressively realistic animations of human characters.…”
Section: Emotion Expression In Robots
confidence: 99%
“…In computer science and, especially, affective computing, the automatic measurement of humour has attracted increasing research interest in recent years, e.g. [11,12,13]. In particular, humour has been identified as important in human-computer interaction [14,15]. Because of the ubiquity of humour…”
Section: Introduction
confidence: 99%