2020
DOI: 10.48550/arXiv.2009.13649
Preprint
The EMPATHIC Framework for Task Learning from Implicit Human Feedback

Abstract: Reactions such as gestures, facial expressions, and vocalizations are an abundant, naturally occurring channel of information that humans provide during interactions. A robot or other agent could leverage an understanding of such implicit human feedback to improve its task performance at no cost to the human. This approach contrasts with common agent teaching methods based on demonstrations, critiques, or other guidance that need to be attentively and intentionally provided. In this paper, we first define the …

Cited by 6 publications (5 citation statements)
References 44 publications (66 reference statements)
“…For fluency, however, new roles of interaction are proposed and discussed, which would also require new kinds of interface. An agent could integrate implicit empathic feedback from a human in the form of gestures, vocalizations, and facial expressions, as shown by Cui et al (2020), allowing a more intuitive interaction and richer feedback from the human to the learning agent. To effectively leverage such feedback, appropriate user interfaces should be developed, and the underlying model should be able to process multimodal data such as speech or images.…”
Section: Focus: Interaction Design
confidence: 99%
“…In [25], human feedback is used to remove bias from skills extracted from offline datasets and to produce more human-aligned skills. Human feedback can also take more subtle forms, such as implicit feedback from facial features used to learn reward rankings [26]. Human-robot collaborative manipulation policies can also be learned from datasets of human-human collaboration, such as learning handover tasks from conversations to obtain diverse strategies of human-robot collaboration [27], or to improve backchanneling behaviours [28] for social robots from behaviours such as nodding.…”
Section: Related Work
confidence: 99%
“…The human subjects then provide input to some algorithm that has no knowledge of the performance metric, and this algorithm or learned model is evaluated on how well its output performs with respect to the hidden metric. For another example, see Cui et al [27].…”
Section: D4 The Study Design Pattern
confidence: 99%