2021
DOI: 10.1109/taffc.2019.2893348

Towards Transparent Robot Learning Through TDRL-Based Emotional Expressions

Abstract: Robots and virtual agents need to adapt existing behavior and learn novel behavior to function autonomously in our society. Robot learning often takes place in interaction with, or in the vicinity of, humans. As a result, the learning process needs to be transparent to humans. Reinforcement Learning (RL) has been used successfully for robot task learning. However, this learning process is often not transparent to users, which results in a lack of understanding of what the robot is trying to do and why. The lack of transparency …
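To make the paper's topic concrete, here is a minimal sketch, assuming a toy tabular Q-learning agent: the temporal difference (TD) error computed at each update is surfaced as a coarse expression label that an observer can read. The grid-world environment, the expression_from_td mapping, and its threshold are illustrative assumptions, not the mapping proposed in the paper.

```python
import random
from collections import defaultdict

# Minimal sketch: tabular Q-learning whose TD error is surfaced as a coarse
# emotion-like expression. The mapping below is a hypothetical illustration,
# not the TDRL-to-emotion mapping used in the cited paper.

ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1
ACTIONS = ["left", "right"]

Q = defaultdict(float)  # Q[(state, action)] -> value estimate

def expression_from_td(td_error, threshold=0.05):
    """Map the TD error to a coarse expression label (hypothetical mapping)."""
    if td_error > threshold:
        return "joy"        # things went better than expected
    if td_error < -threshold:
        return "distress"   # things went worse than expected
    return "neutral"

def step(state, action):
    """Toy chain environment: reaching state 5 by moving right yields reward 1."""
    next_state = min(state + 1, 5) if action == "right" else max(state - 1, 0)
    reward = 1.0 if next_state == 5 else 0.0
    return next_state, reward, next_state == 5

def choose_action(state):
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

for episode in range(20):
    state, done = 0, False
    while not done:
        action = choose_action(state)
        next_state, reward, done = step(state, action)
        target = reward + (0.0 if done else GAMMA * max(Q[(next_state, a)] for a in ACTIONS))
        td_error = target - Q[(state, action)]
        Q[(state, action)] += ALPHA * td_error
        # The expression is the "transparent" channel shown to the human observer.
        print(f"episode {episode}, state {state}, action {action}: {expression_from_td(td_error)}")
        state = next_state
```

In this sketch the expression channel simply externalizes the signal the learner already uses internally, which is what makes the learning process observable to a bystander.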

Cited by 26 publications (24 citation statements)
References 67 publications
“…As mentioned by Broekens and Chetouani [10], most computational approaches for social agents consider a primary task, e.g., learning to pick an object, and explainability arises as a secondary task, addressed by communicating the agent's internal states, intentions, or future goals. Given the literature, it is possible to distinguish the nature of actions performed by the agent, such as task-oriented actions a_T and communication-oriented actions a_C.…”
Section: Methods To Achieve Explainability (mentioning)
confidence: 99%

Explainable Agents Through Social Cues: A Review

Wallkotter, Tulli, Castellano et al. 2020
Preprint (self-citation)
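The task/communication split quoted above can be sketched as an action space that is the union of both action types. The class names, the interleaving policy, and the uncertainty flag below are hypothetical; they only illustrate the a_T / a_C distinction, not any implementation from the cited review.

```python
from dataclasses import dataclass
from typing import Union

# Hypothetical sketch of the action taxonomy quoted above: the agent's action
# space is the union of task-oriented actions (a_T) and communication-oriented
# actions (a_C). Names and policy are illustrative only.

@dataclass
class TaskAction:               # a_T: acts on the environment
    name: str                   # e.g. "grasp cup"

@dataclass
class CommunicationAction:      # a_C: communicates internal state to the human
    message: str                # e.g. "I intend to pick up the cup"

Action = Union[TaskAction, CommunicationAction]

def policy(task_plan, internal_state):
    """Interleave communication actions with task actions so the learning
    process stays transparent to the observing human."""
    actions = []
    for step in task_plan:
        actions.append(CommunicationAction(f"next goal: {step}"))  # a_C
        actions.append(TaskAction(step))                           # a_T
    if internal_state.get("uncertain"):
        actions.append(CommunicationAction("I am unsure about this step"))
    return actions

print(policy(["move_to shelf", "grasp cup"], {"uncertain": True}))
```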
“…As mentioned by Broekens and Chetouani [11], most computational approaches for social agents consider a primary task, such as learning to pick an object, and explainability arises as a secondary task, addressed by communicating the agent's internal states, intentions, or future goals. Existing works distinguish the nature of actions performed by the agent, such as task-oriented actions a_T and communication-oriented actions a_C.…”
Section: Explainability Mechanisms (mentioning)
confidence: 99%
“…In RL, theoretical links between (task) learning schemes and emotional theories could be established. Broekens and Chetouani [11] investigated how temporal difference learning [70] could be employed to develop an emotionally expressive learning robot that is capable of generating explainable behaviors via emotions.…”
Section: Explainability Mechanisms (mentioning)
confidence: 99%
“…However, developing computational models that are able to adapt on-line to non-verbal cues is a current challenge in machine learning. The few attempts to exploit non-verbal cues for adaptation and learning require the prior definition of a communication protocol (i.e., the meaning of the non-verbal cues) between the human and the machine (Broekens and Chetouani, 2019). Training adaptive machine learning systems with non-verbal cues is facilitated by the prior categorization of a limited set of discrete signals, such as a pointing or stop hand gesture (Najar et al., 2016).…”
Section: Open Questions (mentioning)
confidence: 99%
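The "prior definition of a communication protocol" noted in this last statement can be sketched as a fixed lookup from a small set of pre-categorized non-verbal cues (such as a pointing or stop gesture) to learning signals an interactive RL agent can consume. The cue names, feedback values, and the interpret_cue helper are hypothetical illustrations, not an API from the cited works.

```python
from typing import Optional

# Hypothetical sketch of a pre-defined communication protocol: a small set of
# discrete, pre-categorized non-verbal cues is mapped to learning signals.
# Cue names and values are illustrative only.

CUE_PROTOCOL = {
    "pointing":   {"type": "guidance", "value": None},   # suggests a target object
    "stop":       {"type": "feedback", "value": -1.0},   # negative feedback on the last action
    "thumbs_up":  {"type": "feedback", "value": +1.0},   # positive feedback on the last action
    "head_shake": {"type": "feedback", "value": -0.5},   # mild disapproval
}

def interpret_cue(cue: str, pointed_target: Optional[str] = None):
    """Translate a recognized discrete cue into a learning signal.

    The protocol must be agreed on beforehand: in this sketch the agent
    cannot learn the meaning of an unknown cue online.
    """
    if cue not in CUE_PROTOCOL:
        raise ValueError(f"cue '{cue}' is not part of the agreed protocol")
    entry = dict(CUE_PROTOCOL[cue])
    if entry["type"] == "guidance":
        entry["value"] = pointed_target   # e.g. the object the human points at
    return entry

print(interpret_cue("stop"))                 # {'type': 'feedback', 'value': -1.0}
print(interpret_cue("pointing", "red_cup"))  # {'type': 'guidance', 'value': 'red_cup'}
```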