Proceedings of the 5th International Conference on Human Agent Interaction 2017
DOI: 10.1145/3125739.3125746

Autonomous Self-Explanation of Behavior for Interactive Reinforcement Learning Agents

Abstract: In cooperation, workers must know how their co-workers behave. However, an agent's policy, embedded in a statistical machine learning model, is hard to understand, and comprehending it requires considerable time and knowledge. It is therefore difficult for people to predict the behavior of machine learning robots, which makes Human-Robot Cooperation challenging. In this paper, we propose Instruction-based Behavior Explanation (IBE), a method to explain an autonomous agent's future behavior. In IBE, an agent can a…
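The abstract is truncated, but the core idea it names can be sketched. Below is a minimal, hypothetical illustration of IBE: instruction phrases given by a human during interactive RL are reused as the vocabulary for explaining the agent's predicted behavior. Every class, method, and signal name here is an assumption for illustration, not the authors' implementation.

```python
import numpy as np

class IBEExplainer:
    def __init__(self):
        # Maps an instruction phrase to the mean per-step state change
        # observed while the human was giving that instruction.
        self.signals = {}

    def record_instruction(self, phrase, state_changes):
        """Associate a human instruction with the state changes seen under it."""
        self.signals[phrase] = np.mean(state_changes, axis=0)

    def explain(self, policy, env_model, state, horizon=5):
        """Simulate the policy forward and return the instruction phrase
        whose associated signal best matches the predicted behavior."""
        s = state
        for _ in range(horizon):
            s = env_model.step(s, policy(s))          # predicted successor state
        change = (np.asarray(s) - np.asarray(state)) / horizon
        return min(self.signals,
                   key=lambda p: np.linalg.norm(self.signals[p] - change))

# Toy usage: a 1-D world where the policy always moves right.
class LineWorld:
    def step(self, s, a):
        return s + a

ibe = IBEExplainer()
ibe.record_instruction("move right", [1.0, 0.9])
ibe.record_instruction("stay", [0.0, 0.1])
print(ibe.explain(lambda s: 1, LineWorld(), state=0.0))  # -> "move right"
```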

Cited by 26 publications (22 citation statements)
References 10 publications
“…Policy explanations in human-agent interaction settings have been used to achieve transparency (Hayes and Shah 2017) and to provide summaries of policies (Amir and Amir 2018). Explanation in reinforcement learning has been explored, using interactive RL to generate explanations from a human's instructions (Fukuchi et al. 2017) and to provide contrastive explanations (van der Waa et al. 2018). Soft decision trees have been used to generate more interpretable policies (Coppens et al. 2019), and reward decomposition has been utilized to provide minimum sufficient explanations in RL (Juozapaitis et al. 2019).…”
Section: Related Work
confidence: 99%
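The reward-decomposition approach cited in this snippet (Juozapaitis et al. 2019) is easy to illustrate: the Q-function is maintained as a sum of per-reward-component Q-values, and an action preference is explained by each component's contribution to the advantage. The component names and values below are toy assumptions, not anything from the cited paper.

```python
components = ["progress", "safety", "energy"]

# Q_c[c] maps (state, action) -> value for reward component c.
# A toy table for one state with actions 0 and 1.
Q_c = {
    "progress": {(0, 0): 1.0, (0, 1): 2.5},
    "safety":   {(0, 0): 0.8, (0, 1): -0.3},
    "energy":   {(0, 0): -0.1, (0, 1): -0.2},
}

def q(state, action):
    # The total Q-value is the sum of the component Q-values.
    return sum(Q_c[c][(state, action)] for c in components)

def explain_preference(state, chosen, alternative):
    # Per-component reward-difference explanation: which components
    # favor the chosen action over the alternative, and by how much.
    return {c: Q_c[c][(state, chosen)] - Q_c[c][(state, alternative)]
            for c in components}

print(explain_preference(0, chosen=1, alternative=0))
# -> progress favors action 1 (+1.5), safety opposes it (about -1.1),
#    energy is nearly neutral; the agent acts mainly for progress.
```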
“…The instructions are then re-used by the system to generate natural-language explanations. Further work by Fukuchi et al. (2017b) then expanded on this to a situation where an agent dynamically changed policy.…”
Section: Discussion
confidence: 99%
“…The instructions are then re-used by the system to generate natural-language explanations. Further work by Fukuchi et al. (2017b) then expanded on this to a situation where an agent dynamically changed policy. Hayes and Shah (2017) used code annotations to give human-readable labels to functions representing actions and variables representing state space, and then used a separate Markov Decision Process (MDP) to construct a model of the domain and policy of the control software itself.…”
Section: Policy Summarization
confidence: 99%
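The annotation scheme this snippet attributes to Hayes and Shah (2017) can be sketched roughly as follows: action functions and state variables carry human-readable labels, which a summarizer later stitches into sentences. The decorator, domain, and policy representation below are assumptions for illustration, not the original system.

```python
LABELS = {}

def labeled(label):
    """Attach a human-readable label to an action function."""
    def wrap(fn):
        LABELS[fn.__name__] = label
        return fn
    return wrap

@labeled("pick up the part")
def grasp(robot):
    robot.close_gripper()

@labeled("move to the assembly station")
def goto_station(robot):
    robot.navigate("station")

# State predicates with readable labels, used when summarizing the policy.
STATE_LABELS = {"part_visible": "I can see a part",
                "holding_part": "I am holding a part"}

def describe(policy):
    """Summarize a {state_predicate: action_fn} policy in natural language."""
    return [f"When {STATE_LABELS[s]}, I {LABELS[a.__name__]}."
            for s, a in policy.items()]

print(describe({"part_visible": grasp, "holding_part": goto_station}))
# -> ['When I can see a part, I pick up the part.',
#     'When I am holding a part, I move to the assembly station.']
```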
“…One of the targets in the BToM research field is modeling a human observer who attributes mental states to an actor while watching the actor's behavior. In a typical problem setting, an observer can observe the whole environment, including the actor in the environment, and attributes mental states such as the actor's belief b_t (Eq. 9), where o_t is an observation that the observer infers the actor observes at time t. The probability of each variable can be calculated using a forward algorithm [16]. The PublicSelf model is based on the BToM concept.…”
Section: Bayesian Modeling of Theory of Mind
confidence: 99%
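The belief update this snippet describes reduces, in the standard hidden-Markov-model setting, to the forward algorithm: predict the hidden state with the transition model, then reweight by the observation likelihood. A minimal sketch with toy matrices (the values are assumptions, not from the paper):

```python
import numpy as np

T = np.array([[0.9, 0.1],      # T[s, s']: state transition probabilities
              [0.2, 0.8]])
O = np.array([[0.7, 0.3],      # O[s, o]: P(observation o | state s)
              [0.4, 0.6]])

def forward(observations, prior=np.array([0.5, 0.5])):
    """Return b_t = P(state_t | o_1..o_t) via the forward algorithm."""
    b = prior
    for o in observations:
        b = O[:, o] * (T.T @ b)   # predict with T, then weight by likelihood
        b /= b.sum()              # normalize to a distribution
    return b

print(forward([0, 0, 1]))  # belief over the two states after three observations
```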