2019
DOI: 10.48550/arxiv.1909.12268
Preprint
Relationship Explainable Multi-objective Reinforcement Learning with Semantic Explainability Generation

Huixin Zhan,
Yongcan Cao

Abstract: Solving multi-objective optimization problems is important in various applications where users are interested in obtaining optimal policies subject to multiple, yet often conflicting objectives. A typical approach to obtain optimal policies is to first construct a loss function that is based on the scalarization of individual objectives, and then find the optimal policy that minimizes the loss. However, optimizing the scalarized (and weighted) loss does not necessarily provide guarantee of high performance on …

Cited by 2 publications (2 citation statements). References 23 publications (28 reference statements).
“…As well as a growing interest in safe AI, recent years have also seen an increasing focus on the issues of explainability and interpretability of autonomous systems, as these factors are important for building trust with human users, and in ensuring transparency and lack of bias. It has been argued that a reward which has been decomposed from a scalar into its component terms provides benefits from the perspective of explaining decisions [72], and so several recent papers have explored multi-objective approaches to explainable and interpretable RL agents [26,28,29,111,211].…”
Section: Human-aligned Agents
confidence: 99%
“…As well as a growing interest in safe AI, recent years have also seen an increasing focus on the issues of explainability and interpretability of autonomous systems, as these factors are important for building trust with human users, and in ensuring transparency and lack of bias. It has been argued that a reward which has been decomposed from a scalar into its component terms provides benefits from the perspective of explaining decisions [Juozapaitis et al., 2019], and so several recent papers have explored multi-objective approaches to explainable and interpretable RL agents [Noothigattu et al., 2018, Zhan and Cao, 2019, Cruz et al., 2019].…”
Section: Human-aligned Agents
confidence: 99%