2020
DOI: 10.1007/s43681-020-00001-8
Representation, justification, and explanation in a value-driven agent: an argumentation-based approach

Abstract: Ethical and explainable artificial intelligence is an interdisciplinary research area involving computer science, philosophy, logic, and social sciences, etc. For an ethical autonomous system, the ability to justify and explain its decision-making is a crucial aspect of transparency and trustworthiness. This paper takes a Value-Driven Agent (VDA) as an example, explicitly representing implicit knowledge of a machine learning-based autonomous agent and using this formalism to justify and explain the decisions o…

Cited by 16 publications (14 citation statements)
References 26 publications
“…Social relations and Argumentation are also the topic of Liao et al . (2018), for action selection of a robotic platform.…”
Section: Argumentation and Explainability
Citation type: mentioning; confidence: 99%
“…The "human-in-the-loop" approach leverages on human feedback during the training process to obtain more accurate classifiers [46]. A lot of work has also been done on argumentation and dialog games [6,11,51,53] but the focus in these areas is generally the logical structure of the framework to express and to relate arguments or the protocol to exchange arguments. Closer to the notion of justification, [43] relies on "debates" between two competing algorithms exchanging arguments and counterarguments to convince a human user that their classification is correct.…”
Section: Related Work
Citation type: mentioning; confidence: 99%
“…The breadth of different AMA design approaches reported in the literature reveals a lack of consensus among scholars working on the Machine Ethics project and raises questions about whether it is possible to develop an objective validation of AMAs that avoids designer bias and ensures explainability [22,42,53,64].…”
Section: About Design of AMAs
Citation type: mentioning; confidence: 99%