Proceedings of the 1st Workshop on Explainable Computational Intelligence (XCI 2017)
DOI: 10.18653/v1/w17-3703
Requirements for Conceptual Representations of Explanations and How Reasoning Systems Can Serve Them

Abstract: Explanations of solutions produced by reasoning systems in ever growing complexity become increasingly interesting, which is particularly challenging in view of fundamental differences between human and machine representation and problem-solving methods. In this paper, we formulate requirements for conceptual representations that are adequate for producing human-oriented explanations, and we discuss how some reasoning mechanisms can serve them or can possibly be adapted to do so. This examination is intended t…

Cited by 2 publications (4 citation statements) · References 6 publications
“…Explaining the known gold answers for common-sense QA is an important research problem and is far from being solved (Rajani et al, 2019). Two major hurdles in solving this problem include (i) lack of any desiderata for what constitutes an explanation (Horacek, 2017) and (ii) unavailability of QA datasets comprising high quality human-annotated explanations.…”
Section: CoS Explanation (Rajani et al., 2019)
confidence: 99%
“…For other QA tasks, such as common-sense QA, reading comprehension QA (RCQA), visual QA (VQA), grounding the definition of explanation is not so obvious (Horacek, 2017) and hence, they lack labeled data as well. In the case of RCQA and VQA (Ghosh et al, 2018), there have been attempts to explain the predicted answers.…”
Section: Free Flow Explanation
confidence: 99%
“…In contrast to Machine Ethics, Machine Explainability aims at equipping complex and autonomous systems with means to make their decisions understandable to different groups of addressees (cf. [1,9,25,26,31]), enabling a sufficient degree of transparency and perspicuity for these systems. Doing so becomes more and more urgent: for instance, the software doping cases that surfaced in the context of the VW diesel emissions scandal made obvious that the behavior of complex systems can be very hard — if not practically impossible — to comprehend even for experts (cf.…
Section: Introduction
confidence: 99%