2018 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE) 2018
DOI: 10.1109/fuzz-ieee.2018.8491538
Learning Fuzzy Relations and Properties for Explainable Artificial Intelligence

Abstract: The goal of explainable artificial intelligence is to solve problems in a way that lets humans understand how the solution is reached. However, few approaches have been proposed so far, and some of them place more emphasis on interpretability than on explainability. In this paper, we propose an approach based on learning fuzzy relations and fuzzy properties. We extract frequent relations from a dataset to generate an explained decision. Our approach can deal with different problems, such as classification or annotation…

Cited by 16 publications (7 citation statements) · References 26 publications
“…Four methods based on fuzzy reasoning to generate interpretable sets of rules that show the dependencies between inputs and outputs were presented in [107][108][109][110]. A multiobjective fuzzy Genetics-Based Machine Learning (GBML) algorithm [107] is implemented in the framework of evolutionary multiobjective optimisation (EMO) and consists of a hybrid version of the Michigan and Pittsburgh approaches.…”
Section: Model-specific XAI Methods Based On Neural Network
confidence: 99%
“…Finally, it improves interpretability by using regularisation. The fourth method presented in [109] generates fuzzy rules by starting from a set of relations and properties, selected by an expert, of an input dataset. It then extracts the most relevant ones by employing a frequent itemset mining algorithm.…”
Section: Model-specific XAI Methods Based On Neural Network
confidence: 99%
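The citation statement above describes the paper's pipeline: start from expert-selected relations and properties, then keep the most relevant ones via frequent itemset mining. As an illustration of that second step only, here is a minimal Apriori-style sketch in Python; the relation names in the sample data are hypothetical, and this is not the authors' implementation.

```python
from itertools import combinations

def frequent_itemsets(transactions, min_support):
    """Apriori-style mining: return every itemset whose support
    (fraction of transactions containing it) is >= min_support."""
    n = len(transactions)
    items = {i for t in transactions for i in t}
    # Level 1: frequent single items.
    current = {frozenset([i]) for i in items
               if sum(i in t for t in transactions) / n >= min_support}
    result = {}
    k = 1
    while current:
        for s in current:
            result[s] = sum(s <= t for t in transactions) / n
        # Candidates of size k+1 come from joining frequent k-itemsets,
        # relying on the property that subsets of frequent sets are frequent.
        candidates = {a | b for a in current for b in current
                      if len(a | b) == k + 1}
        current = {c for c in candidates
                   if sum(c <= t for t in transactions) / n >= min_support}
        k += 1
    return result

# Hypothetical "transactions": each object annotated with fuzzy
# relation/property labels (names are illustrative, not from the paper).
data = [frozenset(t) for t in [
    {"left_of(A,B)", "round(A)", "small(B)"},
    {"left_of(A,B)", "round(A)"},
    {"left_of(A,B)", "small(B)"},
    {"round(A)", "small(B)"},
]]
freq = frequent_itemsets(data, min_support=0.5)
```

With a support threshold of 0.5, the three single labels and the three label pairs survive, while the full triple (present in only one of four transactions) is pruned; the surviving itemsets are the candidates from which explained rules would be built.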
“…Argumentation-based approaches are believed to have higher explainability, as the notions of arguments and conflictuality are close to the way humans reason. Four methods based on fuzzy reasoning to generate interpretable sets of rules that clearly show the dependencies between inputs and outputs were presented in [280,281,282,283]. Both [280,283] examine the interpretability-accuracy tradeoff in fuzzy rule-based classifiers.…”
Section: Model-specific Methods For Explainability Related To Rule-ba...
confidence: 99%
“…Last, it improves interpretability by using regularization. The method presented in [282] generates fuzzy rules by starting from a set of relations and properties, selected by an expert, of an input dataset. It then extracts the most relevant ones by employing a frequent itemset mining algorithm.…”
Section: Model-specific Methods For Explainability Related To Rule-ba...
confidence: 99%
“…This major shortcoming in the interpretation of a CNN classification mechanism originates from the black-box nature of such networks. This subject has recently been addressed in several works [16][17][18][19][20][21][22][23][24][25][26][27]. Several visualization tools and libraries have been developed for explaining deep neural networks [19,21,22].…”
Section: Explainability
confidence: 99%