2021
DOI: 10.1007/978-3-030-71098-9_6
Design and Validation of an Explainable Fuzzy Beer Style Classifier

Cited by 5 publications (3 citation statements)
References 69 publications
“…A usual strategy to generate explanations for decisions made by a black-box machine learning model, such as a deep learning model, is to build a surrogate model based on more expressive machine learning algorithms, such as the aforementioned decision rules [10,14], decision trees [12,1], or linear models [13]. The surrogate model is trained on the same data set as the black-box model to be explained or on new data points classified by that same model.…”
Section: Contribution and Plan of This Paper (citation type: mentioning; confidence: 99%)
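The surrogate strategy described in this citation statement is straightforward to sketch: train an interpretable model on the black-box model's own predictions, so that it approximates, and thereby explains, the black-box decision function. Below is a minimal sketch assuming scikit-learn; the random-forest black box, the wine dataset, and the depth-3 tree are illustrative assumptions, not choices taken from the cited papers.

```python
# Minimal sketch of the surrogate-model strategy quoted above: an
# interpretable decision tree is trained to mimic a black-box
# classifier's predictions. All model and dataset choices are
# illustrative assumptions.
from sklearn.datasets import load_wine
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_wine(return_X_y=True)

# 1. Train the "black-box" model on the original labels.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# 2. Train the surrogate on the *black-box predictions*, not the true
#    labels, so the tree approximates the black-box decision function.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the surrogate agrees with the black box.
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"surrogate fidelity: {fidelity:.2%}")

# The shallow tree is human-readable and serves as a global explanation.
print(export_text(surrogate))
```

Training on `black_box.predict(X)` rather than on `y` is the key step: the surrogate's fidelity to the black box, not its accuracy on the true labels, is what makes it a valid explanation of that model.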
“…This can be general ("when do you do X") or specific to a given action. An example of a specific explanation is a natural language explanation for a classification in the ML space (Alonso et al, 2018) or the generation of a summary of "when do you do X?" type questions in natural language to explain the actions of an agent (Hayes & Shah, 2017).…”
Section: Interpretable/Explainable Decision-Making Processes of RL (citation type: mentioning; confidence: 99%)
“…We refer to examples that codify the decision process of the model as rules (Liu et al, 2018), as code blocks (Verma et al, 2018), or through natural language. Alonso et al (2018) shows an example of justifying classifications with a textual explanation of the choice made by a decision tree, which could in turn be transferred to RL applications. Other examples for providing an introspection into LLMs with textual explanations are Zini and Awad (2022) and Xu et al (2023).…”
Section: Approach: Explainability (citation type: mentioning; confidence: 99%)
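The kind of textual justification this statement attributes to Alonso et al. (2018) can be illustrated by rendering a decision tree's decision path as a sentence. The following is a minimal sketch under assumed choices (scikit-learn, the iris dataset, a simple English template); it is not the cited paper's actual method, which generates richer natural language explanations.

```python
# Minimal sketch of justifying a single classification with a textual
# explanation derived from a decision tree's decision path. The template
# wording and the dataset are illustrative assumptions.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

data = load_iris()
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(data.data, data.target)

def explain(sample):
    """Render the decision path for one sample as an English sentence."""
    node_indicator = tree.decision_path(sample.reshape(1, -1))
    clauses = []
    for node in node_indicator.indices:
        feat = tree.tree_.feature[node]
        if feat < 0:          # leaf node: no test to report
            continue
        threshold = tree.tree_.threshold[node]
        op = "<=" if sample[feat] <= threshold else ">"
        clauses.append(f"{data.feature_names[feat]} {op} {threshold:.2f}")
    label = data.target_names[tree.predict(sample.reshape(1, -1))[0]]
    return f"Classified as '{label}' because " + " and ".join(clauses) + "."

print(explain(data.data[0]))
# e.g. "Classified as 'setosa' because petal width (cm) <= 0.80."
```

Each internal node on the path contributes one clause, so the explanation stays as short as the tree is shallow, which is why depth-limited trees are a common vehicle for this style of justification.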