2021
DOI: 10.48550/arxiv.2105.10172
Preprint
Explainable Machine Learning with Prior Knowledge: An Overview

Katharina Beckh,
Sebastian Müller,
Matthias Jakobs
et al.

Abstract: This survey presents an overview of integrating prior knowledge into machine learning systems in order to improve explainability. The complexity of machine learning models has elicited research to make them more explainable. However, most explainability methods cannot provide insight beyond the given data, requiring additional information about the context. We propose to harness prior knowledge to improve upon the explanation capabilities of machine learning models. In this paper, we present a categorization o…

Cited by 7 publications (11 citation statements)
References 55 publications
“…We do not select highly correlated parameters such as the pore volume and surface area of different pore sizes because they do not change independently. Second, we implement the robust estimation of the parameters in Gaussian processes, utilizing the jointly robust prior function and marginal posterior mode estimation, and constructing the group automatic relevance determination (gARD) kernel in order to produce meaningful and accurate predictions by solving problems ordinary GPR methods face. The performance of PhysGPR is significantly improved in these ways compared to that of the original PhysGPR used on pristine carbon in our previous work.…”
Section: Introduction
confidence: 84%
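The gARD construction quoted above groups related descriptors under a shared lengthscale rather than giving every feature its own. A minimal sketch of such a kernel is below; the grouping, lengthscale values, and toy data are illustrative assumptions, not taken from the cited paper.

```python
# Minimal sketch of a group automatic relevance determination (gARD)
# RBF kernel: features within a group share one lengthscale.
import numpy as np

def gard_rbf_kernel(X1, X2, groups, lengthscales, variance=1.0):
    """Squared-exponential kernel with one lengthscale per feature group.

    X1: (n, d), X2: (m, d)
    groups: length-d sequence mapping each feature to a group index
    lengthscales: one positive lengthscale per group
    """
    # Scale each feature by its group's lengthscale, then compute
    # the standard RBF kernel on the scaled inputs.
    ls = np.asarray(lengthscales, dtype=float)[np.asarray(groups)]  # (d,)
    A, B = X1 / ls, X2 / ls
    sq = (A ** 2).sum(1)[:, None] + (B ** 2).sum(1)[None, :] - 2 * A @ B.T
    return variance * np.exp(-0.5 * np.clip(sq, 0.0, None))

# Toy usage: 4 features in 2 hypothetical groups
# (e.g., pore-structure descriptors vs. doping descriptors).
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 4))
K = gard_rbf_kernel(X, X, groups=[0, 0, 1, 1], lengthscales=[1.0, 2.5])
```

Tying correlated descriptors to a single lengthscale keeps the relevance estimate interpretable at the group level and reduces the number of hyperparameters to fit.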
“…While data-driven methods are able to make valuable predictions of supercapacitance performance, their pitfalls have also been well recognized, such as low robustness, challenges in interpretability, and the lack of reliable uncertainty assessment, especially in extrapolation beyond the training data. Integrating physics-based constraints and relations as prior knowledge into the ML models can significantly enhance the interpretability of ML methods. To overcome these pitfalls, we propose in this work a physics-informed Gaussian process regression (GPR) (PhysGPR) model for predicting the in operando capacitance of aqueous supercapacitors based on the properties of nitrogen- and/or oxygen-doped carbon electrodes.…”
Section: Introduction
confidence: 99%
“…The complexity of contemporary machine learning models poses challenges for human comprehension of the precise decision-making process during inference [18]. By harnessing the domain knowledge accumulated by scholars, it is possible to strengthen models' interpretability and robustness while reducing data requirements [19].…”
Section: Informed Machine Learning
confidence: 99%
“…Learning from explanations. Recent work has sought to collect datasets of human-annotated explanations, often in the form of binary rationales, in addition to class labels (DeYoung et al., 2019; Wiegreffe and Marasović, 2021), and to use these explanations as additional training signals to improve model performance and robustness, sometimes also known as feature-level feedback (Hase and Bansal, 2021; Beckh et al., 2021).…”
Section: Related Work
confidence: 99%
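Using rationales as an additional training signal, as the excerpt above describes, can be sketched as a loss that penalizes model weight on features a human rationale marked irrelevant. The specific loss form below is a simplified illustration of feature-level feedback, not the formulation of any cited paper.

```python
# Hedged sketch: classification loss plus a penalty on weights outside
# a human-provided rationale mask ("right for the right reasons").
import numpy as np

def loss_with_rationales(w, X, y, rationale_mask, lam=1.0):
    """Logistic loss + penalty on weights for non-rationale features.

    rationale_mask: 1 where a feature is human-marked relevant, else 0.
    """
    logits = X @ w
    p = 1.0 / (1.0 + np.exp(-logits))
    eps = 1e-12
    ce = -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))
    # Quadratic penalty on weight assigned to masked-out features.
    penalty = np.sum(((1 - rationale_mask) * w) ** 2)
    return ce + lam * penalty

# Toy comparison: weights concentrated on the rationale feature incur
# a smaller total loss than equally predictive weights on the masked-out one.
X = np.array([[1.0, 1.0], [-1.0, -1.0]])
y = np.array([1.0, 0.0])
mask = np.array([1.0, 0.0])   # only feature 0 is human-marked relevant
w_on = np.array([1.0, 0.0])   # uses the rationale feature
w_off = np.array([0.0, 1.0])  # uses the masked-out feature
```

Here both weight vectors fit the toy data equally well, so the rationale penalty alone steers learning toward the human-marked feature.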