Explainable and Interpretable Models in Computer Vision and Machine Learning (2018)
DOI: 10.1007/978-3-319-98131-4

Abstract: We introduce a new approach to functional causal modeling from observational data, called Causal Generative Neural Networks (CGNN). CGNN leverages the power of neural networks to learn a generative model of the joint distribution of the observed variables, by minimizing the Maximum Mean Discrepancy between generated and observed data. An approximate learning criterion is proposed to scale the computational cost of the approach to linear complexity in the number of observations. The performance of CGNN is studi…
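To make the abstract's training signal concrete, here is a minimal sketch of a Maximum Mean Discrepancy (MMD) loss between generated and observed samples: a quadratic-cost Gaussian-kernel estimator, plus a random-Fourier-feature variant whose cost is linear in the number of observations, which is one plausible reading of the abstract's "approximate learning criterion". All function names, the bandwidth value, and the feature construction are illustrative assumptions, not taken from the chapter itself.

import numpy as np

def gaussian_kernel(x, y, bandwidth=1.0):
    # Pairwise Gaussian kernel: k(x_i, y_j) = exp(-||x_i - y_j||^2 / (2 * bandwidth^2)).
    sq_dists = np.sum((x[:, None, :] - y[None, :, :]) ** 2, axis=-1)
    return np.exp(-sq_dists / (2.0 * bandwidth ** 2))

def mmd2(generated, observed, bandwidth=1.0):
    # Plug-in estimator of squared MMD; cost is quadratic in sample size.
    k_gg = gaussian_kernel(generated, generated, bandwidth)
    k_oo = gaussian_kernel(observed, observed, bandwidth)
    k_go = gaussian_kernel(generated, observed, bandwidth)
    return k_gg.mean() + k_oo.mean() - 2.0 * k_go.mean()

def mmd2_linear(generated, observed, bandwidth=1.0, num_features=128, seed=0):
    # Linear-cost approximation via random Fourier features: the squared MMD
    # becomes the squared distance between mean feature embeddings.
    # (Assumed stand-in for the paper's approximate criterion, not its exact form.)
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=1.0 / bandwidth, size=(generated.shape[1], num_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=num_features)
    def phi(z):
        return np.sqrt(2.0 / num_features) * np.cos(z @ w + b)
    diff = phi(generated).mean(axis=0) - phi(observed).mean(axis=0)
    return float(diff @ diff)

# Toy usage: samples from slightly shifted Gaussians give a small positive MMD.
rng = np.random.default_rng(1)
observed = rng.normal(0.0, 1.0, size=(500, 2))
generated = rng.normal(0.3, 1.0, size=(500, 2))
print(mmd2(generated, observed), mmd2_linear(generated, observed))

In a CGNN-style setup, the generated sample would come from a neural generator and either estimator would be minimized with respect to the generator's parameters, with an automatic-differentiation framework standing in for NumPy at that step.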

Cited by 58 publications (5 citation statements). References 24 publications.
“…Vendors who develop AI/ML talent assessment products and the companies that use them should strive to maximize benefits while mitigating risks to both job applicants (e.g., data privacy) and the organization (e.g., legal repercussions). Regarding data privacy, the General Data Protection Regulation (GDPR, 2016) in the European Union (EU) and European Economic Area (EEA) presents challenges by requiring companies to disclose the use of AI/ML to applicants and remain transparent about the data used to make selection decisions (Liem et al, 2018). Similar legislation is emerging in the United States, such as in Illinois, where employers must obtain applicants' consent to use AI/ML in their hiring processes (Bologna, 2019).…”
Section: Increased Power To Handle Large Quantities Of Data
confidence: 99%
“…Symbolic approaches, on the other hand, are more easily understood while being generally seen as less effective. Additionally, it has been demonstrated that the introduction of counterfactual explanations might aid the user in comprehending a model’s conclusion [90–92].…”
Section: Discussion
confidence: 99%
“…Many of these algorithms, however, are unable to articulate to human users why they made certain decisions and took certain actions. Explanations are necessary for users to comprehend, have faith in, and manage these new artificially intelligent partners in the crucial knowledge domains of defense, medicine, finance, and law, for example [74][75][76].…”
Section: The Role Of Explainable Artificial Intelligence In DL And ML…
confidence: 99%
“…Numerous research studies have been conducted on the explainability and interpretability of black-box models. We refer to [8][9][10] for general surveys and only briefly recapitulate the most relevant works for our study. In particular, we are interested in comparing works in two major categories: directly learning interpretable methods and post-hoc explanation methods.…”
Section: Related Work
confidence: 99%