2023
DOI: 10.48550/arxiv.2302.06975
Preprint
A Review of the Role of Causality in Developing Trustworthy AI Systems

Abstract: State-of-the-art AI models largely lack an understanding of the cause-effect relationship that governs human understanding of the real world. Consequently, these models do not generalize to unseen data, often produce unfair results, and are difficult to interpret. This has led to efforts to improve the trustworthiness aspects of AI models. Recently, causal modeling and inference methods have emerged as powerful tools. This review aims to provide the reader with an overview of causal methods that have been deve…

Cited by 2 publications (4 citation statements)
References 186 publications (361 reference statements)
“…In diverse applications, neurosymbolic AI and GNNs play a crucial role. In drug discovery, they combine expert knowledge with GNN-driven insights to accelerate the identification of potential drug candidates [27]. In healthcare, these models enhance the interpretability of medical decision-making by providing explanations for diagnoses through the integration of symbolic medical knowledge with learned patterns from patient data [28].…”
Section: Neurosymbolic AI and GNNs
confidence: 99%
“…Posthoc: suitable to explain an already deployed model [11–14, 23, 32, 33, 48–50]. Antehoc: suitable when an application specifies the need to build models that have interpretability built into their design [15–17, 22, 38, 51–54]. Explanation Scope…”
Section: Incorporation Stage
confidence: 99%
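The posthoc/antehoc distinction quoted above can be made concrete with a minimal numpy sketch (not from the reviewed paper; model and feature setup are illustrative assumptions): an ordinary least-squares model whose coefficients are directly readable is "antehoc" interpretable, while permutation importance computed on the already-fitted model is a "posthoc" explanation.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 5000, 3
X = rng.normal(size=(n, d))
# True data-generating process uses only feature 0; features 1-2 are irrelevant.
y = 3.0 * X[:, 0] + 0.1 * rng.normal(size=n)

# "Antehoc" interpretable model: ordinary least squares, whose learned
# coefficients w are themselves the explanation.
w, *_ = np.linalg.lstsq(X, y, rcond=None)

def mse(w, X, y):
    return float(np.mean((X @ w - y) ** 2))

# "Posthoc" explanation of the fitted model: permutation importance,
# i.e. how much the error grows when one feature column is shuffled.
base = mse(w, X, y)
importance = []
for j in range(d):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    importance.append(mse(w, Xp, y) - base)

print([round(v, 2) for v in importance])  # feature 0 dominates
```

The same fitted model admits both views; the quoted table's point is about when each is appropriate, not which is computed.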
“…Many such cause-effect relationships exist in nature. It is of interest to the research community to see if the machine learning models capture such causal relationships [11–14] and design models which work based on causal relationships so that the spurious correlations [34] are not picked up to arrive at the prediction [15–17].…”
Section: Causal Explanations
confidence: 99%
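The spurious-correlation concern in this citation statement can be illustrated with a small simulation (a sketch of the general idea, not code from the paper): a confounder Z drives both X and Y, so X and Y are strongly correlated observationally even though X has no causal effect on Y, and the correlation vanishes under an intervention that sets X independently of Z.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Confounder Z drives both X and Y; X does NOT cause Y.
z = rng.normal(size=n)
x = z + 0.1 * rng.normal(size=n)
y = z + 0.1 * rng.normal(size=n)

# Observationally, X and Y look strongly related (spurious correlation).
obs_corr = np.corrcoef(x, y)[0, 1]

# Simulate the intervention do(X): assign X independently of Z.
# Y is generated exactly as before, since X has no causal effect on it.
x_do = rng.normal(size=n)
y_do = z + 0.1 * rng.normal(size=n)
int_corr = np.corrcoef(x_do, y_do)[0, 1]

print(f"observational corr(X, Y): {obs_corr:.2f}")   # near 1
print(f"interventional corr(X, Y): {int_corr:.2f}")  # near 0
```

A predictor trained on the observational data would happily use X to predict Y; a model constrained to causal relationships would not, which is precisely the distinction the quoted passage draws.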