2023
DOI: 10.1109/jbhi.2023.3236722

Characterization of Synthetic Health Data Using Rule-Based Artificial Intelligence Models

Abstract: The aim of this study is to apply and characterize eXplainable AI (XAI) to assess the quality of synthetic health data generated using a data augmentation algorithm. In this exploratory study, several synthetic datasets are generated using various configurations of a conditional Generative Adversarial Network (GAN) from a set of 156 observations related to adult hearing screening. A rule-based native XAI algorithm, the Logic Learning Machine, is used in combination with conventional utility metrics. The classi…
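As a rough illustration of the workflow the abstract describes, the sketch below generates synthetic tabular records with an off-the-shelf conditional GAN and scores them with a train-on-synthetic, test-on-real utility check. The `ctgan` package and the decision tree (standing in for the rule-based Logic Learning Machine), as well as the file name and column names, are assumptions for illustration, not details taken from the paper.

```python
# Rough sketch: conditional-GAN augmentation plus a train-on-synthetic,
# test-on-real (TSTR) utility check. The `ctgan` package and the decision
# tree stand in for the paper's conditional GAN and Logic Learning Machine;
# the file name and column names are hypothetical.
import pandas as pd
from ctgan import CTGAN
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

real = pd.read_csv("hearing_screening.csv")   # hypothetical tabular dataset (~156 rows)
target = "screening_outcome"                  # hypothetical binary label
# Features assumed numeric (e.g. age, speech-in-noise score); only the label is categorical.

# Fit the conditional GAN on the real records and draw a larger synthetic sample.
gan = CTGAN(epochs=300)
gan.fit(real, discrete_columns=[target])
synthetic = gan.sample(1000)

# TSTR: fit a simple, interpretable model on synthetic data and test it on real data.
clf = DecisionTreeClassifier(max_depth=4)
clf.fit(synthetic.drop(columns=target), synthetic[target])
pred = clf.predict(real.drop(columns=target))
print("TSTR accuracy:", accuracy_score(real[target], pred))
```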

Cited by 5 publications (2 citation statements)
References: 41 publications
“…Explainable AI (XAI) techniques play a crucial role in ensuring the interpretability and transparency of AI systems, particularly when dealing with synthetic data. XAI methods enable users to understand the underlying mechanisms and decision-making processes of AI models, providing insights into the input-output relationships and the presence of biases [46]. In the context of healthcare, XAI techniques such as SHAP (SHapley Additive exPlanations) have been used to interpret the predictions made by machine learning models, ensuring transparency and accountability in decision-making [47].…”
Section: Potential Pitfalls: Bias and Interpretability
Citation type: mentioning (confidence: 99%)
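To make the SHAP usage mentioned in the statement above concrete, here is a minimal sketch of attributing a tabular health classifier's predictions to its input features with SHAP. The file name, columns, and the random-forest model are illustrative assumptions, not anything taken from the cited works.

```python
# Minimal, hypothetical SHAP sketch for a tabular health classifier.
# File name, columns, and model choice are illustrative assumptions.
import pandas as pd
import shap
from sklearn.ensemble import RandomForestClassifier

data = pd.read_csv("screening_features.csv")   # hypothetical numeric features + binary label
X, y = data.drop(columns="label"), data["label"]

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# SHAP values attribute each individual prediction to the input features,
# exposing the model's input-output relationships.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

shap.summary_plot(shap_values, X)              # global overview of feature influence
```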
“…XAI methods allow users to scrutinize and understand the decisions made by AI systems, which is particularly crucial in domains where decision-making should be transparent, such as healthcare [47]. In the context of synthetic data, XAI techniques can help assess if the synthetic data maintains the desired input-output relationships similar to those found in real data [46]. By using XAI methods, it becomes possible to identify biases and assess the extent to which the synthetic data represents real-world scenarios.…”
Section: Potential Pitfalls: Bias and Interpretability
Citation type: mentioning (confidence: 99%)
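One way to operationalize the claim above, that XAI can check whether synthetic data preserves the real data's input-output relationships, is to contrast SHAP-based feature importances of two models, one trained on real data and one on synthetic data. The sketch below is an illustrative proxy under that assumption; the file names, target column, and gradient-boosting model are hypothetical, and this is not the cited paper's rule-based (Logic Learning Machine) procedure.

```python
# Illustrative proxy: compare mean |SHAP| feature importances of a model
# trained on real data with one trained on synthetic data. File names and
# the target column are hypothetical; a binary outcome is assumed.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier

def mean_abs_shap(df, target="outcome"):
    X, y = df.drop(columns=target), df[target]
    model = GradientBoostingClassifier(random_state=0).fit(X, y)
    shap_values = shap.TreeExplainer(model).shap_values(X)
    return np.abs(shap_values).mean(axis=0)    # one importance score per feature

real = pd.read_csv("real.csv")                 # hypothetical real dataset
synthetic = pd.read_csv("synthetic.csv")       # hypothetical synthetic counterpart

imp_real = mean_abs_shap(real)
imp_synthetic = mean_abs_shap(synthetic)

# High correlation suggests the synthetic data preserves the feature-outcome
# relationships; large gaps flag distortion or bias introduced by the generator.
print("Importance correlation:", np.corrcoef(imp_real, imp_synthetic)[0, 1])
```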