2023
DOI: 10.1016/j.nbt.2023.06.002
The promise of explainable deep learning for omics data analysis: Adding new discovery tools to AI


Cited by 12 publications (4 citation statements)
References 102 publications
“…Understanding genotype-phenotype relationships: AI can assist in unravelling the complex relationship between a plant’s genotype and its phenotype in response to environmental conditions. Especially xAI can identify genetic variants that contribute to these traits, particularly those that have non-linear interactions - something that GWAS cannot do ( Santorsola and Lescai, 2023 ). Feed-forward NNs go beyond association testing and can use several individuals with many SNPs to predict traits with an acceptable performance ( Sharma et al., 2020 ).…”
Section: Accelerating Plant Breeding Processes With Explainable AI
confidence: 99%
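The statement above notes that feed-forward NNs can predict traits from the SNP genotypes of many individuals, capturing non-linear (epistatic) effects that single-marker GWAS misses. A minimal sketch of this idea, using simulated 0/1/2 genotype codes and a hypothetical trait with one epistatic interaction; all data, network sizes, and hyperparameters here are illustrative assumptions, not taken from the cited works:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: 200 individuals x 50 SNPs, genotypes coded 0/1/2.
n, p = 200, 50
X = rng.integers(0, 3, size=(n, p)).astype(float)
# Simulated trait with additive effects plus one epistatic (non-linear)
# interaction between SNP 0 and SNP 1 -- the kind of signal GWAS misses.
y = 0.5 * X[:, 0] + 0.3 * X[:, 1] + 0.4 * X[:, 0] * X[:, 1] \
    + rng.normal(0.0, 0.1, n)

# One-hidden-layer feed-forward network, trained by full-batch gradient descent.
h = 16
W1 = rng.normal(0.0, 0.1, (p, h)); b1 = np.zeros(h)
W2 = rng.normal(0.0, 0.1, h);      b2 = 0.0

def forward(X):
    A = np.tanh(X @ W1 + b1)      # hidden activations
    return A, A @ W2 + b2         # predicted trait values

_, pred0 = forward(X)
mse_init = np.mean((pred0 - y) ** 2)

lr = 0.01
for _ in range(3000):
    A, pred = forward(X)
    err = pred - y
    # Backpropagated gradients of the (scaled) squared error.
    W2g = A.T @ err / n
    b2g = err.mean()
    dA = np.outer(err, W2) * (1.0 - A ** 2)   # tanh derivative
    W1g = X.T @ dA / n
    b1g = dA.mean(axis=0)
    W2 -= lr * W2g; b2 -= lr * b2g
    W1 -= lr * W1g; b1 -= lr * b1g

_, pred = forward(X)
mse_final = np.mean((pred - y) ** 2)
```

The network ingests all SNPs jointly rather than testing markers one at a time, so interaction terms like the simulated SNP0 x SNP1 effect can be fitted; in practice one would use held-out individuals to assess predictive performance.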
“…Moreover, a major aspect in which ML models differ from, and possibly surpass, classic GS models, is their higher explainability. Explainability, defined as the ability to explain how a model works and makes predictions even after it has been trained ( Santorsola and Lescai, 2023 ), can provide information on the genomic sequences that contribute to the observed phenotypic variations. This is relevant for breeding programs and for the overall understanding of biological processes ( Danilevicz et al., 2022 ).…”
Section: Innovative Approaches To Accelerate Varietal Selection In Gr...
confidence: 99%
“…b. For each feature x_j, compute the mean SHAP value over the CN samples as in equation (3), where N_CN represents the total number of CN samples in the dataset.…”
Section: Model Interpretation
confidence: 99%
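The quoted step averages SHAP values per feature over the control group, i.e. mean_SHAP_j = (1 / N_CN) * sum over CN samples i of phi_ij. A minimal sketch with simulated values; the matrix shape, the "CN"/"AD" labels, and the random seed are assumptions for illustration (in practice the matrix would come from a SHAP explainer applied to the trained model):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical SHAP value matrix: rows = samples, columns = features (phi_ij).
shap_values = rng.normal(size=(100, 5))
# Hypothetical group labels; CN is assumed to mean the control group.
labels = rng.choice(["CN", "AD"], size=100)

# Equation (3) as described: per-feature mean SHAP over the N_CN control samples.
cn_mask = labels == "CN"
n_cn = cn_mask.sum()                                  # N_CN
mean_shap_cn = shap_values[cn_mask].sum(axis=0) / n_cn  # shape: (n_features,)
```

The result is one value per feature, summarizing that feature's average contribution to the model's output for the control group; comparing it against the same average for the case group highlights features whose contributions differ between groups.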
“…One of the challenges faced by the application of AI approaches to multi-omic data is lack of interpretability. Complex ML models, such as Deep Neural Networks (DNNs), despite their unparalleled predictive power, are often considered "black box" models because their decision-making processes are not easily inspected by human investigators [3]. Existing literature has reported the use of different AI frameworks to uncover deep interrelationships between gene expression and AD neuropathologies [4,5].…”
Section: Introduction
confidence: 99%