2020
DOI: 10.1089/cmb.2019.0320

Explainable Deep Learning for Augmentation of Small RNA Expression Profiles

Abstract: The lack of well-structured metadata annotations complicates the re-usability and interpretation of the growing amount of publicly available RNA expression data. The machine learning-based prediction of metadata (data augmentation) can considerably improve the quality of expression data annotation. In this study, we systematically benchmark deep learning (DL) and random forest (RF)-based metadata augmentation of tissue, age, and sex using small RNA (sRNA) expression profiles. We use 4243 annotated sRNA-Seq sam…


Cited by 11 publications (5 citation statements). References 25 publications.
“…These studies try to resolve the issue by calculating the gradient of the input image with respect to the output label and highlighting the target pixel as a recognition target when a slight change in a specific input pixel causes a large change in the output label. However, a simple calculation of the gradient generates a noisy highlight, so some improved methods have been proposed for sharpening [54][55][56][57][58][59]. In addition, in the DeepSnap-DL approach, the performance improves as data size increases, and performance deterioration is observed with insufficient data size or the presence of noise.…”
Section: Results (mentioning)
confidence: 99%
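The excerpt above describes vanilla gradient saliency: scoring each input pixel by how strongly a small perturbation of it changes the output for the target class. A minimal sketch in PyTorch might look like the following, where `model`, `image` (a single input of shape (1, C, H, W)), and `target_class` are placeholders rather than anything taken from the cited work.

```python
import torch

def saliency_map(model, image, target_class):
    """Gradient of the target-class score with respect to the input pixels."""
    model.eval()
    image = image.clone().requires_grad_(True)   # track gradients on the input
    score = model(image)[0, target_class]        # scalar logit for the chosen class
    score.backward()                             # d(score) / d(pixel)
    # Pixels where a small change moves the score most get large values.
    return image.grad.abs().max(dim=1)[0]        # collapse channels -> (1, H, W) map
```

As the excerpt notes, raw input gradients tend to be noisy; one common sharpening strategy (e.g., SmoothGrad-style averaging of saliency maps over several noise-perturbed copies of the input) smooths the resulting highlight.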
“…Aiming to interpret models with biological meanings, we applied a published package named DeepLIFT, which was previously used in the biology field to interpret models [6, 15, 17, 18]. We selected all true positive sequences (genes correctly predicted as expressed) in three stages for interpretation.…”
Section: Results (mentioning)
confidence: 99%
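The excerpt cites the original DeepLIFT package; purely as an illustration, a roughly equivalent attribution call through Captum's DeepLift implementation could look like the sketch below. The trained PyTorch classifier, the batch of one-hot-encoded sequences `onehot_seqs` (assumed shape (N, 4, L)), the all-zero baseline, and the target class index are assumptions, not details from the cited study.

```python
import torch
from captum.attr import DeepLift

def deeplift_attributions(model, onehot_seqs, target=1):
    """Per-position contributions to the 'expressed' class prediction."""
    model.eval()
    explainer = DeepLift(model)
    baseline = torch.zeros_like(onehot_seqs)   # all-zero reference sequence
    return explainer.attribute(onehot_seqs, baselines=baseline, target=target)
```

Restricting the interpretation to true positives, as the excerpt describes, simply means selecting the correctly classified sequences before passing them to the attribution call.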
“…For static analysis (e.g., therapy recommendation based on a fixed set of features), fully connected neural networks are typically utilized for modeling and thus are the target to be interpreted. Commonly used interpretability methods include DeepLIFT (Fiosina et al, 2020), LRP (Li et al, 2018; Zihni et al, 2020), and so on. For time‐series analysis, besides being able to analyze which features are more important or relevant to the prediction among all features used (Yang et al, 2018), it is noteworthy that we can also analyze what temporal patterns are more influential to the final model decision (Mayampurath et al, 2019; Suresh et al, 2017).…”
Section: Methods for Interpretability in Healthcare (mentioning)
confidence: 99%
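For the static case the excerpt mentions (a fully connected network over a fixed feature set, interpreted with methods such as LRP), a minimal sketch of attribution-based feature ranking using Captum's LRP is given below. The toy two-layer network, the 20-feature input, and the synthetic data are illustrative assumptions only.

```python
import torch
import torch.nn as nn
from captum.attr import LRP

# Toy fully connected model over a fixed set of 20 features (illustrative).
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
model.eval()

x = torch.randn(128, 20)                        # 128 samples x 20 features (synthetic)
relevance = LRP(model).attribute(x, target=1)   # relevance per feature per sample
ranking = relevance.abs().mean(dim=0).argsort(descending=True)
print(ranking[:5])                              # five most influential feature indices
```

Averaging absolute relevance over samples and sorting gives the kind of global feature-importance view the excerpt refers to; per-sample relevance can instead be inspected for individual predictions.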