2022 · DOI: 10.3390/electronics11121917
An Effective Deep Learning-Based Architecture for Prediction of N7-Methylguanosine Sites in Health Systems

Abstract: N7-methylguanosine (m7G) is one of the most important epigenetic modifications found in rRNA, mRNA, and tRNA, and plays an important role in the regulation of gene expression. Owing to its significance, well-equipped traditional laboratory-based techniques have been used to identify N7-methylguanosine (m7G). However, these approaches proved to be time-consuming and costly. To move beyond these traditional approaches and predict N7-methylguanosine sites with high precision, the …

Cited by 1 publication (1 citation statement)
References: 63 publications
“…It consists of multiple convolutional units, whose parameters are optimized through backpropagation. As the activation function for the convolutional layer, the Rectified Linear Unit (ReLU) is used; it is widely adopted in deep learning architectures because it introduces non-linearity and alleviates the vanishing gradient problem [53]. After the convolutional layer, a group normalization layer is applied. Group normalization is an effective alternative to batch normalization, especially with small batch sizes, because it normalizes the activations within each group of channels, promoting stable and efficient training [54].…”
Mentioning · confidence: 99%
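The excerpt describes a common building block: a convolutional layer with ReLU activation followed by group normalization. Below is a minimal PyTorch sketch of such a block. The input encoding (one-hot RNA with 4 channels), channel counts, kernel size, and group count are illustrative assumptions, not hyperparameters taken from the paper.

```python
import torch
import torch.nn as nn

class ConvGroupNormBlock(nn.Module):
    """Sketch of the described block: Conv1d -> ReLU -> GroupNorm.

    All sizes below are assumptions for illustration; the paper's
    actual hyperparameters are not given in this excerpt.
    """

    def __init__(self, in_channels=4, out_channels=32, kernel_size=5, num_groups=8):
        super().__init__()
        # 1-D convolution over a one-hot encoded RNA sequence (A/C/G/U -> 4 channels);
        # its weights are the parameters optimized via backpropagation.
        self.conv = nn.Conv1d(in_channels, out_channels, kernel_size,
                              padding=kernel_size // 2)
        # ReLU introduces non-linearity and mitigates vanishing gradients.
        self.relu = nn.ReLU()
        # GroupNorm normalizes activations within groups of channels, so its
        # statistics do not depend on the batch dimension; this is why it is
        # more stable than BatchNorm when batch sizes are small.
        self.norm = nn.GroupNorm(num_groups, out_channels)

    def forward(self, x):
        return self.norm(self.relu(self.conv(x)))

# Usage: a batch of 2 hypothetical sequences of length 41.
x = torch.randn(2, 4, 41)
out = ConvGroupNormBlock()(x)
print(out.shape)  # torch.Size([2, 32, 41])
```

The key design point from the excerpt is the choice of GroupNorm over BatchNorm: because GroupNorm computes statistics per sample within channel groups rather than across the batch, its behavior is unchanged at batch size 1, which matters for the small-batch training regime mentioned above.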