2021
DOI: 10.1093/bib/bbab233
Identifying complex motifs in massive omics data with a variable-convolutional layer in deep neural network

Abstract: Motif identification is among the most common and essential computational tasks in bioinformatics and genomics. Here we propose a novel convolutional layer for deep neural networks, named the variable convolutional (vConv) layer, for effective motif identification in high-throughput omics data by adaptively learning kernel length from the data. Empirical evaluations on DNA–protein binding and DNase footprinting cases demonstrated that vConv-based networks have superior performance to their convolutional counterparts…
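To make the idea in the abstract concrete, below is a minimal sketch of a convolution whose effective kernel length is learned from data: each kernel is multiplied by a soft rectangular mask whose two boundary positions are trainable. This is an illustrative reconstruction, not the authors' published implementation; the class name `VConv1d`, the sigmoid mask form, and the `sharpness` parameter are assumptions, and the original paper should be consulted for the exact mask and its regularization.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VConv1d(nn.Module):
    """Hypothetical variable-length convolution: each kernel carries a
    trainable soft mask whose learned edges set its effective length."""

    def __init__(self, in_channels, out_channels, max_kernel_len, sharpness=10.0):
        super().__init__()
        self.weight = nn.Parameter(
            torch.randn(out_channels, in_channels, max_kernel_len) * 0.1)
        # One (left, right) boundary pair per kernel, initialized so the
        # mask covers the whole kernel; training can then shrink it.
        self.left = nn.Parameter(torch.zeros(out_channels, 1, 1))
        self.right = nn.Parameter(
            torch.full((out_channels, 1, 1), float(max_kernel_len - 1)))
        positions = torch.arange(max_kernel_len, dtype=torch.float32)
        self.register_buffer("positions", positions.view(1, 1, -1))
        self.sharpness = sharpness

    def forward(self, x):
        # Soft rectangular mask: ~1 between the learned boundaries, ~0 outside.
        mask = (torch.sigmoid(self.sharpness * (self.positions - self.left))
                * torch.sigmoid(self.sharpness * (self.right - self.positions)))
        return F.conv1d(x, self.weight * mask)

# Usage on one-hot DNA input shaped (batch, 4 channels, sequence length):
x = torch.randn(8, 4, 200)
layer = VConv1d(in_channels=4, out_channels=16, max_kernel_len=24)
print(layer(x).shape)  # torch.Size([8, 16, 177])
```

Because the boundaries enter only through differentiable sigmoids, the effective kernel length is optimized by ordinary backpropagation alongside the kernel weights, which is the property the abstract attributes to the vConv layer.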

Cited by 4 publications (1 citation statement) · References 47 publications
“…The indexes of the optimized objective function are data rate, signal-to-interference-plus-noise ratio (SINR), power consumption, and energy efficiency, and the optimized objective function is then assigned to a neural network for resource allocation [2]. It presents a new deep-neural-network convolution layer, the variable convolution (vConv) layer, which adaptively learns the kernel length from the data to perform motif recognition on high-throughput data sets [3]. It demonstrates the effectiveness of pretraining neural networks on different data sets and shows that, in many practical cases, the convolution layer can be replaced by a smaller fully connected layer with relatively small accuracy degradation [4].…”
Section: Introduction
confidence: 99%