2020
DOI: 10.1101/2020.07.28.224212
Preprint

Multinomial Convolutions for Joint Modeling of Sequence Motifs and Enhancer Activities

Abstract: Massively parallel reporter assays (MPRAs) have enabled the study of transcriptional regulatory mechanisms at an unprecedented scale and with high quantitative resolution. However, this realm lacks models that can discover sequence-specific signals de novo from the data and integrate them in a mechanistic way. We present MuSeAM (Multinomial CNNs for Sequence Activity Modeling), a convolutional neural network that overcomes this gap. MuSeAM utilizes multinomial convolutions that directly model sequence-specific…

Cited by 4 publications (6 citation statements)
References 59 publications
“…The first two layers are applied in parallel to both the forward and the reverse-complement strand of the input sequence, allowing the model to detect patterns on both DNA strands. The first layer is a multinomial convolution layer with ReLU activation 10 . Filters in this layer detect motifs and the convolution operation computes matching scores of the motifs (log likelihood ratios, see Methods) at each position of the input sequence.…”
Section: DeepBend: A Deep Convolutional Neural Network (CNN) Model of...
confidence: 99%
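The scoring scheme described in this excerpt can be sketched in a few lines of NumPy: a filter whose rows are multinomial distributions over A, C, G, T is scored against a background as a log-likelihood ratio at every position, on both the forward and the reverse-complement strand, with ReLU applied to the scores. Function names and the uniform background are illustrative assumptions, not taken from the DeepBend or MuSeAM code.

import numpy as np

BASES = "ACGT"
COMP = str.maketrans("ACGT", "TGCA")

def one_hot(seq):
    # Encode a DNA string as an (L, 4) one-hot matrix in A, C, G, T order.
    idx = {b: i for i, b in enumerate(BASES)}
    x = np.zeros((len(seq), 4))
    for i, b in enumerate(seq):
        x[i, idx[b]] = 1.0
    return x

def multinomial_scan(seq, motif_probs, background=0.25):
    # Slide a (k, 4) multinomial motif over the sequence and return
    # ReLU(log-likelihood-ratio) scores for the forward and reverse-complement strands.
    k = motif_probs.shape[0]
    llr = np.log(motif_probs / background)            # per-base log-likelihood ratios
    def scan(s):
        x = one_hot(s)
        scores = np.array([np.sum(x[i:i + k] * llr) for i in range(len(s) - k + 1)])
        return np.maximum(scores, 0.0)                # ReLU activation
    rc = seq.translate(COMP)[::-1]                    # reverse-complement strand
    return scan(seq), scan(rc)

# Toy 4-bp motif strongly preferring "TGAC" (columns are A, C, G, T):
motif = np.array([[0.05, 0.05, 0.05, 0.85],
                  [0.05, 0.05, 0.85, 0.05],
                  [0.85, 0.05, 0.05, 0.05],
                  [0.05, 0.85, 0.05, 0.05]])
fwd_scores, rev_scores = multinomial_scan("ATGACGTCAT", motif)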
“…Interpreting the DeepBend model is straightforward. Since each row of a first-layer filter is a multinomial distribution over the four nucleotides, these filters are directly interpretable as biophysical models of sequence motifs 10 . Regularising variance in the last layer separates out the relative spatial patterns of motifs significant for different ranges of bendability into different filters of the second convolution layer.…”
Section: Results
confidence: 99%
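Because each filter row is already a probability distribution, reading a motif out of the model needs no post-hoc attribution step: the filter can be reported directly as a position weight matrix together with its per-position information content. The row-wise softmax below is one plausible way to parameterize such filters and is shown only as a sketch, not necessarily the exact parameterization used in the cited papers.

import numpy as np

def filter_to_pwm(weights):
    # Map an unconstrained (k, 4) filter to per-row multinomial probabilities
    # via a row-wise softmax (an assumed parameterization, for illustration).
    w = weights - weights.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(w)
    return e / e.sum(axis=1, keepdims=True)

def information_content(pwm, background=0.25):
    # Per-position information content (bits), as used in sequence logos.
    return np.sum(pwm * np.log2(pwm / background + 1e-9), axis=1)

raw = np.random.randn(8, 4)        # stand-in for a learned 8x4 first-layer filter
pwm = filter_to_pwm(raw)           # each row sums to 1: a multinomial per position
bits = information_content(pwm)    # high-information positions mark the motif core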
“…The first layer is the custom 1D convolution from the MuSeAM model 10 that learns the multinomial distribution of motif patterns. In this multinomial convolutional layer, 256 filters of size 8×4 have been used with “same” padding and stride 1.…”
Section: Methods
confidence: 99%
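As a shape check for the configuration quoted above (256 filters spanning 8 bp over a 4-channel one-hot input, "same" padding, stride 1), a plain Keras Conv1D stand-in looks as follows; the sequence length and the use of a standard Conv1D in place of the custom multinomial convolution are assumptions made only for illustration.

import tensorflow as tf

SEQ_LEN = 200                                  # assumed input length, for illustration only
inputs = tf.keras.Input(shape=(SEQ_LEN, 4))    # one-hot DNA: (length, 4 nucleotide channels)
x = tf.keras.layers.Conv1D(
    filters=256,       # number of motif detectors
    kernel_size=8,     # each filter spans 8 bp x 4 channels
    strides=1,
    padding="same",    # output keeps the input length
    activation="relu",
)(inputs)
model = tf.keras.Model(inputs, x)
print(model.output_shape)                      # (None, 200, 256)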
“…In order to build CNN models that are both accurate and easy to interpret, we used the recently proposed MuSeAM model (Fig 1A). MuSeAM is a custom CNN model shown to be more interpretable than conventional CNNs for sequence analysis [20]. We used the model for the prediction of DNA binding specificity of EBNA2 and EBNA3 vTFs.…”
confidence: 99%