2021
DOI: 10.1101/2021.05.10.443415
Preprint
Improved protein contact prediction using dimensional hybrid residual networks and singularity enhanced loss function

Abstract: Deep residual learning has shown great success in protein contact prediction. In this study, a new deep residual learning-based protein contact prediction model was developed. Compared with previous models, a new type of residual block hybridizing 1D and 2D convolutions was designed to increase the effective receptive field of the residual network, and a new loss function emphasizing easily misclassified residue pairs was proposed to enhance model training. The developed protein contact prediction mod…


Cited by 4 publications
(10 citation statements)
References 34 publications
“…Seven independent models were trained separately, and the final prediction was the average of the predictions from the seven models. We used the singularity-enhanced loss function proposed in (Si and Yan, 2021) to calculate the training loss and optimized it with the AdamW optimizer with its default settings. During training, if the loss on the validation set did not drop within two epochs, the learning rate decayed to 0.1 of its original value (the initial learning rate was 0.001).…”
Section: Results
confidence: 99%
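The training recipe quoted above (a seven-model ensemble averaged at prediction time, and a learning rate that decays to 0.1× whenever the validation loss stalls for two epochs) can be sketched in plain Python. This is a minimal illustration of the schedule as described in the quote, not the authors' code; all function and variable names are hypothetical.

```python
def schedule_lr(val_losses, init_lr=1e-3, factor=0.1, patience=2):
    """Return the learning rate used at each epoch, given the history of
    validation losses: if the loss has not dropped for `patience` epochs,
    multiply the rate by `factor` (0.1, per the quoted description)."""
    lr = init_lr
    best = float("inf")
    stale = 0  # epochs since the validation loss last improved
    lrs = []
    for loss in val_losses:
        lrs.append(lr)
        if loss < best:
            best = loss
            stale = 0
        else:
            stale += 1
            if stale >= patience:
                lr *= factor  # decay to 0.1 of the current value
                stale = 0
    return lrs


def ensemble_predict(per_model_scores):
    """Average the per-pair contact scores of the independently trained models."""
    n = len(per_model_scores)
    return [sum(scores) / n for scores in zip(*per_model_scores)]
```

For example, a validation-loss history of `[1.0, 0.9, 0.95, 0.96, 0.8]` keeps the learning rate at 1e-3 for four epochs (the loss improves, then stalls once, then twice) and drops it to 1e-4 for the fifth. In practice this plateau behavior matches what framework schedulers such as PyTorch's `ReduceLROnPlateau` provide.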
“…The input features are then transformed by the dimensional hybrid residual network formed by dimensional hybrid residual blocks (1D2D blocks). A detailed description of the 1D2D block can be found in (Si and Yan, 2021). In this study, we increased the length of the 1D convolution kernels from 9 to 15, since we found that longer 1D kernels slightly improved the model performance.…”
Section: Overview of the Model of DRN-1D2D_inter
confidence: 99%
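A back-of-the-envelope check of why longer 1D kernels help: for a stack of stride-1, dilation-1 convolutions, each layer with kernel size k adds k − 1 to the receptive field along that axis. The block composition below (one 1D conv followed by one 3×3 2D conv) is an assumption for illustration, not the paper's exact architecture.

```python
def receptive_field_1d(kernel_sizes):
    """Receptive field along one axis for a stack of stride-1,
    dilation-1 convolutions: each kernel of size k adds k - 1."""
    rf = 1
    for k in kernel_sizes:
        rf += k - 1
    return rf


# Hypothetical 1D2D block: a length-15 1D conv along the sequence axis
# followed by a 3x3 2D conv. Along that axis the block spans 17 positions,
# versus 11 with the original length-9 kernel, and only 5 for two stacked
# 3x3 2D convs.
print(receptive_field_1d([15, 3]))  # 17
print(receptive_field_1d([9, 3]))   # 11
print(receptive_field_1d([3, 3]))   # 5
```

This arithmetic is consistent with the quoted motivation: lengthening the 1D kernel from 9 to 15 widens the effective receptive field per block without adding the parameters a comparably wide 2D kernel would require.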
“…Notably, DCA methods generally exhibit two major limitations. First, DCA-based methods have poor performance when the number of homologous sequences is lower than approximately 50 [21], [22]. Second, these methods extract only linear relationships between pairs of residues [21].…”
Section: Introduction
confidence: 99%