2001
DOI: 10.1016/s0014-5793(01)02910-6
Protein secondary structure: category assignment and predictability

Abstract: In the last decade, the prediction of protein secondary structure has been optimized using essentially one and the same assignment scheme, known as DSSP. We present here a different scheme, which is more predictable. This scheme directly predicts the hydrogen bonds that stabilize the secondary structures. Single-sequence prediction of the new three-category assignment gives an overall prediction improvement of 3.1% and 5.1% compared to the DSSP assignment and schemes where the helix category consists of K K-h…

Cited by 24 publications (3 citation statements)
References 19 publications
“…The architecture of our DeepCNF model is mainly determined by the following 3 factors (see Figure 2): (i) the number of hidden layers; (ii) the number of different neurons at each layer; and (iii) the window size at each layer. We fix the window size to 11 because the average length of an alpha helix is around eleven residues 58 and that of a beta strand is around six 59 . To show the relationship between the performance and the number of hidden layers, we trained four different DeepCNF models with 1, 3, 5, and 7 layers, respectively.…”
Section: Datasetmentioning
confidence: 99%
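The fixed window of 11 residues quoted above means each residue's prediction sees 5 residues of context on either side. A minimal sketch of that per-residue windowing, with an illustrative sequence and a hypothetical `X` padding token (neither taken from the cited DeepCNF paper), might look like:

```python
# Sketch of per-residue context windows with window size 11, as used by
# window-based secondary structure predictors. The sequence, the pad
# character "X", and the function name are illustrative assumptions.

def residue_windows(sequence, window=11, pad="X"):
    """Return one length-`window` context string centered on each residue."""
    half = window // 2
    padded = pad * half + sequence + pad * half
    return [padded[i:i + window] for i in range(len(sequence))]

windows = residue_windows("MKTAYIAKQR")
print(len(windows))   # one window per residue -> 10
print(windows[0])     # first residue, left-padded -> XXXXXMKTAYI
```

With `window=11` the central residue always sits at index 5 of its window, so an entire alpha helix of average length fits inside a single window.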
“…For the CNN-based prediction model, we use five CNN layers (Wang et al, 2016). Since the average length of an alpha helix is around 11 residues (Andersen et al, 2001) and that of a beta strand is around 6 (Penel et al, 2003), we fix the window size to 11. For the LSTM-based networks, we apply two stacked bidirectional LSTM neural networks (Sønderby and Winther, 2014) and an FC layer.…”
Section: Prediction Networkmentioning
confidence: 99%
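The bidirectional idea in the quoted architecture is that each position's representation combines a left-to-right scan with a right-to-left scan. A toy sketch of that data flow, using a running sum as a stand-in for a real LSTM cell (purely an illustrative assumption, not the cited model), is:

```python
# Hedged sketch of a bidirectional scan: each position pairs the state of
# a forward pass (left context) with the state of a backward pass (right
# context). The cumulative sum is a toy stand-in for an LSTM cell.

def bidirectional_scan(xs):
    fwd, state = [], 0.0
    for x in xs:                 # left-to-right pass
        state += x
        fwd.append(state)
    bwd, state = [], 0.0
    for x in reversed(xs):       # right-to-left pass
        state += x
        bwd.append(state)
    bwd.reverse()                # realign backward states with positions
    return list(zip(fwd, bwd))   # per-position (forward, backward) pair

print(bidirectional_scan([1.0, 2.0, 3.0]))
# -> [(1.0, 6.0), (3.0, 5.0), (6.0, 3.0)]
```

In a real BiLSTM the two state vectors at each position are concatenated and passed to the next layer (here, the final fully connected layer).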
“…It plays an important role in constructing sequence 4 and structure 5 alignments. Accurate and consistent assignment of secondary structure defines the quality of a training set for secondary structure prediction algorithms, and it is necessary to develop such algorithms with maximum predictive power 6 …”
Section: Introductionmentioning
confidence: 99%