2018
DOI: 10.1093/bioinformatics/bty760

Comparative analysis of methods for evaluation of protein models against native structures

Abstract: Supplementary data are available at Bioinformatics online.

Cited by 40 publications (52 citation statements)
References 25 publications
“…This is the approach generally taken in CASP assessment, and the ability of the group to rank their models forms an implicit part of the ranking score. Any ranking score that assigns comparable weights to a combination of metrics measuring global fold, local fold, and estimated model accuracy is likely to lead to a similar overall ranking, as the metrics within these general categories tend to be highly correlated to one another. Because the ASE accuracy self‐estimate score measures an orthogonal characteristic of the models (and to assess the possibility that a good ASE score could be attained by assigning large errors to poor models), we also tested the effect on ranking of excluding the ASE measure.…”
Section: Results (supporting)
confidence: 80%
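The point made in the quote — that global-fold, local-fold, and accuracy-estimate metrics are strongly rank-correlated, so any ranking score that weights them comparably tends to produce the same ordering — can be illustrated with a Spearman rank correlation. The sketch below uses made-up per-model scores (not values from the paper) and a tie-free stdlib implementation of Spearman's rho:

```python
def ranks(values):
    # Rank each value (1 = smallest); ties are not handled, for simplicity.
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(x, y):
    # Spearman's rho via the classic rank-difference formula
    # (exact when there are no ties).
    n = len(x)
    rx, ry = ranks(x), ranks(y)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical per-model scores for two correlated quality metrics:
gdt_ts = [72.1, 65.4, 80.3, 55.0, 61.2]
lddt = [0.71, 0.64, 0.78, 0.52, 0.60]
print(spearman(gdt_ts, lddt))  # identical rank order -> 1.0
```

When two metrics rank models identically, rho is 1.0 — which is why, per the quote, dropping or adding one of a set of correlated metrics barely changes the overall group ranking.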
“…Over the years, a large number of evaluation measures have been developed to assess different aspects of model quality. A detailed description, classification and review of a number of these metrics has been published recently; they differ, for instance, in whether or not they depend on structure superposition and whether they are global or local measures. Most of these metrics are computed, collated, and analyzed by the Prediction Center (http://predictioncenter.org), making them much more convenient for assessors and others.…”
Section: Methods (mentioning)
confidence: 99%
“…To assess accuracy of crosslinking‐assisted models and their improvement over the corresponding non‐assisted predictions, we employed the GDT_TS measure for monomeric predictions, and the LDDT measure for multimeric ones. Comparative analysis of these measures is provided in a recently published paper …”
Section: Methods (mentioning)
confidence: 99%
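The GDT_TS measure mentioned in the quote averages, over distance cutoffs of 1, 2, 4, and 8 Å, the fraction of Cα atoms that lie within the cutoff of their native positions. The real score maximizes each fraction over many superpositions; the sketch below is a simplified, stdlib-only version that assumes model and native coordinates are already superposed (the coordinates are toy values, not from the paper):

```python
import math

def gdt_ts(model_ca, native_ca):
    # Simplified GDT_TS: average, over the 1/2/4/8 Angstrom cutoffs, of the
    # fraction of CA atoms within the cutoff of the native position.
    # NOTE: real GDT_TS searches over many superpositions; here the two
    # coordinate sets are assumed to be pre-superposed.
    dists = [math.dist(m, n) for m, n in zip(model_ca, native_ca)]
    cutoffs = (1.0, 2.0, 4.0, 8.0)
    fracs = [sum(d <= c for d in dists) / len(dists) for c in cutoffs]
    return 100.0 * sum(fracs) / len(cutoffs)

# Toy example: 4 residues deviating by 0.5, 1.5, 3.0, and 9.0 Angstroms.
model = [(0.5, 0, 0), (1.5, 0, 0), (3.0, 0, 0), (9.0, 0, 0)]
native = [(0.0, 0, 0)] * 4
print(gdt_ts(model, native))  # -> 56.25
```

GDT_TS is a global, superposition-dependent score, which is one reason superposition-free alternatives such as LDDT are preferred for multimeric comparisons, as in the quoted usage.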
“…In general, the VoroMQA‐based grouping is fairly similar to that based on CAD‐score. This is quite remarkable considering that lDDT and CAD‐score are some of the most similarly behaved and highly correlated reference‐based scores …”
Section: Discussion (mentioning)
confidence: 85%