2018
DOI: 10.1093/bioinformatics/bty340

Self-consistency test reveals systematic bias in programs for prediction change of stability upon mutation

Abstract: Motivation: Computational prediction of the effect of mutations on protein stability is used by researchers in many fields. The utility of the prediction methods is affected by their accuracy and bias. Bias, a systematic shift of the predicted change of stability, has been noted as an issue for several methods, but has not been investigated systematically. Presence of the bias may lead to misleading results especially when exploring the effects of combination of different mutations. Results: Here we use a protocol …
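As a rough illustration of the self-consistency idea behind the paper (a minimal sketch, not the authors' code; the function name, data layout, and the exact bias definition below are assumptions), a predictor can be run on each mutation and on its reverse; thermodynamics requires the two predicted ∆∆G values to cancel, so a systematically non-zero average of their sum indicates bias:

    # Minimal sketch (Python), assuming the common convention that positive ∆∆G
    # means destabilizing. ddg_direct[i] / ddg_reverse[i] are a predictor's outputs
    # (kcal/mol) for mutation i and for the corresponding back-mutation.
    from statistics import mean

    def consistency_bias(ddg_direct, ddg_reverse):
        """Average of (direct + reverse)/2; zero for a perfectly antisymmetric predictor."""
        return mean((d + r) / 2 for d, r in zip(ddg_direct, ddg_reverse))

    # Made-up numbers for illustration only:
    ddg_dir = [1.2, 0.8, 2.1]    # predicted for wild type -> mutant
    ddg_rev = [-0.3, 0.1, -1.0]  # predicted for mutant -> wild type
    print(round(consistency_bias(ddg_dir, ddg_rev), 2))  # 0.48: skewed toward destabilization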

Cited by 76 publications (81 citation statements); references 30 publications.
“…The training data sets available so far with experimentally determined protein stability changes are enriched with destabilizing mutations (16,48). Thus, the vast majority of predictors that did not consider the imbalance of the training data set showed better performance for predicting destabilizing than stabilizing mutations (49,50). A recent study constructed a balanced data set with an equal number of destabilizing and stabilizing mutations and used it to assess the performance of 14 methods (51).…”
Section: Introduction (mentioning)
Confidence: 99%
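The balanced benchmark mentioned in this excerpt contains equal numbers of destabilizing and stabilizing mutations. As an illustration only (the helper below and its data layout are hypothetical and may differ from the cited study's construction), one simple way to obtain such symmetry is to add, for every direct mutation with measured ∆∆G, the reverse mutation with the sign of ∆∆G flipped:

    # Hypothetical sketch: pair each direct mutation with its reverse so the set
    # holds equal numbers of destabilizing and stabilizing entries.
    # Records are (wild_type_residue, position, mutant_residue, ddG in kcal/mol).
    def balance_with_reverse(direct_records):
        balanced = []
        for wt, pos, mut, ddg in direct_records:
            balanced.append((wt, pos, mut, ddg))    # direct: wild type -> mutant
            balanced.append((mut, pos, wt, -ddg))   # reverse: mutant -> wild type
        return balanced

    # e.g. a destabilizing A42G mutation (+1.5 kcal/mol) yields a stabilizing G42A (-1.5)
    print(balance_with_reverse([("A", 42, "G", 1.5)]))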
“…In contrast, FoldX predicted that 83.2% of pathogenic variants are destabilizing. As already demonstrated in previous studies, the bias is likely because FoldX was parameterized on an experimental ∆∆G data set dominated by destabilizing mutations (Pucci et al., 2018; Thiltgen and Goldstein, 2012; Usmanova et al., 2018).…”
Section: ∆∆G Landscape of ClinVar Missense Variants (mentioning)
Confidence: 89%
“…A well-performing, "self-consistent" method for predicting ∆∆Gs would not only give accurate ∆∆G predictions for the direct mutations, but also for the reverse mutations. The self-consistency requirement has been largely ignored by previously developed ∆∆G predictors (Pucci et al., 2018; Thiltgen and Goldstein, 2012; Usmanova et al., 2018).…”
Section: Thermodynamics of Direct and Reverse Mutations (mentioning)
Confidence: 99%
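For reference, the requirement invoked in this excerpt follows from the folding free energy being a state function, so the direct and reverse predictions must cancel exactly:

    ∆∆G_reverse = −∆∆G_direct, i.e. ∆∆G_direct + ∆∆G_reverse = 0

Any systematic deviation of this sum from zero across many mutation pairs is the bias probed by the self-consistency test.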
“…However, mutations within each subunit of the RNAP complex, and primarily the rifampin-binding β subunit, have clinical implications and influence rifampin-resistance outcomes in mycobacterial diseases (Comas et al., 2012). The performance of various structural, sequence and NMA-based predictors of protein stability changes upon mutation varies largely in terms of accuracy and bias (Usmanova et al., 2018), but such predictors offer a quick and helpful alternative for understanding the association between mutations and resistance phenotypes (Pires et al., 2016b).…”
Section: Discussion (mentioning)
Confidence: 99%