2006
DOI: 10.1021/ci0504216

Benchmarking of Linear and Nonlinear Approaches for Quantitative Structure−Property Relationship Studies of Metal Complexation with Ionophores

Abstract: A benchmark of several popular methods, Associative Neural Networks (ANN), Support Vector Machines (SVM), k Nearest Neighbors (kNN), Maximal Margin Linear Programming (MMLP), Radial Basis Function Neural Networks (RBFNN), and Multiple Linear Regression (MLR), is reported for quantitative structure-property relationship (QSPR) studies of the stability constants log K1 for 1:1 (M:L) and log β2 for 1:2 complexes of the metal cations Ag+ and Eu3+ with diverse sets of organic molecules in water at 298 K and ionic strength 0.1…
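A minimal sketch of such a benchmark, assuming a descriptor matrix X and measured stability constants y are already available as NumPy arrays. The scikit-learn estimators and settings below are illustrative stand-ins for a few of the listed approaches (MLR, kNN, SVM), not the implementations used in the paper.

```python
# Minimal sketch: comparing linear and nonlinear regressors for a QSPR task
# by cross-validated RMSE. X (molecular descriptors) and y (e.g. log K1
# values) are assumed to exist; models and hyperparameters are illustrative.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor
from sklearn.svm import SVR
from sklearn.model_selection import KFold, cross_val_predict
from sklearn.metrics import mean_squared_error

def benchmark(X, y, n_splits=5, seed=0):
    """Return cross-validated RMSE for several regression approaches."""
    cv = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
    models = {
        "MLR": LinearRegression(),                  # multiple linear regression
        "kNN": KNeighborsRegressor(n_neighbors=5),  # k nearest neighbors
        "SVM": SVR(kernel="rbf", C=10.0),           # support vector regression
    }
    results = {}
    for name, model in models.items():
        y_pred = cross_val_predict(model, X, y, cv=cv)  # out-of-fold predictions
        results[name] = mean_squared_error(y, y_pred) ** 0.5
    return results

if __name__ == "__main__":
    # Random placeholder data; real work would use computed descriptors.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(120, 20))
    y = 2.0 * X[:, 0] + rng.normal(scale=0.3, size=120)
    for name, rmse in benchmark(X, y).items():
        print(f"{name}: RMSE = {rmse:.2f}")
```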

Cited by 66 publications (85 citation statements)
References 35 publications (78 reference statements)
“…). "Quorum Control"-guided QSPR models perform better than the previously reported models [23,39,40,52] for all studied metal ions. On the whole, the RMSE values of the predictions obtained in this work are half those of the earlier reported models [23,39,40,52] (Figure 4), and they are close to the experimental systematic errors (see Table 1).…”
Section: Models Applicability Domain Definitions
confidence: 56%
“…[40,74] In this procedure, the entire dataset is divided into 5 non-overlapping pairs of training and test sets. Predictions are obtained for all n molecules of the initial dataset, since each of them belongs to exactly one of the test sets.…”
Section: P Solovev Et Al
confidence: 99%
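The passage above describes a standard out-of-fold prediction scheme. A minimal sketch of that 5-fold procedure follows, using an illustrative scikit-learn estimator rather than the models from the cited work.

```python
# Minimal sketch of the 5-fold procedure described above: the dataset is split
# into 5 non-overlapping train/test pairs, and every molecule receives exactly
# one out-of-fold prediction. The estimator is an illustrative placeholder.
import numpy as np
from sklearn.model_selection import KFold
from sklearn.linear_model import LinearRegression

def out_of_fold_predictions(X, y, n_splits=5, seed=0):
    cv = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
    y_pred = np.empty_like(y, dtype=float)
    for train_idx, test_idx in cv.split(X):
        model = LinearRegression().fit(X[train_idx], y[train_idx])
        y_pred[test_idx] = model.predict(X[test_idx])  # each index filled once
    return y_pred
```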