2015
DOI: 10.1016/j.aca.2014.12.056

Sum of ranking differences (SRD) to ensemble multivariate calibration model merits for tuning parameter selection and comparing calibration methods

Abstract (excerpt): … and variance ultimately decides the merits used in SRD and hence the tuning parameter values ranked lowest by SRD for automatic selection. The SRD process is also shown to allow simultaneous comparison of different calibration methods for a particular data set in conjunction with tuning parameter selection. Because SRD evaluates consistency across multiple merits, decisions on how to combine and weight merits are avoided. To demonstrate the utility of SRD, a near infrared spectral data set and a quantitative …
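The abstract's core idea, ranking candidate tuning parameter values by how consistently they score across many model merits, can be illustrated with a short sketch. This is an assumption-laden illustration, not the authors' code: the merit values are random placeholders, the row-wise mean is used as the SRD reference ranking, and srd_scores is a hypothetical helper name.

import numpy as np
from scipy.stats import rankdata

def srd_scores(merit_matrix):
    # merit_matrix: rows are model quality measures (merits), columns are
    # candidate tuning parameter values. A lower SRD score means a column
    # ranks the merits more consistently with the consensus (row-average)
    # reference, which is how the "lowest ranked" value is selected.
    reference_ranks = rankdata(merit_matrix.mean(axis=1))
    return np.array([
        np.abs(rankdata(merit_matrix[:, j]) - reference_ranks).sum()
        for j in range(merit_matrix.shape[1])
    ])

# Toy example: 8 merits evaluated at 20 candidate tuning parameter values.
rng = np.random.default_rng(0)
merits = rng.random((8, 20))
best_candidate = int(np.argmin(srd_scores(merits)))  # index of the lowest-SRD column

Because every merit only contributes through its ranking, no explicit weighting or scaling of the individual merits is needed, which matches the abstract's point about avoiding decisions on combining merits.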

Cited by 42 publications (51 citation statements)
References 59 publications
“…We select as our solution the minimum value of the mean normalized SRD and designate the number of LVs as k‡ to differentiate from k† used to denote the solution obtained with a single realization of λ used with M2. A paired Wilcoxon signed rank test between the k = k‡ solution and all others can be performed to find an alternate number of LVs for which the normalized SRD scores are not statistically significantly different. However, we find that this approach can undermine the constraint on the growth of RMSECV imposed by λ, so it is not used in this work.…”
Section: Methods (mentioning)
confidence: 99%
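A minimal sketch of the selection rule quoted above: take the number of latent variables minimizing the mean normalized SRD, then use a paired Wilcoxon signed-rank test to flag alternative LV counts whose SRD scores are not significantly different. The data layout (one normalized SRD score per resampling split), the 0.05 threshold, and all variable names are assumptions; scipy.stats.wilcoxon performs the paired test.

import numpy as np
from scipy.stats import wilcoxon

# srd_by_k[k]: normalized SRD scores for a model with k latent variables,
# one score per resampling split (random placeholders here).
rng = np.random.default_rng(1)
srd_by_k = {k: rng.random(50) for k in range(1, 11)}

# k_best corresponds to the k-double-dagger solution in the quoted statement:
# the number of LVs with the smallest mean normalized SRD.
k_best = min(srd_by_k, key=lambda k: srd_by_k[k].mean())

# Paired Wilcoxon signed-rank test against every other k; LV counts whose
# scores are not significantly different are candidate alternate solutions.
alternates = [k for k in srd_by_k
              if k != k_best and wilcoxon(srd_by_k[k], srd_by_k[k_best]).pvalue > 0.05]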
“…For ambient samples in which we lack true reference values, we additionally compare an aggregate estimate of organic carbon (OC) estimated by the sum of FTIR functional groups with the measurements of OC obtained by a different but widely used analytical technique. A set of models selected from an ensemble of model performance curves generated by our proposed metric and combined by sum of ranking differences (SRD) are validated against an independent randomization test, and we further evaluate their suitability for application to laboratory and ambient samples.…”
Section: Introduction (mentioning)
confidence: 99%
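The randomization test mentioned above is not detailed in the excerpt; the following is a generic permutation-test sketch for a calibration model (shuffle the reference values, refit under cross-validation, and compare errors), offered only as an illustration of the idea. PLSRegression stands in for whichever calibration method is actually used, and all names and settings are assumptions.

import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

def rmse(y, y_hat):
    return float(np.sqrt(np.mean((np.asarray(y) - np.asarray(y_hat)) ** 2)))

def randomization_test(X, y, n_components, n_permutations=200, seed=0):
    # Cross-validated error of the selected model versus errors obtained after
    # randomly permuting the reference values; a small p-value indicates the
    # model performs better than chance.
    rng = np.random.default_rng(seed)
    model = PLSRegression(n_components=n_components)
    observed = rmse(y, cross_val_predict(model, X, y, cv=5).ravel())
    null = [rmse(yp, cross_val_predict(model, X, yp, cv=5).ravel())
            for yp in (rng.permutation(y) for _ in range(n_permutations))]
    p_value = (np.sum(np.array(null) <= observed) + 1) / (n_permutations + 1)
    return observed, p_value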
“…The process known as sum of ranking differences (SRD) is used in this study with numerous model quality measures to automatically select calibration tuning parameter values for λ, η, and the number of eigenvectors. The SRD has been shown to be effective in selecting up to 2 calibration tuning parameters and has been well described. Briefly, a matrix of model quality measures is formed with rows designating respective model quality measures and a column for each tuning parameter (tuning parameter triplet in this study).…”
Section: Sample-wise Calibration (mentioning)
confidence: 99%
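As a hypothetical illustration of the merit matrix described above, the sketch below builds one column per tuning parameter triplet (λ, η, number of eigenvectors) over a small grid and applies an SRD routine like the one sketched earlier. The grids, the placeholder model_merits function, and the number of merits are all invented for the example.

import itertools
import numpy as np
from scipy.stats import rankdata

def srd_scores(M):
    # Same idea as the earlier sketch: sum of rank differences against the
    # row-average reference; one score per column.
    ref = rankdata(M.mean(axis=1))
    return np.array([np.abs(rankdata(M[:, j]) - ref).sum() for j in range(M.shape[1])])

rng = np.random.default_rng(2)

def model_merits(lam, eta, k):
    # Placeholder: would build the calibration model for this triplet and
    # return its vector of quality measures (RMSECV, R^2, vector norm, ...).
    return rng.random(8)

lambdas = np.logspace(-4, 2, 7)   # illustrative grid for lambda
etas = np.linspace(0.0, 1.0, 5)   # illustrative grid for eta
n_eigvecs = range(1, 11)          # illustrative grid for the number of eigenvectors

triplets = list(itertools.product(lambdas, etas, n_eigvecs))
# Columns of the merit matrix correspond to triplets, rows to merits.
merit_matrix = np.column_stack([model_merits(*t) for t in triplets])
best_lambda, best_eta, best_k = triplets[int(np.argmin(srd_scores(merit_matrix)))]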
“…However, harmonious models are not necessarily parsimonious [1]. The scope of the methodology has recently been extended with the idea of sum of ranking differences (SRD) for partial least squares and ridge regression models [2]. Principal-component analysis (PCA) has been applied by Geladi [3,4] and Todeschini et al. [5] to find the best and worst regression and classification models, respectively. PCAs were completed on a matrix of regression vectors and dominant patterns (grouping, outliers) could be detected among the models.…”
Section: Introduction (mentioning)
confidence: 99%
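A minimal sketch, using synthetic data, of the PCA-on-regression-vectors idea attributed to Geladi and Todeschini et al. in the statement above: stack the regression vectors of candidate models as rows and inspect the leading score space for grouping and outliers. The matrix shape and all names are illustrative assumptions.

import numpy as np
from sklearn.decomposition import PCA

# B: one row per candidate model, columns are its regression coefficients
# (random placeholders standing in for real regression vectors).
rng = np.random.default_rng(3)
B = rng.normal(size=(30, 200))

scores = PCA(n_components=2).fit_transform(B)
# Models whose scores cluster together behave similarly; points far from the
# bulk of the score cloud flag potential outlier (worst) models.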