2002
DOI: 10.1255/nirn.689
Assessing Calibrations: SEP, RPD, RER and R2

Cited by 286 publications (218 citation statements); references 1 publication.
“…An RPD value greater than three was considered adequate for analytical purposes in most of NIR applications for agricultural products (Williams, 2001;Fearn, 2002).…”
Section: Wool Samples
confidence: 99%
“…Similarly, except for the raw-data and baseline-corrected models, the reflectance-spectra models gave RPD values above 3.0 and SEP and RMSEP values below 1.0. Since RPD values in the range of 3.1 to 4.9 are good for screening purposes [16][17] (Fearn, 2002; Williams, 2001), models with RPD above 3.0 were considered good models. The baseline- and MSC-corrected reflectance spectra with a 1st-derivative model gave the highest RPD (4.46) and R2 (0.95), and its SEP (0.670) and RMSEP (0.782) were also lower than those of the other models with RPD above 3.0.…”
Section: Results
confidence: 99%
“…RPD values of 1 or less indicate that the model predicts no better than chance. RPD values in the range of 3.1 to 4.9 are good for screening purposes, and values in the range of 5 to 6.4 are good for quality-control purposes [16][17] (Fearn, 2002; Williams, 2001). Bias values, which correspond to the average difference between the standard reference and predicted values, were also considered in model selection.…”
Section: Discussion
confidence: 99%
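The statistics discussed in these excerpts follow standard definitions: bias is the mean prediction error, SEP is the bias-corrected standard deviation of the prediction errors, and RPD is the standard deviation of the reference values divided by SEP. A minimal sketch of those definitions (the function name and example data are illustrative, not taken from the cited works):

```python
import numpy as np

def calibration_stats(reference, predicted):
    """Compute bias, SEP and RPD for a validation set.

    bias = mean prediction error
    SEP  = bias-corrected standard deviation of the prediction errors
    RPD  = standard deviation of the reference values divided by SEP
    """
    reference = np.asarray(reference, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    errors = predicted - reference
    bias = errors.mean()
    n = errors.size
    sep = np.sqrt(((errors - bias) ** 2).sum() / (n - 1))
    rpd = reference.std(ddof=1) / sep
    return bias, sep, rpd
```

Under these definitions an RPD near 1 means the model predicts no better than always guessing the reference mean, while the thresholds quoted above (around 3 for screening, 5 to 6.4 for quality control) correspond to progressively smaller prediction errors relative to the spread of the reference data.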
“…Such analysis requires a simple yet sturdy model that is capable of maintaining its predictive capability over a prolonged period, coupled with instrumentation that is similarly robust in terms of operational lifetime. The capability of the calibration model to successfully predict unknown samples (i.e., samples not present in the calibration set used to construct the model) must also be assessed; this is done by applying the model to a small number of samples for which the model's target property is already defined [3,33,[50][51][52][53][54][55]. Once the model's results are comparable with the reference values, the model can be considered accurate and useful for determining that target property in future analyses of unknown samples.…”
Section: Chemometrics
confidence: 99%
“…An assessment of the model's accuracy is essential to avoid overfitting; consequently, different validation procedures should be applied, as a calibration model without validation is nonsense. In feasibility studies, cross-validation is a practical method to demonstrate that the instrumental method can predict something; however, the predictive ability of the method needs to be demonstrated using an independent validation set [3,33,[50][51][52][53][54][55][56][57][58].…”
Section: Chemometrics
confidence: 99%
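The distinction drawn here between cross-validation and a truly independent validation set can be sketched as follows. The synthetic data and the ordinary least-squares fit (standing in for PLS or another multivariate calibration method) are illustrative assumptions, not the procedure of any cited study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic calibration data (illustrative): 60 samples, 5 predictor variables.
X = rng.normal(size=(60, 5))
y = X @ np.array([1.0, -0.5, 0.3, 0.0, 0.2]) + rng.normal(scale=0.1, size=60)

# Set aside an independent validation set *before* any model building;
# these samples play no part in fitting or model selection.
X_cal, y_cal = X[:40], y[:40]
X_val, y_val = X[40:], y[40:]

# Fit a least-squares calibration on the calibration set only
# (a stand-in here for PLS or another multivariate method).
coefs, *_ = np.linalg.lstsq(X_cal, y_cal, rcond=None)

# Judge predictive ability on the untouched validation set.
rmsep = np.sqrt(np.mean((X_val @ coefs - y_val) ** 2))
```

Cross-validation within the 40 calibration samples is useful for choosing model complexity in a feasibility study, but only the RMSEP computed on the 20 held-out samples estimates performance on genuinely unknown samples, which is the point the excerpt makes.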