2017
DOI: 10.1016/j.isprsjprs.2017.09.012

The Naïve Overfitting Index Selection (NOIS): A new method to optimize model complexity for hyperspectral data

Cited by 21 publications (34 citation statements)
References 50 publications
“…However, unlike traditional empirical models, most machine learning models use iterative learning to reduce overall error and maximize model fit [191]. Depending on the parameterization of the model and the amount of training data available, this approach may lead to over-fitting of the data, especially in models with numerous input variables subject to collinearity such as adjacent hyperspectral bands [192]. To avoid overfitting, machine learning methods require the provision of separate training and testing datasets that contain representative samples of the parameters of interest.…”
Section: Machine Learning Models (mentioning)
confidence: 99%
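As a rough illustration of the point made in this statement (not code from either cited paper), the sketch below builds a synthetic, highly collinear "band" matrix, fits a deliberately flexible model, and compares training RMSE against an independent hold-out set. The data, the PLS model choice, and all variable names are assumptions for the example.

```python
# Minimal sketch, assuming synthetic data: an independent hold-out set used to
# detect overfitting when predictors are highly collinear, as with adjacent bands.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_bands = 120, 200                        # few samples, many collinear "bands"
latent = rng.normal(size=(n_samples, 5))             # a handful of underlying factors
X = latent @ rng.normal(size=(5, n_bands)) + 0.01 * rng.normal(size=(n_samples, n_bands))
y = latent[:, 0] + 0.1 * rng.normal(size=n_samples)  # response driven by one factor

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model = PLSRegression(n_components=10).fit(X_train, y_train)

rmse_train = np.sqrt(mean_squared_error(y_train, model.predict(X_train)))
rmse_test = np.sqrt(mean_squared_error(y_test, model.predict(X_test)))
print(f"train RMSE = {rmse_train:.3f}, test RMSE = {rmse_test:.3f}")
# A test RMSE much larger than the training RMSE signals overfitting.
```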
“…Two tuning methods were applied to select model complexity: traditional cross-validation and a novel method called Naïve Overfitting Index Selection (NOIS) (Rocha et al., 2017). When tuning a model with cross-validation (we used 10-fold cross-validation), a model is selected with a complexity that minimises the Root Mean Squared Error (RMSE) of the predictions from the validation subsets (Hastie et al., 2009).…”
Section: Modelling and Performance Assessment (mentioning)
confidence: 99%
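The cross-validation tuning described here can be sketched as follows: scan over candidate complexities and keep the one with the lowest 10-fold cross-validated RMSE. Partial least squares regression and the helper name `tune_by_cv` are assumptions for illustration, not the authors' code.

```python
# Hedged sketch of tuning model complexity by 10-fold cross-validation,
# selecting the complexity that minimises the validation RMSE.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import KFold, cross_val_score

def tune_by_cv(X, y, max_components=20, n_folds=10, seed=0):
    """Return (best number of PLS components, mean CV RMSE per complexity)."""
    cv = KFold(n_splits=n_folds, shuffle=True, random_state=seed)
    rmse = []
    for k in range(1, max_components + 1):
        scores = cross_val_score(PLSRegression(n_components=k), X, y,
                                 cv=cv, scoring="neg_root_mean_squared_error")
        rmse.append(-scores.mean())            # scorer returns negated RMSE
    best_k = int(np.argmin(rmse)) + 1          # complexity minimising validation RMSE
    return best_k, np.asarray(rmse)
```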
“…This procedure was randomly repeated ten times, resulting in a combination of 100 subsets of training and validation sets from the original data (James et al., 2013). The NOIS method selects model complexity considering an a priori level of overfitting tolerated by the user (we used 5%; see Rocha et al., 2017 for details). The complexity selected for models tuned with cross-validation varied according to the landscape.…”
Section: Modelling and Performance Assessment (mentioning)
confidence: 99%
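A rough sketch of the repeated-split scheme quoted above (10 folds × 10 repeats = 100 training/validation subsets) combined with an a priori overfitting tolerance is given below. The tolerance rule here is a simplified stand-in for the idea of bounding overfitting at a user-chosen level (5%); it is not the published NOIS algorithm, which is defined in Rocha et al. (2017). The helper name `tune_by_tolerance` and the PLS model are hypothetical.

```python
# Illustrative sketch only: RepeatedKFold reproduces the "10 folds x 10 repeats = 100
# training/validation subsets" scheme quoted above, and the tolerance rule is a
# simplified stand-in for selecting complexity against an a priori overfitting limit.
# This is NOT the published NOIS algorithm (see Rocha et al., 2017).
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import RepeatedKFold

def tune_by_tolerance(X, y, max_components=20, tolerance=0.05, seed=0):
    """Pick the largest complexity whose mean relative train/validation RMSE gap <= tolerance."""
    X, y = np.asarray(X), np.asarray(y)
    cv = RepeatedKFold(n_splits=10, n_repeats=10, random_state=seed)  # 100 subsets
    chosen = 1
    for k in range(1, max_components + 1):
        gaps = []
        for train_idx, val_idx in cv.split(X):
            model = PLSRegression(n_components=k).fit(X[train_idx], y[train_idx])
            rmse_tr = np.sqrt(mean_squared_error(y[train_idx], model.predict(X[train_idx])))
            rmse_va = np.sqrt(mean_squared_error(y[val_idx], model.predict(X[val_idx])))
            gaps.append((rmse_va - rmse_tr) / rmse_tr)   # relative overfitting gap
        if np.mean(gaps) <= tolerance:
            chosen = k            # still within the a priori overfitting tolerance
        else:
            break                 # stop once the average gap exceeds the tolerance
    return chosen
```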