2022
DOI: 10.1016/j.ophoto.2022.100018

Spatially autocorrelated training and validation samples inflate performance assessment of convolutional neural networks

Citation Types: Supporting (0), Mentioning (15), Contrasting (0)

Cited by 40 publications (24 citation statements)
References 37 publications
“…Note that the transferability of the models was successfully tested with entirely independent image acquisitions, which may even include acquisition and site conditions that were not explicitly included in model training (cf. Kattenborn et al., 2022). Thus, the model presented here could be applied to quantify P. afra cover in restoration sites across the Albany Subtropical Thicket biome in South Africa.…”
Section: Discussion (mentioning)
Confidence: 99%

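The excerpt describes testing transferability on entirely independent image acquisitions. As a minimal sketch of that evaluation logic (not the cited authors' code), the snippet below uses scikit-learn's LeaveOneGroupOut to hold out one acquisition at a time; the feature array, labels, and acquisition IDs are hypothetical placeholders.

```python
# Minimal sketch of an acquisition-wise holdout (hypothetical data;
# not the cited authors' code). Each test fold holds out one entire
# image acquisition, so evaluation reflects transfer to acquisition
# and site conditions never seen during training.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut

rng = np.random.default_rng(0)
X = rng.random((120, 16))                   # per-sample features (placeholder)
y = rng.integers(0, 2, size=120)            # per-sample labels (placeholder)
acquisitions = np.repeat(np.arange(6), 20)  # 6 independent acquisitions

for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=acquisitions):
    held_out = acquisitions[test_idx[0]]
    # Train on five acquisitions, evaluate on the held-out one.
    print(f"acquisition {held_out}: {len(train_idx)} train / {len(test_idx)} test")
```
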
“…To avoid optimistic model evaluation by spatially autocorrelated training and validation data (Ploton et al., 2020; Kattenborn et al., 2022), we randomly split all available data on a plot basis, where a portion of plots was used for model training (n = 24) and the remainder was used for model testing (n = 8). The training data was again split into training (7/8) and validation (1/8) subsets, where the validation data was used to monitor the training process.…”
Section: Methods (mentioning)
Confidence: 99%

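The plot-wise split in this excerpt can be made concrete with a short sketch. The snippet below is a hypothetical illustration, not the authors' code: the plot identifiers and the assign helper are invented, while the 24/8 plot partition and the 7/8 vs. 1/8 training/validation proportions follow the excerpt.

```python
# Sketch of a plot-wise split (hypothetical plot IDs and helper;
# the 24/8 and 7/8 vs. 1/8 proportions follow the excerpt above).
import random

random.seed(42)
plots = [f"plot_{i:02d}" for i in range(32)]
random.shuffle(plots)

test_plots = set(plots[:8])            # 8 plots held out for testing
remaining = plots[8:]                  # 24 plots for model training
n_val = len(remaining) // 8            # 1/8 of the training plots ...
val_plots = set(remaining[:n_val])     # ... monitor the training process
train_plots = set(remaining[n_val:])   # 7/8 are used to fit the model

def assign(plot_id):
    """Route every sample by its plot, so training and test samples
    never come from the same plot (limiting spatial autocorrelation)."""
    if plot_id in test_plots:
        return "test"
    return "val" if plot_id in val_plots else "train"
```
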
“…1). Further work is necessary to explore how these dependencies influence model prediction and interpretability (Kattenborn et al., 2022).…”
Section: Caveats and Future Research (mentioning)
Confidence: 99%

“…If training and test crowns are close to one another, spatial autocorrelation effects are likely to inflate the reported performance (52). To avoid this, individual tiles (rather than individual crowns) were assigned to training and test sets, ensuring spatial separation.…”
Section: Draft (mentioning)
Confidence: 99%

“…To be included in the training and test sets, tiles were required to have a minimum crown-polygon area coverage of 40%. Including overly sparse tiles was likely to lead to poor algorithm sensitivity, while being too strict with coverage would have limited the amount of training and testing data available. If training and test crowns are close to one another, spatial autocorrelation effects are likely to inflate the reported performance (53). To avoid this, individual tiles (rather than individual crowns) were assigned to training and test sets, ensuring spatial separation.…”
(mentioning)
Confidence: 99%
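
The tile-wise protocol in this excerpt translates into a compact routine. The sketch below is an illustration under stated assumptions rather than the cited implementation: the shapely-based geometry handling, the tile and crown data structures, and the 20% test fraction are invented, while the 40% minimum crown coverage comes from the text.

```python
# Sketch of tile-wise train/test assignment (geometry handling via
# shapely is an assumption; only the 40% threshold comes from the text).
import random
from shapely.geometry import Polygon
from shapely.ops import unary_union

def crown_coverage(tile: Polygon, crowns: list[Polygon]) -> float:
    """Fraction of the tile's area covered by the union of crown polygons."""
    if not crowns:
        return 0.0
    return tile.intersection(unary_union(crowns)).area / tile.area

def split_tiles(tiles, crowns_by_tile, min_coverage=0.4, test_frac=0.2, seed=0):
    """tiles: dict tile_id -> Polygon; crowns_by_tile: dict tile_id -> [Polygon]."""
    # Discard overly sparse tiles: below 40% crown coverage they are
    # likely to hurt algorithm sensitivity.
    usable = [tid for tid, tile in tiles.items()
              if crown_coverage(tile, crowns_by_tile.get(tid, [])) >= min_coverage]
    # Assign whole tiles (never individual crowns) to train or test,
    # so test crowns stay spatially separated from training crowns.
    random.Random(seed).shuffle(usable)
    n_test = int(len(usable) * test_frac)
    return usable[n_test:], usable[:n_test]  # (train_tile_ids, test_tile_ids)
```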