2023
DOI: 10.3390/rs15205013

Hyperspectral Prediction Model of Nitrogen Content in Citrus Leaves Based on the CEEMDAN–SR Algorithm

Changlun Gao, Ting Tang, Weibin Wu et al.

Abstract: Nitrogen is one of the essential elements in citrus leaves (CL), and many studies have used hyperspectral technology to determine the nutrient content of CL. To address the key problem that conventional spectral denoising algorithms directly discard high-frequency signals, causing effective signals to be lost, this study proposes a denoising preprocessing algorithm, complete ensemble empirical mode decomposition with adaptive noise joint sparse representation (CEEMDAN–SR), fo…
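The core idea stated in the abstract (decompose the spectrum with CEEMDAN, then denoise the high-frequency components with a sparse-representation step instead of discarding them) can be sketched roughly as follows. This is an illustrative sketch only: it assumes the PyEMD package for CEEMDAN and uses scikit-learn dictionary learning as a stand-in for the paper's sparse-representation step; the function name `ceemdan_sr_denoise`, the number of high-frequency IMFs treated, the dictionary size, and the patch length are placeholders, not the authors' settings.

```python
# Hypothetical sketch of the CEEMDAN-SR idea: decompose a spectrum with
# CEEMDAN, then denoise the high-frequency IMFs with a sparse-representation
# step instead of discarding them. Illustrative only.
import numpy as np
from PyEMD import CEEMDAN                      # assumed: PyEMD (EMD-signal) package
from sklearn.decomposition import MiniBatchDictionaryLearning

def ceemdan_sr_denoise(spectrum, n_high_freq=3, patch_len=32):
    """Denoise one reflectance spectrum (1-D array). Parameters are placeholders."""
    imfs = CEEMDAN().ceemdan(spectrum)         # IMFs ordered high -> low frequency
    cleaned = []
    for i, imf in enumerate(imfs):
        if i < n_high_freq:
            # Sparse representation: learn a small dictionary on overlapping
            # patches of the noisy IMF and reconstruct it from sparse codes.
            patches = np.lib.stride_tricks.sliding_window_view(imf, patch_len)
            dico = MiniBatchDictionaryLearning(n_components=16,
                                               transform_algorithm="omp",
                                               transform_n_nonzero_coefs=3)
            codes = dico.fit_transform(patches)
            recon_patches = codes @ dico.components_
            # Average the overlapping patches back into a 1-D signal.
            recon = np.zeros_like(imf)
            counts = np.zeros_like(imf)
            for j, p in enumerate(recon_patches):
                recon[j:j + patch_len] += p
                counts[j:j + patch_len] += 1
            cleaned.append(recon / np.maximum(counts, 1))
        else:
            cleaned.append(imf)                # low-frequency IMFs kept as-is
    return np.sum(cleaned, axis=0)
```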

Cited by 3 publications (2 citation statements)
References 53 publications
“…After multiple training iterations, the model's performance gradually improves. Before modeling, the spectral data were preprocessed using the Savitzky-Golay (SG) [34][35][36] and standard normal variate (SNV) [37][38][39] algorithms to remove noise and background effects from the spectral data. Through leave-one-out cross-validation and k-fold cross-validation with 2-fold, 5-fold, and 10-fold configurations, we identified the optimal cross-validation approach for the current dataset.…”
Section: Convolutional Autoencoder
confidence: 99%
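As a concrete illustration of the preprocessing and cross-validation comparison described in this quote, the sketch below applies Savitzky-Golay smoothing and SNV scatter correction, then compares leave-one-out with 2-, 5-, and 10-fold cross-validation. The PLS regressor, SG window and polynomial order, and the RMSE scoring are assumptions made for the sketch, not the cited paper's configuration.

```python
# Hypothetical preprocessing / cross-validation pipeline: SG smoothing,
# SNV correction, then comparison of LOO and k-fold CV. Settings are
# placeholders, not the cited paper's.
import numpy as np
from scipy.signal import savgol_filter
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut, KFold, cross_val_score

def preprocess(X):
    """X: (n_samples, n_bands) raw reflectance spectra."""
    X_sg = savgol_filter(X, window_length=11, polyorder=2, axis=1)  # SG smoothing
    # SNV: centre and scale each spectrum by its own mean and standard deviation.
    return (X_sg - X_sg.mean(axis=1, keepdims=True)) / X_sg.std(axis=1, keepdims=True)

def compare_cv(X, y):
    Xp = preprocess(X)
    model = PLSRegression(n_components=8)      # placeholder regressor
    schemes = {"LOO": LeaveOneOut(),
               "2-fold": KFold(2, shuffle=True, random_state=0),
               "5-fold": KFold(5, shuffle=True, random_state=0),
               "10-fold": KFold(10, shuffle=True, random_state=0)}
    for name, cv in schemes.items():
        scores = cross_val_score(model, Xp, y, cv=cv,
                                 scoring="neg_root_mean_squared_error")
        print(f"{name}: RMSE = {-scores.mean():.4f}")
```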
“…Occlusion and large shooting distances often produce large variations in litchi scale within an image. Large targets in the foreground occupy most of the frame, while individual small targets, especially the head region, account for less than 1% of the image size, so their feature information is easily lost [43]. In [40], an additional transformer-module detection head was added to a detection model, achieving good results for high-speed, low-altitude flights on datasets with densely packed objects and drastic scale changes.…”
Section: Head Prediction Branch Improvements
confidence: 99%
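The idea referenced here, placing a transformer module in front of a prediction head so that small, densely packed targets receive global context, can be illustrated with a minimal PyTorch sketch. The module name, channel width, and output size below are hypothetical and do not reproduce the cited model.

```python
# Minimal, illustrative sketch of a transformer encoder inserted before a
# detection head. Channel sizes and layer counts are placeholders.
import torch
import torch.nn as nn

class TransformerHead(nn.Module):
    def __init__(self, channels=256, num_outputs=255, nhead=8, num_layers=1):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=channels, nhead=nhead,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.pred = nn.Conv2d(channels, num_outputs, kernel_size=1)

    def forward(self, x):                       # x: (B, C, H, W) feature map
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)   # (B, H*W, C) token sequence
        tokens = self.encoder(tokens)           # global context across locations
        x = tokens.transpose(1, 2).reshape(b, c, h, w)
        return self.pred(x)                     # per-cell detection outputs

# Usage example:
# feats = torch.randn(1, 256, 40, 40)
# print(TransformerHead()(feats).shape)        # torch.Size([1, 255, 40, 40])
```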