2021
DOI: 10.1016/j.saa.2021.119657
Prediction of tea theanine content using near-infrared spectroscopy and flower pollination algorithm

Cited by 21 publications (5 citation statements). References 45 publications.
“…The methods for classification modeling are the Random Forest Classifier (RF), the K-Nearest Neighbor Classifier (KNN), the Linear Discriminant Classifier (LDC), Support Vector Machines (SVM), Extreme Learning Machines (ELM), and the Naive Bayes Classifier (NB) [69–73]. Methods for regression modeling are Partial Least Squares Regression (PLSR), Multiple Linear Regression (MLR), Support Vector Regression (SVR), Extreme Learning Machine Regression (ELMR), Gaussian Process Regression (GPR), Stochastic Gradient Boosting (SGB), Kernel-based Extreme Learning Machines (KELM), and Random Forest Regression (RFR) [74–78].…”
Section: Hyperspectral Information Analysis Methods For Tea Fresh Lea…
confidence: 99%
“…In PLSR, the high-dimensional data are projected onto a small number of latent variables (LVs) to find the optimal regression coefficients, so that a linear combination of the input variables maximizes the covariance between the LVs and the output. Supposing the number of LVs is h (h ≤ p), the regression coefficients are calculated as follows [30]:…”
Section: Methods
confidence: 99%
“…Subsequently, the hyperspectral characteristic parameters were sorted by the absolute value of the regression coefficients obtained from the PLSR model. At each step, the HCP with the lowest value was eliminated, and the best combination of independent variables was the one with the largest training coefficient of determination during the backward-elimination process [44].…”
Section: Variable Screening
confidence: 99%