2020
DOI: 10.1016/j.measurement.2020.107777
Mutation grey wolf elite PSO balanced XGBoost for radar emitter individual identification based on measured signals

Cited by 17 publications (11 citation statements)
References 40 publications
“…Moreover, in contrast with other ensemble learning methods, which use only first-order derivative information, the first- and second-order derivatives were considered together, and a second-order Taylor expansion of the cost function was carried out in the XGBoost model. It is worth noting that the XGBoost model has been widely used in various fields owing to the strength of its underlying principle [27, 28]. The XGBoost library in Python was selected for the experiments, and the key parameters were set to learning_rate = 0.01, n_estimators = 1000, max_depth = 4, min_child_weight = 1, gamma = 0, and subsample = 0.8.…”
Section: Methods
Citation type: mentioning, confidence: 99%
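For reference, the "second-order Taylor expansion of the cost function" mentioned in the quote is the standard XGBoost objective at boosting step t (notation from the XGBoost paper, not from the cited study):

\[
\mathcal{L}^{(t)} \approx \sum_{i=1}^{n} \left[ l\!\left(y_i, \hat{y}_i^{(t-1)}\right) + g_i\, f_t(x_i) + \tfrac{1}{2}\, h_i\, f_t^2(x_i) \right] + \Omega(f_t),
\qquad g_i = \partial_{\hat{y}_i^{(t-1)}} l, \quad h_i = \partial^2_{\hat{y}_i^{(t-1)}} l,
\]

where $f_t$ is the tree added at step $t$ and $\Omega$ is its regularization term. The quoted parameter settings map directly onto the scikit-learn interface of the Python XGBoost library; a minimal sketch, assuming a placeholder dataset (the cited paper's measured radar emitter signals are not reproduced here):

```python
# Minimal sketch of the quoted XGBoost configuration (scikit-learn API).
# The training data are hypothetical placeholders, not the paper's
# measured radar emitter signals.
import numpy as np
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 16))    # placeholder feature matrix
y_train = rng.integers(0, 2, size=200)  # placeholder binary labels

model = XGBClassifier(
    learning_rate=0.01,   # shrinkage applied to each boosting step
    n_estimators=1000,    # number of boosted trees
    max_depth=4,          # maximum depth of each tree
    min_child_weight=1,   # minimum sum of instance (hessian) weight in a child
    gamma=0,              # minimum loss reduction required to split a leaf
    subsample=0.8,        # fraction of rows sampled per tree
)
model.fit(X_train, y_train)
```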
“…XGBoost is a more complex model than random forest (RF) and supports column subsampling, which lets it outperform the RF model on training loss, although it is likewise subject to overfitting. It is worth noting that the XGBoost model has been widely used in various fields owing to the strength of its underlying principle [39, 40]. In our study, the PseAAC algorithm was used to extract the primary structural, physical, and chemical features of the samples; the resulting benchmark feature vectors were then input into the XGBoost model for training, and the prediction model was obtained through 5-fold cross-validation.…”
Section: Extreme Gradient Boosting
Citation type: mentioning, confidence: 99%
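A minimal sketch of the pipeline this quote describes, assuming the PseAAC features have already been extracted into a feature matrix (the random arrays below are placeholders for the study's benchmark feature vectors and labels, which are not available here):

```python
# 5-fold cross-validated XGBoost on precomputed feature vectors.
# X and y are placeholder arrays standing in for PseAAC-derived features.
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 50))    # stand-in for PseAAC feature vectors
y = rng.integers(0, 2, size=300)  # stand-in for binary class labels

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(XGBClassifier(), X, y, cv=cv, scoring="accuracy")
print(f"5-fold CV accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```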
“…As an improved algorithm based on GBTD, XGB uses all of the data in each iteration, which is similar to RF [53, 54]. Therefore, XGB reduces the complexity of the model and makes the learned model simpler [35, 54-58]. In this study, four hyperparameters of the GBTD and XGB models (i.e., n_estimators, max_depth, learning_rate (lr), and gamma, the minimum loss reduction required to make a further partition on a leaf node of the tree) were empirically tuned based on RMSE.…”
Section: Machine-learning Algorithm
Citation type: mentioning, confidence: 99%
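One concrete way to realize the RMSE-based tuning this quote describes is an exhaustive grid search over the four named hyperparameters; the grid values and the regression data below are illustrative assumptions, not the cited study's actual search space:

```python
# Illustrative grid search over the four hyperparameters named in the quote,
# scored by RMSE (scikit-learn reports it negated). Grid values are assumed.
import numpy as np
from sklearn.model_selection import GridSearchCV
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))  # placeholder predictors
y = rng.normal(size=200)        # placeholder regression target

param_grid = {
    "n_estimators": [100, 500],
    "max_depth": [3, 6],
    "learning_rate": [0.01, 0.1],
    "gamma": [0, 1],  # min loss reduction to further partition a leaf node
}
search = GridSearchCV(
    XGBRegressor(),
    param_grid,
    scoring="neg_root_mean_squared_error",
    cv=5,
)
search.fit(X, y)
print("best RMSE:", -search.best_score_)
print("best params:", search.best_params_)
```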