2022
DOI: 10.23940/ijpe.22.08.p2.545551
RNN LSTM-based Deep Hybrid Learning Model for Text Classification using Machine Learning Variant XGBoost

Cited by 3 publications
(1 citation statement)
References 0 publications
“…The second approach is to improve the structure of the model, for example by combining several "good enough" models into a nested ensemble to achieve strong predictive power, or by replacing simple neuron units with more complex LSTM cells, e.g., using LSTM models to exploit the advantages of grammatical analysis [3][4]. The third approach is to improve performance by adjusting parameters, such as initializing an improved model [5] so that the early gradients are largely sparse, or applying principles of linear algebra [6] to set the learning rate, batch size, regularization coefficient, and dropout coefficient.…”
Section: Introduction
confidence: 99%
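The excerpt above contrasts simple neuron units with LSTM cells. As a minimal illustration of the gating mechanism an LSTM cell adds over a plain recurrent unit, here is a toy scalar sketch in pure Python; the weights and inputs are invented for demonstration and do not correspond to the cited paper's model:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_cell_step(x, h_prev, c_prev, weights):
    # Toy one-dimensional LSTM cell: all weights are scalars, and the
    # "concatenation" of input and previous hidden state is a simple sum.
    wf, wi, wo, wg = weights
    z = x + h_prev
    f = sigmoid(wf * z)        # forget gate: how much old cell state to keep
    i = sigmoid(wi * z)        # input gate: how much new candidate to admit
    o = sigmoid(wo * z)        # output gate: how much cell state to expose
    g = math.tanh(wg * z)      # candidate cell state
    c = f * c_prev + i * g     # new cell state
    h = o * math.tanh(c)       # new hidden state
    return h, c

# Run a short input sequence through the cell (hypothetical values).
h, c = 0.0, 0.0
for x in [0.5, -0.2, 0.9]:
    h, c = lstm_cell_step(x, h, c, weights=(1.0, 1.0, 1.0, 1.0))
```

Because the output gate and `tanh` bound the hidden state, `h` always stays in (-1, 1), which is part of what makes stacked LSTM layers easier to train than unbounded simple recurrent units.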