2022
DOI: 10.1109/access.2022.3228600

CNO-LSTM: A Chaotic Neural Oscillatory Long Short-Term Memory Model for Text Classification

Abstract: Long Short-Term Memory (LSTM) networks are distinctive in processing data through memory cells that retain long-term information, yet Natural Language Processing (NLP) tasks demand intensive time and computational power because complex architectures, such as large Transformer language models, must be pre-trained on billions of data samples to perform different NLP tasks. In this paper, a dynamic chaotic model is proposed with the objective of transforming neuron states in the network with neural dynamic characteristics by res…
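The abstract is truncated above, so the paper's exact chaotic transformation is not visible here. As a rough illustration of the general idea it names (a chaotic oscillator modulating LSTM neuron states), the following minimal NumPy sketch perturbs the candidate cell state with a logistic-map oscillator. The class name, the choice of logistic map, and the 0.1 modulation strength are assumptions for illustration, not the CNO-LSTM equations from the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class ChaoticLSTMCell:
    """Illustrative LSTM cell whose candidate state is modulated by a
    chaotic logistic-map oscillator. A sketch of the general chaotic
    neural oscillator idea, NOT the paper's CNO-LSTM formulation."""

    def __init__(self, input_size, hidden_size, r=3.9, seed=0):
        rng = np.random.default_rng(seed)
        # Fused weights for the input, forget, candidate, and output gates.
        self.W = rng.normal(0.0, 0.1, (4 * hidden_size, input_size + hidden_size))
        self.b = np.zeros(4 * hidden_size)
        self.r = r  # logistic-map parameter; chaotic regime near r = 3.57-4.0
        self.z = rng.uniform(0.1, 0.9, hidden_size)  # per-neuron oscillator state

    def step(self, x, h, c):
        # Advance the chaotic oscillator: z <- r * z * (1 - z).
        self.z = self.r * self.z * (1.0 - self.z)
        gates = self.W @ np.concatenate([x, h]) + self.b
        i, f, g, o = np.split(gates, 4)
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        # Assumed modulation: the chaotic signal perturbs the candidate state.
        g = np.tanh(g) * (1.0 + 0.1 * (self.z - 0.5))
        c = f * c + i * g
        h = o * np.tanh(c)
        return h, c

# Usage: run the cell over a toy sequence of 5 embedded tokens.
cell = ChaoticLSTMCell(input_size=8, hidden_size=16)
h, c = np.zeros(16), np.zeros(16)
for x in np.random.default_rng(1).normal(size=(5, 8)):
    h, c = cell.step(x, h, c)
```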

Cited by 7 publications (1 citation statement). References 49 publications (50 reference statements).
“…Such models could even be outperformed by non-deep models like HIVE-COTE [25] and ROCKET [26]. LSTM-based models are capable of capturing both long- and short-term dependencies and have achieved state-of-the-art results in natural language processing (NLP) tasks [19], [20]. Transformer [27] and its variants, meanwhile, utilize multi-head attention mechanisms to learn dependencies in sequences, achieving impressive results in MTS forecasting through unsupervised pre-training.…”
Section: Introduction (mentioning)
confidence: 99%