2018
DOI: 10.1109/access.2018.2882878

Combining Convolution Neural Network and Bidirectional Gated Recurrent Unit for Sentence Semantic Classification

Cited by 49 publications (32 citation statements). References 22 publications.
“…Although this study combined BiGRU and CNN, it could not offer promising results. In contrast to BiGRU+CNN, our model uses only a BiLSTM network and provided 1.2 percentage points higher accuracy than [36]. Finally, Usama et al. [37] presented various models incorporating multilevel and multitype feature fusion along with combinations of LSTM, GRU, and CNN models.…”
Section: A. MR Results (mentioning)
confidence: 99%
“…Conversely, our approach used a single-layered BiLSTM network and achieved 0.5 and 0.6 percentage points greater accuracy than ALE-LSTM and WALE-LSTM, respectively. Also, Zhang et al. [36] suggested a novel architecture leveraging a series combination of a bidirectional gated recurrent unit (BiGRU) and a CNN (BiGRU+CNN) and achieved an accuracy of 78.30% on the MR dataset. Although this study combined BiGRU and CNN, it could not offer promising results.…”
Section: A. MR Results (mentioning)
confidence: 99%
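To make the series BiGRU+CNN arrangement quoted above concrete, here is a minimal PyTorch sketch of such a classifier. All layer sizes, the single convolution/pooling stage, and the class count are illustrative assumptions, not the exact configuration reported in [36]:

```python
import torch
import torch.nn as nn

class BiGRUCNN(nn.Module):
    """Illustrative series BiGRU -> CNN sentence classifier."""

    def __init__(self, vocab_size, embed_dim=300, hidden_dim=128,
                 num_filters=100, kernel_size=3, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Bidirectional GRU reads the sentence in both directions.
        self.bigru = nn.GRU(embed_dim, hidden_dim, batch_first=True,
                            bidirectional=True)
        # 1-D convolution over the BiGRU states extracts local n-gram features.
        self.conv = nn.Conv1d(2 * hidden_dim, num_filters, kernel_size)
        self.fc = nn.Linear(num_filters, num_classes)

    def forward(self, token_ids):             # (batch, seq_len)
        x = self.embed(token_ids)             # (batch, seq_len, embed_dim)
        h, _ = self.bigru(x)                  # (batch, seq_len, 2*hidden_dim)
        h = h.transpose(1, 2)                 # (batch, 2*hidden_dim, seq_len)
        c = torch.relu(self.conv(h))          # (batch, num_filters, L')
        pooled = torch.max(c, dim=2).values   # global max pooling over time
        return self.fc(pooled)                # class logits
```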
“…To obtain and filter the features of the relationship between the two parts, a BiLSTM model is applied; it filters the discourse features and yields a low-dimensional vector, reducing the complexity of the model. Other RNN models, such as BGRU [31], can also extract text features in the discourse semantic analysis task. However, to compare our results with the baseline models, which use LSTM or BiLSTM as their main network, we choose BiLSTM as the feature extraction network of our model.…”
Section: B. Feature Fusion (mentioning)
confidence: 99%
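A minimal sketch of the BiLSTM compression step the quote describes, i.e. reducing a discourse to a single low-dimensional vector. The embedding and hidden dimensions here are assumptions for illustration:

```python
import torch
import torch.nn as nn

# Hypothetical sizes; the quote only says BiLSTM yields a low-dimensional vector.
bilstm = nn.LSTM(input_size=300, hidden_size=64, batch_first=True,
                 bidirectional=True)

def extract_discourse_vector(embedded):   # embedded: (batch, seq_len, 300)
    _, (h_n, _) = bilstm(embedded)        # h_n: (2, batch, 64)
    # Concatenate the final forward/backward states into one 128-d vector.
    return torch.cat([h_n[0], h_n[1]], dim=1)   # (batch, 128)
```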
“…where $i$ represents the number of the current layer. Lin et al. [31] added a modulating factor $(1 - p_i)^\gamma$ to the cross-entropy loss, with a tunable focusing parameter $\gamma \ge 0$, while $\alpha_i \in [0, 1]$ balances the importance of positive and negative examples. Because the data distribution features of each child node are similar, the feature screening is the same as in (5).…”
Section: Improved Loss Function (mentioning)
confidence: 99%
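The modulating factor quoted above is the one from the standard focal loss of Lin et al. Here is a short sketch of the binary form as the statement describes it; it is not necessarily the exact per-layer variant used by the citing paper, and the default `alpha` and `gamma` values are conventional choices, not values taken from the source:

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    """Binary focal loss: cross entropy scaled by (1 - p_t)^gamma.

    alpha in [0, 1] balances positive vs. negative examples;
    gamma >= 0 down-weights easy, well-classified examples.
    """
    p = torch.sigmoid(logits)                    # predicted P(class = 1)
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = p * targets + (1 - p) * (1 - targets)  # probability of the true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    return (alpha_t * (1 - p_t) ** gamma * ce).mean()
```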