Published: 2021 | DOI: 10.1016/j.comcom.2021.02.020
An intelligent work order classification model for government service based on multi-label neural network

Cited by 6 publications (5 citation statements) | References 18 publications
“…In this paper, TextRCNN is used to extract the deep features of the microblog text. Transformer, TextRNN_Att, TextCNN, DPCNN, and TextRNN are used as comparison models, which are extensively utilized in the field of natural language processing (Huang et al., 2021; Yang et al., 2022). The experimental environment is Anaconda + Python 3.7 + PyTorch 1.13.1. The accuracy, recall, and F values of all classifiers are presented in Table 7.…”
Section: Results of TextRNN Extracting Deep Features (mentioning)
confidence: 99%
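The passage above describes TextRCNN-style deep feature extraction. As a rough, hypothetical sketch (pure Python, not the cited authors' PyTorch code), the core TextRCNN idea is to represent each token as [left context; word embedding; right context] and then apply max-over-time pooling to get a fixed-size document feature; the running-average recurrence below is a deliberately simplified stand-in for the recurrent cells:

```python
# Toy sketch of the TextRCNN feature-extraction idea (illustrative only;
# the recurrence is a simplified stand-in for the model's RNN cells).

def textrcnn_features(embeddings):
    """embeddings: list of token vectors (lists of floats), all the same length."""
    n = len(embeddings)
    dim = len(embeddings[0])
    # Left context: recurrence over preceding embeddings (left-to-right scan).
    left = [[0.0] * dim]
    for i in range(1, n):
        left.append([0.5 * l + 0.5 * e for l, e in zip(left[-1], embeddings[i - 1])])
    # Right context: same recurrence, scanning right-to-left.
    right = [[0.0] * dim for _ in range(n)]
    for i in range(n - 2, -1, -1):
        right[i] = [0.5 * r + 0.5 * e for r, e in zip(right[i + 1], embeddings[i + 1])]
    # Concatenate [left context; embedding; right context] per token.
    concat = [left[i] + embeddings[i] + right[i] for i in range(n)]
    # Max-over-time pooling yields a fixed-size feature vector for the text.
    return [max(col) for col in zip(*concat)]
```

The pooled vector has dimension 3 × the embedding size regardless of sentence length, which is what lets a downstream classifier consume variable-length texts.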
“…The model combines the advantages of convolutional neural networks and time-series neural networks. As shown in Figure 4, first, word embedding is performed on the microblog text; second, the word vector is encoded in both the forward and backward directions with a bidirectional long short-term memory network (BiLSTM) to mine long-distance dependencies and contextual semantic knowledge (Huang et al., 2021; Yang et al., 2022).…”
Section: Research Framework (mentioning)
confidence: 99%
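The bidirectional encoding step described above can be sketched with a toy single-unit recurrent cell (an assumed simplification standing in for an LSTM, not the cited model): the sequence is scanned forward and backward, and each position keeps both states, so every token sees context from both sides.

```python
import math

# Toy sketch of bidirectional sequence encoding. A single tanh unit stands
# in for an LSTM cell; weights are fixed illustrative values.

def birnn_encode(xs, w_in=0.8, w_rec=0.5):
    def run(seq):
        h, out = 0.0, []
        for x in seq:
            h = math.tanh(w_in * x + w_rec * h)  # recurrent update
            out.append(h)
        return out
    fwd = run(xs)                                # left-to-right states
    bwd = list(reversed(run(list(reversed(xs)))))  # right-to-left states
    # Each position gets (forward state, backward state).
    return list(zip(fwd, bwd))
```

In a real BiLSTM the two directional states are concatenated per token exactly as the pairs are here, before being passed to the next layer.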
“…For the MPQA data set, the text category is 2, the average sentence length is 3, the training set contains 9,500 words, and the testing set contains 1,105 words. The classification effects of LSTM (Chen et al., 2020), Bi-LSTM (Ye et al., 2019), TextCNN (Xie et al., 2020), TextRCNN (Huang et al., 2021), attention-based LSTM (ALSTM) (Liu et al., 2018), and attention-based Bi-LSTM (ABi-LSTM) (Song et al., 2018) are compared. The number of hidden layer neurons in the LSTM, Bi-LSTM, GRU, and RCNN models is set to 128, the text batch size is set to 100, the convolution kernel size in the CNN model is set to 3*3, and the number of convolution kernels is set to 128.…”
Section: Results (mentioning)
confidence: 99%
“…For the MPQA data set, the text category is 2, the average sentence length is 3, the training set contains 9,500 words, and the testing set contains 1,105 words. The classification effects of LSTM (Chen et al., 2020), Bi-LSTM (Ye et al., 2019), TextCNN (Xie et al., 2020), TextRCNN (Huang et al., 2021), attention-based LSTM (ALSTM) (Liu et al., 2018), and attention-based Bi-LSTM (ABi-LSTM) (Song et al., 2018) are compared. The difference in classification accuracy of the different models on different data sets is quantitatively analyzed, and the results are shown in Table 1. Compared with the LSTM, Bi-LSTM, TextCNN, TextRCNN, ALSTM, and ABi-LSTM models, the classification accuracy of the constructed ALGCNN model is improved on the MR data set by 26.5, 32.3, 6.6, 3.9, 9.0, and 3.5%, respectively; on the SST data set by 25.0, 29.8, 2.0, 0.0, 14.2, and 8.0%, respectively; and on the MPQA data set by 20.6, 17.3, 6.0, 2.1, 9.2, and 2.9%, respectively.…”
Section: Simulation Verification of ALGCNN Model (mentioning)
confidence: 99%
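The hyperparameters quoted above can be restated as a configuration fragment; the dict name and layout below are illustrative assumptions, not the cited authors' code, but the values are taken from the passage:

```python
# Hypothetical restatement of the quoted comparison setup as a config dict.
COMPARISON_CONFIG = {
    "models": ["LSTM", "Bi-LSTM", "TextCNN", "TextRCNN", "ALSTM", "ABi-LSTM"],
    "hidden_units": 128,       # LSTM / Bi-LSTM / GRU / RCNN hidden layer size
    "batch_size": 100,         # text batch size
    "cnn_kernel_size": (3, 3), # convolution kernel size in the CNN model
    "cnn_num_kernels": 128,    # number of convolution kernels
}
```

Collecting shared settings in one place like this makes it explicit that all six models were compared under matched capacity and batching.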
“…Textual entailment recognition, also known as natural language inference, is a fundamental yet challenging task in the field of natural language processing [4]. The goal of this task is to determine the directed semantic relationship between two consecutive texts, where the entailing antecedent is denoted as the text T and the entailed consequent as the hypothesis H [5].…”
(mentioning)
confidence: 99%
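The task setup described above, labeling the directed relation between a text T and a hypothesis H, can be illustrated with a deliberately naive lexical-overlap heuristic. Real NLI models learn this mapping from data; the function, threshold, and labels here are purely hypothetical:

```python
# Naive illustration of the (text T, hypothesis H) -> label setup.
# Lexical overlap is a toy stand-in for a learned entailment model.

def naive_entailment(text, hypothesis, threshold=0.8):
    t_words = set(text.lower().split())
    h_words = set(hypothesis.lower().split())
    # Fraction of hypothesis words covered by the text.
    overlap = len(h_words & t_words) / len(h_words)
    return "entailment" if overlap >= threshold else "neutral"
```

Note the direction matters: swapping T and H generally changes the overlap score, which mirrors the directed nature of the entailment relation.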