2021
DOI: 10.1155/2021/6664893

An Improved Double Channel Long Short-Term Memory Model for Medical Text Classification

Abstract: Medical and healthcare Internet communities contain a large number of symptom consultation texts, and Chinese word segmentation in the health domain is comparatively complex, which leads to low accuracy in existing medical text classification algorithms. Deep learning models have an advantage in extracting abstract features of text effectively. However, for large volumes of complex text data, and especially for words with ambiguous meanings in the field of Chinese medical diagnosis, the word-level neural network …
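To make the double-channel idea concrete, below is a minimal sketch of a dual-channel LSTM text classifier in Keras. The channel layout (a word-level channel plus a character-level channel), the layer sizes, and the vocabulary sizes are illustrative assumptions, not details taken from the paper's truncated abstract.

```python
# Minimal sketch of a dual-channel LSTM text classifier (illustrative only).
# The word-level + character-level channel split is an assumption, not the
# paper's confirmed architecture.
import tensorflow as tf
from tensorflow.keras import layers, Model

VOCAB_WORDS, VOCAB_CHARS, NUM_CLASSES = 20000, 4000, 10
MAX_WORDS, MAX_CHARS = 100, 300

# Channel 1: word-level token sequence
word_in = layers.Input(shape=(MAX_WORDS,), name="words")
w = layers.Embedding(VOCAB_WORDS, 128)(word_in)
w = layers.LSTM(64)(w)

# Channel 2: character-level token sequence
char_in = layers.Input(shape=(MAX_CHARS,), name="chars")
c = layers.Embedding(VOCAB_CHARS, 64)(char_in)
c = layers.LSTM(64)(c)

# Merge the two channels and classify
merged = layers.concatenate([w, c])
out = layers.Dense(NUM_CLASSES, activation="softmax")(merged)

model = Model(inputs=[word_in, char_in], outputs=out)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```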

Cited by 8 publications (5 citation statements)
References 16 publications

Citation statements:
“…However, the training speed of the CBOW model is slow. Inspired by the FastText [36] model, some scholars proposed some models such as CNN-LSTM [37] and DC-LSTM [38], and they used FastText to calculate word vectors and used LSTM model to solve the difficulty of information transmission caused by long sequence data. Edara et al [31] analyzed moods of various cancer affected patients by collecting tweets from different online cancer-supported communities.…”
Section: Plos One (mentioning)
confidence: 99%
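As a rough illustration of the FastText-plus-LSTM pipeline this citing paper describes, the sketch below trains subword-aware FastText word vectors with gensim and feeds them into a Keras LSTM classifier. The toy corpus, vector dimensions, and two-class output are assumptions for illustration, not the cited authors' actual setup.

```python
# Generic FastText-embeddings + LSTM pipeline (illustrative sketch).
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model
from gensim.models import FastText

# Toy tokenized corpus; real medical texts would be segmented consultation sentences.
sentences = [["chest", "pain", "and", "fever"],
             ["persistent", "cough", "with", "fever"]]

# 1. Learn subword-aware word vectors with FastText.
ft = FastText(sentences=sentences, vector_size=100, window=3, min_count=1, epochs=10)

# 2. Build an embedding matrix aligned with an index vocabulary (0 reserved for padding).
vocab = {w: i + 1 for i, w in enumerate(ft.wv.index_to_key)}
emb = np.zeros((len(vocab) + 1, 100))
for w, i in vocab.items():
    emb[i] = ft.wv[w]

# 3. Feed the pretrained vectors into an LSTM classifier.
inp = layers.Input(shape=(None,), dtype="int32")
x = layers.Embedding(len(vocab) + 1, 100,
                     embeddings_initializer=tf.keras.initializers.Constant(emb),
                     mask_zero=True, trainable=False)(inp)
x = layers.LSTM(64)(x)
out = layers.Dense(2, activation="softmax")(x)

model = Model(inp, out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```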
“…Venkataraman et al 31 trained an RNN to automate clinical record assignment. Liang et al 32 improved LSTM with a dual-channel mechanism for Chinese medical text classification. Bangyal et al 33 applied RNN and LSTM to COVID-19 fake news detection, a medical text classification task.…”
Section: Related Work (mentioning)
confidence: 99%
“…A two-channel LSTM method that is commonly used in areas such as text classification (Liang et al 2021) and that uses both the Niño-3.4 index and SC-PRECIP as inputs to predict both of the indices and to capture the nonlinear shared variability of the two indices also performs poorly in comparison with the MIMO-AE index (Fig. 10).…”
Section: SC-PRECIP Predictability From MIMO-AE Index (mentioning)
confidence: 99%
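For context, a generic two-input, two-output LSTM of the kind described above might look like the sketch below; the lookback window, layer sizes, and the variable names are assumptions for illustration, not taken from the cited study.

```python
# Sketch of a two-channel LSTM that takes two index time series as inputs and
# predicts both (hyperparameters and layout are illustrative assumptions).
import tensorflow as tf
from tensorflow.keras import layers, Model

LOOKBACK = 12  # time steps of history per input series (assumed)

nino_in = layers.Input(shape=(LOOKBACK, 1), name="nino34")
prec_in = layers.Input(shape=(LOOKBACK, 1), name="sc_precip")

h1 = layers.LSTM(32)(nino_in)
h2 = layers.LSTM(32)(prec_in)
shared = layers.concatenate([h1, h2])  # shared variability of the two channels

nino_out = layers.Dense(1, name="nino34_next")(shared)
prec_out = layers.Dense(1, name="sc_precip_next")(shared)

model = Model([nino_in, prec_in], [nino_out, prec_out])
model.compile(optimizer="adam", loss="mse")
```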
“…Multitask learning, on the other hand, solves multiple learning tasks at the same time while exploiting commonalities and differences across tasks (Caruana 1997). It has been applied to many problems including natural language processing (Chen et al 2021), speech recognition (Toshniwal et al 2017), and computer vision (Cipolla et al 2018) to improve the prediction accuracy and learning efficiency of task-specific models. Ghifary et al (2015) implemented multitask learning with a multioutput autoencoder, for related cross-domain object recognition.…”
Section: Introduction (mentioning)
confidence: 99%
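A minimal hard-parameter-sharing example of multitask learning is sketched below: one shared encoder feeds two task-specific heads, exploiting commonalities across tasks while keeping task-specific outputs. This is a generic illustration, not the multioutput autoencoder of Ghifary et al. (2015); the layer sizes and task types are assumptions.

```python
# Hard-parameter-sharing multitask model: shared encoder, two task heads.
import tensorflow as tf
from tensorflow.keras import layers, Model

inp = layers.Input(shape=(32,))
shared = layers.Dense(64, activation="relu")(inp)     # shared representation
shared = layers.Dense(64, activation="relu")(shared)

# Task-specific heads: a classification task and a regression task.
cls_out = layers.Dense(3, activation="softmax", name="cls")(shared)
reg_out = layers.Dense(1, name="reg")(shared)

model = Model(inp, [cls_out, reg_out])
model.compile(optimizer="adam",
              loss={"cls": "sparse_categorical_crossentropy", "reg": "mse"},
              loss_weights={"cls": 1.0, "reg": 0.5})
```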