2020
DOI: 10.1016/j.neucom.2019.08.080
Incorporating context-relevant concepts into convolutional neural networks for short text classification

Cited by 61 publications (25 citation statements)
References 1 publication
“…The task of multi-text reading comprehension is to screen out the important content and answers from multiple documents, where the answer appears in only part of a document. When content and answers must be extracted from multiple documents, the presence of multiple sources of interfering information increases, to a certain extent, the difficulty of machine reading comprehension and of screening candidate answers [14][15][16]. Faced with massive document collections, multi-text machine reading comprehension must first retrieve and filter the documents, and then perform single-text retrieval over the filtered documents to capture their content and answers.…”
Section: Model Framework Of English Multitext Readingmentioning
confidence: 99%
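The two-stage pipeline described above (first retrieve and filter multiple documents, then run single-text retrieval over the survivors) can be sketched as follows. The term-overlap scorer, function names, and toy documents are illustrative assumptions, not the cited paper's actual method; a real system would use BM25 or dense retrieval for both stages.

```python
def score(query: str, text: str) -> float:
    # Naive term-overlap score between query and text (illustrative only;
    # a production system would use BM25 or dense embeddings here).
    q = set(query.lower().split())
    t = set(text.lower().split())
    return len(q & t) / (len(q) or 1)

def two_stage_retrieve(query, documents, top_docs=2, top_passages=1):
    # Stage 1: retrieve and filter whole documents from the collection.
    ranked_docs = sorted(documents, key=lambda d: score(query, d),
                         reverse=True)[:top_docs]
    # Stage 2: single-text retrieval — split each surviving document into
    # passages and rank those against the query.
    passages = [p for d in ranked_docs for p in d.split(". ")]
    return sorted(passages, key=lambda p: score(query, p),
                  reverse=True)[:top_passages]

docs = [
    "The cat sat on the mat. Cats like warm places.",
    "Stock prices fell sharply. Markets reacted to the news.",
    "Dogs chase cats. The mat was red.",
]
print(two_stage_retrieve("where did the cat sit", docs))
# → ['The cat sat on the mat']
```

The coarse-then-fine structure is what keeps the interfering documents mentioned in the excerpt from reaching the expensive second stage.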
“…Thus, introducing contextual information provides supplementary knowledge that improves classification accuracy. Context-aware techniques are widely used in natural language processing (NLP) tasks such as text classification (Xu et al., 2020), while few-shot learning is most often applied to image classification, where modeling context remains a meaningful challenge in computer vision (Bell et al., 2016; Kantorov et al., 2016). Context has also been exploited for object detection (Chen and Gupta, 2017; Wang et al., 2018) and image classification (Zhang et al., 2020), and Kamara et al. (2020) combine contextual information with time-series classification.…”
Section: Related Workmentioning
confidence: 99%
“…Convolutional Neural Network. Convolutional neural networks (CNNs) have demonstrated exceptional success in NLP tasks such as document classification, language modeling, and machine translation (Lin et al., 2018). As Xu et al. (2020) described, CNN models deliver consistent performance across various text types, including short sequences. We evaluated a CNN architecture (Shen et al., 2018) with a convolutional layer followed by batch normalization, ReLU, and a dropout layer, which was followed by a max-pooling layer.…”
Section: Model Architecture and Trainingmentioning
confidence: 99%
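The convolution → ReLU → max-pooling path described in the excerpt can be illustrated with a minimal NumPy sketch over token embeddings. Batch normalization and dropout are omitted for brevity, and all shapes, names, and random values are illustrative assumptions rather than the evaluated architecture:

```python
import numpy as np

def conv1d(x, kernels, bias):
    # x: (seq_len, emb_dim) token embeddings;
    # kernels: (n_filters, k, emb_dim) — each filter spans k tokens.
    n_filters, k, _ = kernels.shape
    seq_len = x.shape[0]
    out = np.empty((seq_len - k + 1, n_filters))
    for i in range(seq_len - k + 1):
        window = x[i:i + k]  # (k, emb_dim) slice of consecutive tokens
        # Contract each filter against the window -> one value per filter.
        out[i] = np.tensordot(kernels, window, axes=([1, 2], [0, 1])) + bias
    return out

def relu(x):
    return np.maximum(x, 0.0)

def max_pool(x):
    # Global max-pooling over the time axis -> one feature per filter.
    return x.max(axis=0)

rng = np.random.default_rng(0)
emb = rng.normal(size=(10, 8))        # 10 tokens, 8-dim embeddings (toy)
kernels = rng.normal(size=(4, 3, 8))  # 4 filters of width 3 (toy)
bias = np.zeros(4)
features = max_pool(relu(conv1d(emb, kernels, bias)))
print(features.shape)  # (4,) — a fixed-size feature vector per sequence
```

Global max-pooling is what makes the output size independent of sequence length, which is one reason such architectures behave consistently on short texts.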