2019
DOI: 10.1007/978-3-030-20521-8_49
Multi-input CNN for Text Classification in Commercial Scenarios


Cited by 5 publications (5 citation statements)
References 12 publications
“…Due to its characteristics, it is difficult to draw any conclusions about the combination of text encoding techniques. What was already confirmed in [24] is that BPE tokenisation has a very positive effect, especially on agglutinative and inflected languages such as Swedish, where the difference in scores over other techniques was significant. However, in this work, the techniques applied at the BPE level always appear in the final combination of embeddings.…”
Section: Results Analysis
confidence: 60%
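The excerpt turns on BPE tokenisation splitting inflected word forms into shared subword units. A minimal sketch of that effect follows, using the Hugging Face `tokenizers` library and a toy Swedish corpus; both the library choice and the corpus are illustrative assumptions, not the cited work's actual setup.

```python
# Minimal BPE tokenisation sketch (assumption: the cited work's exact
# pipeline is not shown here; this uses the Hugging Face `tokenizers` library).
from tokenizers import Tokenizer
from tokenizers.models import BPE
from tokenizers.trainers import BpeTrainer
from tokenizers.pre_tokenizers import Whitespace

tokenizer = Tokenizer(BPE(unk_token="[UNK]"))
tokenizer.pre_tokenizer = Whitespace()
trainer = BpeTrainer(vocab_size=60, special_tokens=["[UNK]"])

# Tiny toy corpus of Swedish inflected forms ("child", "the child",
# "the child's", "the children", "the children's", and likewise for "house").
corpus = ["barn barnet barnets barnen barnens",
          "hus huset husets husen husens"]
tokenizer.train_from_iterator(corpus, trainer)

# Inflected forms decompose into overlapping subword pieces, e.g. a stem
# like "barn" plus suffix pieces, depending on the merges learned.
print(tokenizer.encode("barnens").tokens)
```

Because the inflected variants share subword pieces with the stem, the classifier sees far fewer out-of-vocabulary forms than with whole-word tokens, which is the advantage the excerpt reports for agglutinative and inflected languages.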
“…In this work, the multi-input Convolutional Neural Network from [24] was expanded to make it possible to train the network with any text encoding technique as input. We included experiments using four different text encoding techniques: the Keras embedding layer, GloVe, BERT embeddings, and ParagraphVector.…”
Section: Results Analysis
confidence: 99%
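For readers unfamiliar with the multi-input architecture being extended here, a minimal Keras sketch of a CNN with one convolutional branch per text encoding follows. The two-branch setup, layer sizes, sequence lengths, and five output classes are illustrative assumptions, not the exact architecture of [24].

```python
# Minimal sketch of a multi-input CNN: one convolutional branch per text
# encoding. All sizes are illustrative assumptions, not the setup of [24].
from tensorflow.keras import layers, Model

def conv_branch(seq_len, dim, name):
    # Each branch consumes a pre-computed embedding sequence (seq_len, dim).
    inp = layers.Input(shape=(seq_len, dim), name=name)
    x = layers.Conv1D(64, 3, activation="relu")(inp)
    x = layers.GlobalMaxPooling1D()(x)
    return inp, x

# Branch 1: e.g. word-level vectors (GloVe-like); branch 2: e.g. BPE-level
# vectors. The pairing is hypothetical; any encodings could be plugged in.
in1, out1 = conv_branch(50, 100, "word_encoding")
in2, out2 = conv_branch(80, 100, "bpe_encoding")

merged = layers.concatenate([out1, out2])
hidden = layers.Dense(64, activation="relu")(merged)
pred = layers.Dense(5, activation="softmax")(hidden)

model = Model(inputs=[in1, in2], outputs=pred)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```

Feeding pre-computed embedding sequences as floating-point inputs, rather than token ids, is one way a single network can accept different encoding techniques interchangeably, which matches the excerpt's claim of training with "any text encoding technique as input".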
“…To address the scarcity problem of MT parallel data in specific domains, data selection methods utilise initial in-domain training data to select relevant additional sentences from a generic parallel corpus. Previous research has used n-gram language models (Moore and Lewis, 2010; Axelrod et al., 2011; Duh et al., 2013), count-based methods (Parcheta et al., 2018), and similarity scores of sentence embeddings (Wang et al., 2017; Junczys-Dowmunt, 2018; Dou et al., 2020) to rank the generic corpus. The ranking and selection process often operates in the same language, either the source or the target language, and takes advantage of the parallel corpus to retrieve the paired translation (Farajian et al., 2017).…”
Section: Unsupervised Domain Adaptation of NMT
confidence: 99%
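Of the ranking approaches listed, the Moore-Lewis method scores each generic sentence by the cross-entropy difference between an in-domain and a generic language model. A compact sketch follows, with add-one-smoothed unigram models standing in for the smoothed higher-order n-gram LMs used in practice; the corpora below are illustrative assumptions.

```python
# Sketch of Moore-Lewis style data selection. Real systems use smoothed
# higher-order n-gram LMs; unigrams keep the sketch short. The toy corpora
# are illustrative assumptions.
import math
from collections import Counter

def train_unigram(sentences):
    counts = Counter(t for s in sentences for t in s.split())
    return counts, sum(counts.values())

def avg_logprob(tokens, counts, total, vsize):
    # Add-one smoothed per-token log-probability.
    return sum(math.log((counts[t] + 1) / (total + vsize)) for t in tokens) / len(tokens)

in_domain = ["the patient received a dose", "clinical trial results were reported"]
generic = ["the stock market fell", "the patient was given medication", "he plays football"]

in_counts, in_total = train_unigram(in_domain)
gen_counts, gen_total = train_unigram(generic)
vocab = len(set(in_counts) | set(gen_counts))

# Moore-Lewis ranks by log P_in(s) - log P_gen(s): sentences that look more
# in-domain than generic score higher and are selected first.
scored = []
for s in generic:
    toks = s.split()
    score = (avg_logprob(toks, in_counts, in_total, vocab)
             - avg_logprob(toks, gen_counts, gen_total, vocab))
    scored.append((score, s))

# The medical sentence ranks highest here, as intended.
for score, s in sorted(scored, reverse=True):
    print(f"{score:+.3f}  {s}")
```

In practice the top-ranked fraction of the generic corpus is added to the in-domain training data, which is the selection step the excerpt describes.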
“…Text classification using a multi-input convolutional neural network is proposed in [10]. In this paper, the author performed pre-processing at two levels, i.e.…”
Section: Introduction
confidence: 99%
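The excerpt is truncated, but read together with the BPE discussion in the first excerpt, the two pre-processing levels plausibly refer to word-level and subword-level tokenisation. That reading is an assumption, and the toy merge table below is purely hypothetical.

```python
# Hypothetical sketch of pre-processing at two levels (assumption: the
# truncated excerpt leaves the levels unnamed; word-level and subword-level
# tokenisation are inferred from the BPE discussion above).
text = "unbelievably good prices"

# Level 1: word-level tokens.
word_tokens = text.lower().split()

# Level 2: subword tokens from a toy merge table. A real system would use
# merges learned by BPE training, as in the first sketch above.
toy_subwords = {"unbelievably": ["un", "believ", "ably"],
                "good": ["good"],
                "prices": ["price", "s"]}
subword_tokens = [piece for w in word_tokens for piece in toy_subwords.get(w, [w])]

print(word_tokens)     # ['unbelievably', 'good', 'prices']
print(subword_tokens)  # ['un', 'believ', 'ably', 'good', 'price', 's']
```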