2020
DOI: 10.1109/access.2020.2972925

Multi-Level Cross-Lingual Transfer Learning With Language Shared and Specific Knowledge for Spoken Language Understanding

Abstract: Recently, conversational agents have effectively improved their understanding capabilities with neural networks. Such deep neural models, however, do not apply to most human languages due to the lack of annotated training data for various NLP tasks. In this paper, we propose a multi-level cross-lingual transfer model with language-shared and language-specific knowledge to improve the spoken language understanding of low-resource languages. Our method explicitly separates the model into a language-shared part and a language-specif…
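The shared/specific separation described in the abstract can be pictured with a small sketch. The following is a minimal PyTorch illustration, not the authors' released implementation: the BiLSTM choice, the dimension sizes, the module names, and the concatenation of shared and private states are all assumptions made for illustration.

```python
# Minimal sketch of a language-shared / language-specific encoder split
# (illustrative only; names and dimensions are assumptions).
import torch
import torch.nn as nn

class SharedSpecificEncoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=100, hidden=64, languages=("en", "es")):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Language-shared BiLSTM: updated by data from every language.
        self.shared = nn.LSTM(emb_dim, hidden, bidirectional=True, batch_first=True)
        # Language-specific BiLSTMs: one private encoder per language.
        self.specific = nn.ModuleDict({
            lang: nn.LSTM(emb_dim, hidden, bidirectional=True, batch_first=True)
            for lang in languages
        })

    def forward(self, token_ids, lang):
        x = self.embed(token_ids)                 # (batch, seq, emb_dim)
        shared_out, _ = self.shared(x)            # (batch, seq, 2*hidden)
        specific_out, _ = self.specific[lang](x)  # (batch, seq, 2*hidden)
        # Concatenate the shared and private views of each token.
        return torch.cat([shared_out, specific_out], dim=-1)

# Usage: encode an English batch of 3 sentences, 5 tokens each.
enc = SharedSpecificEncoder(vocab_size=1000)
reps = enc(torch.randint(0, 1000, (3, 5)), lang="en")
print(reps.shape)  # torch.Size([3, 5, 256])
```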

Cited by 17 publications (10 citation statements). References 29 publications (39 reference statements).
“…For a fair comparison, we adopt the same network architecture BiLSTM (Mesnil et al., 2015; Liu and Lane, 2016; Weiran and Chunyun, 2016; Goo et al., 2018; Haihong et al., 2019; Xu et al., 2020; He et al., 2020b; He et al., 2020a) as (Lin and Xu, 2019). We train the BiLSTM on the in-domain data and employ the pre-trained classifier as a feature extractor.…”
Section: Neural Intent Classifier
confidence: 99%
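The statement above describes training a BiLSTM intent classifier on in-domain data and then reusing it as a frozen feature extractor. A hedged sketch of that pattern follows; the class and variable names are illustrative assumptions, not taken from any of the cited papers.

```python
# Sketch: BiLSTM intent classifier reused as a frozen feature extractor.
import torch
import torch.nn as nn

class BiLSTMIntentClassifier(nn.Module):
    def __init__(self, vocab_size, num_intents, emb_dim=100, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.bilstm = nn.LSTM(emb_dim, hidden, bidirectional=True, batch_first=True)
        self.out = nn.Linear(2 * hidden, num_intents)

    def features(self, token_ids):
        # Mean-pool the BiLSTM states into a fixed-size utterance vector.
        states, _ = self.bilstm(self.embed(token_ids))
        return states.mean(dim=1)                  # (batch, 2*hidden)

    def forward(self, token_ids):
        return self.out(self.features(token_ids))  # intent logits

# After in-domain training, freeze the model and extract features only.
model = BiLSTMIntentClassifier(vocab_size=1000, num_intents=7)
model.eval()
with torch.no_grad():
    feats = model.features(torch.randint(0, 1000, (4, 6)))
print(feats.shape)  # torch.Size([4, 256])
```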
“…Joint versus Separate Training. NLU approaches can be divided into two groups depending on whether they tackle intent classification and slot filling (i) jointly, in multi-task training regimes (Schuster et al., 2019a; Liu et al., 2019b; Bunk et al., 2020, inter alia), or (ii) independently, addressing only one of the tasks or training an independent model for each of them (Ren and Xue, 2020; He et al., 2020; Arora et al., 2020, inter alia). Joint multi-task training, besides potentially reducing the number of parameters, is advantageous for NLU (Zhang et al., 2019b), as the two tasks are clearly interdependent: intuitively, the slots for which values may be provided in an utterance also depend on the intent of the utterance.…”
Section: Natural Language Understanding (NLU)
confidence: 99%
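The interdependence argument above is what joint multi-task training exploits: one shared encoder feeds both an utterance-level intent head and a token-level slot head, and the two losses are combined in a single backward pass. A minimal PyTorch sketch under assumed names and an unweighted loss sum:

```python
# Sketch: joint intent classification + slot filling with a shared encoder.
import torch
import torch.nn as nn

class JointNLU(nn.Module):
    def __init__(self, vocab_size, num_intents, num_slots, emb_dim=100, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hidden, bidirectional=True, batch_first=True)
        self.intent_head = nn.Linear(2 * hidden, num_intents)  # per utterance
        self.slot_head = nn.Linear(2 * hidden, num_slots)      # per token

    def forward(self, token_ids):
        states, _ = self.encoder(self.embed(token_ids))
        intent_logits = self.intent_head(states.mean(dim=1))   # (batch, num_intents)
        slot_logits = self.slot_head(states)                   # (batch, seq, num_slots)
        return intent_logits, slot_logits

model = JointNLU(vocab_size=1000, num_intents=7, num_slots=12)
tokens = torch.randint(0, 1000, (4, 6))
intent_gold = torch.randint(0, 7, (4,))
slot_gold = torch.randint(0, 12, (4, 6))
intent_logits, slot_logits = model(tokens)
loss = nn.functional.cross_entropy(intent_logits, intent_gold) \
     + nn.functional.cross_entropy(slot_logits.reshape(-1, 12), slot_gold.reshape(-1))
loss.backward()  # one backward pass updates the shared encoder for both tasks
```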
“…Adversarial approaches, which aim to account for linguistic differences across languages by dividing the model into language-shared and language-specific representations, have been explored for the SLU sub-tasks. Recently, He et al. (2020) investigated the sub-tasks in isolation using BiLSTMs and focused on improving SLU for low-resource languages. Meanwhile, Chen et al. (2019b) explored BiLSTMs to improve named entity recognition, which is close to the SF sub-task of SLU.…”
Section: Related Work
confidence: 99%
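The adversarial division into language-shared and language-specific representations mentioned above is commonly realised with a language discriminator trained through a gradient-reversal layer on the shared representation. The sketch below shows only that generic mechanism; the `GradReverse` class, the lambda value, and the pooled `shared_repr` tensor are illustrative assumptions rather than the cited papers' exact setup.

```python
# Sketch: adversarial language discriminator with gradient reversal.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse (and scale) the gradient flowing back into the shared encoder.
        return -ctx.lam * grad_output, None

shared_dim, num_languages = 256, 2
discriminator = nn.Sequential(
    nn.Linear(shared_dim, 64), nn.ReLU(), nn.Linear(64, num_languages)
)

# Stand-in for the pooled output of a shared encoder over a batch of 8 utterances.
shared_repr = torch.randn(8, shared_dim, requires_grad=True)
lang_labels = torch.randint(0, num_languages, (8,))

logits = discriminator(GradReverse.apply(shared_repr, 1.0))
adv_loss = nn.functional.cross_entropy(logits, lang_labels)
adv_loss.backward()
# The discriminator learns to identify the language, while the reversed gradient
# discourages the shared representation from encoding language identity.
print(shared_repr.grad.shape)  # torch.Size([8, 256])
```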