2019 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU)
DOI: 10.1109/asru46091.2019.9003902

Transfer Learning for Context-Aware Spoken Language Understanding

Abstract: Spoken language understanding (SLU) is a key component of task-oriented dialogue systems. SLU parses natural language user utterances into semantic frames. Previous work has shown that incorporating context information significantly improves SLU performance for multi-turn dialogues. However, collecting a large-scale human-labeled multi-turn dialogue corpus for the target domains is complex and costly. To reduce dependency on the collection and annotation effort, we propose a Context Encoding Language Transform…
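
For context, a semantic frame pairs an utterance-level intent with the slot-value pairs extracted from the utterance. The sketch below shows one such frame; the utterance, intent, and slot names are illustrative examples, not taken from the paper.

    # A semantic frame produced by SLU for a single user utterance.
    # Utterance, intent, and slot names here are hypothetical examples.
    utterance = "book a flight from Boston to Denver tomorrow"
    semantic_frame = {
        "intent": "book_flight",
        "slots": {
            "from_city": "Boston",
            "to_city": "Denver",
            "date": "tomorrow",
        },
    }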

Cited by 5 publications (5 citation statements) · References 26 publications (45 reference statements)

Citation statements:
“…The BERT model [Devlin (2019)] and its derivatives have been fine-tuned for dialogue act classification, achieving state-of-the-art results by leveraging deep bidirectional representations. Furthermore, the JointBERT model [Chen (2019)] combines intent detection and slot filling with dialogue act classification, showcasing the power of multi-task learning.…”
Section: Related Work (citation type: mentioning, confidence: 99%)
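
As a rough illustration of the JointBERT-style setup mentioned in the statement above, the sketch below puts an intent-classification head on BERT's pooled [CLS] output and a per-token slot-tagging head on its sequence output, trained with a summed multi-task loss. It assumes PyTorch and Hugging Face transformers; the label counts and loss weighting are illustrative, not taken from either cited paper.

    import torch
    import torch.nn as nn
    from transformers import BertModel

    class JointIntentSlotModel(nn.Module):
        """BERT encoder with two heads: utterance-level intent and per-token slots."""
        def __init__(self, num_intents=10, num_slot_labels=20):
            super().__init__()
            self.bert = BertModel.from_pretrained("bert-base-uncased")
            hidden = self.bert.config.hidden_size
            self.intent_head = nn.Linear(hidden, num_intents)    # on pooled [CLS]
            self.slot_head = nn.Linear(hidden, num_slot_labels)  # on each token

        def forward(self, input_ids, attention_mask):
            out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
            intent_logits = self.intent_head(out.pooler_output)      # (B, num_intents)
            slot_logits = self.slot_head(out.last_hidden_state)      # (B, T, num_slot_labels)
            return intent_logits, slot_logits

    def joint_loss(intent_logits, slot_logits, intent_labels, slot_labels, pad_label_id=-100):
        # Multi-task objective: sum of intent and slot cross-entropies,
        # ignoring padded slot positions.
        ce = nn.CrossEntropyLoss(ignore_index=pad_label_id)
        intent_loss = ce(intent_logits, intent_labels)
        slot_loss = ce(slot_logits.reshape(-1, slot_logits.size(-1)), slot_labels.reshape(-1))
        return intent_loss + slot_loss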
“…(2013) found that using previous utterances as contexts in an SVM-HMM SLU system could help resolve ambiguities. Recently, deep learning approaches have become increasingly popular for incorporating contextual information (Qin et al. 2021; Su, Yuan, and Chen 2019; Abro et al. 2019; Chen et al. 2019; Su, Yuan, and Chen 2018; Gupta, Rastogi, and Hakkani-Tur 2018; Chen et al. 2016; Wei et al. 2021). … proposed to use end-to-end memory networks to model previous utterance transcripts in multi-turn dialogues.…”
Section: Introduction (citation type: mentioning, confidence: 99%)
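
To make the memory-network idea in the statement above concrete, the sketch below encodes previous utterances as memory vectors and lets the current utterance attend over them; the fused vector can feed a downstream SLU tagger. The dimensions, the additive fusion, and the single-hop attention are simplified stand-ins, not the exact architecture of any cited paper.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class UtteranceMemory(nn.Module):
        """Attend over encodings of previous utterances (memory-network style)."""
        def __init__(self, dim=256):
            super().__init__()
            self.query_proj = nn.Linear(dim, dim)

        def forward(self, current_utt, history_utts):
            # current_utt:  (B, dim)    encoding of the current utterance
            # history_utts: (B, N, dim) encodings of N previous utterances
            q = self.query_proj(current_utt).unsqueeze(1)        # (B, 1, dim)
            scores = torch.bmm(q, history_utts.transpose(1, 2))  # (B, 1, N)
            attn = F.softmax(scores, dim=-1)
            context = torch.bmm(attn, history_utts).squeeze(1)   # (B, dim)
            # Fuse dialogue context with the current utterance for downstream SLU.
            return current_utt + context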
“…Contexts have been shown to significantly improve performance separately for ASR [7-15] and NLU [11, 16-21].…”
[Figure example: Dialogue Act: [(a=REQUEST, s=Item), (a=REQUEST, s=Quantity)]]
Section: Introduction (citation type: mentioning, confidence: 99%)
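
The dialogue act recovered from the figure is simply a list of (act, slot) pairs. The sketch below is one straightforward way to represent it; the act and slot names come from the figure itself, while the ActSlot type is a hypothetical helper.

    from typing import NamedTuple

    class ActSlot(NamedTuple):
        act: str   # dialogue act type, e.g. REQUEST
        slot: str  # slot the act applies to

    # The turn from the figure: request both the item and its quantity.
    dialogue_act = [ActSlot("REQUEST", "Item"), ActSlot("REQUEST", "Quantity")]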