2016 IEEE Spoken Language Technology Workshop (SLT)
DOI: 10.1109/slt.2016.7846314

Neural dialog state tracker for large ontologies by attention mechanism

Cited by 14 publications (11 citation statements)
References 10 publications
“…To motivate the work presented here, we categorise prior research according to their reliance (or otherwise) on a separate SLU module for interpreting user utterances: 1 Separate SLU Traditional SDS pipelines use Spoken Language Understanding (SLU) decoders to detect slot-value pairs expressed in the Automatic Speech Recognition (ASR) output. The downstream DST model then combines this information with the past dialogue context to update the belief state (Wang and Lemon, 2013; Perez, 2016; Sun et al., 2016; Jang et al., 2016; Shi et al., 2016; Dernoncourt et al., 2016; Vodolán et al., 2017).…”
Section: Introduction (mentioning)
Confidence: 99%
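The separate-SLU pipeline described in this excerpt can be illustrated with a minimal sketch. The code below is a hypothetical toy, not the cited paper's tracker: it assumes an SLU decoder that emits (slot, value, confidence) tuples for each user turn, and a rule-based DST that merges that evidence into the belief state turn by turn using a simple noisy-or combination. The `update_belief` helper and the combination rule are illustrative assumptions.

```python
from typing import Dict, List, Tuple

SlotValue = Tuple[str, str]                # e.g. ("food", "italian")
SluHyps = List[Tuple[str, str, float]]     # (slot, value, confidence) per turn


def update_belief(belief: Dict[SlotValue, float],
                  slu_hyps: SluHyps) -> Dict[SlotValue, float]:
    """One DST turn: merge new SLU evidence into the previous belief.

    Uses a noisy-or style accumulation, a common rule-based baseline:
    p_new = 1 - (1 - p_old) * (1 - p_obs).
    """
    new_belief = dict(belief)
    for slot, value, conf in slu_hyps:
        old = new_belief.get((slot, value), 0.0)
        new_belief[(slot, value)] = 1.0 - (1.0 - old) * (1.0 - conf)
    return new_belief


# Usage: the belief starts empty and is updated after every user turn,
# using whatever slot-value hypotheses the SLU decoder extracted from
# the ASR output for that turn.
belief: Dict[SlotValue, float] = {}
belief = update_belief(belief, [("food", "italian", 0.7), ("area", "north", 0.4)])
belief = update_belief(belief, [("food", "italian", 0.6)])
print(belief[("food", "italian")])   # 0.88: evidence accumulated over turns
```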
“…Separate SLU Traditional SDS pipelines use Spoken Language Understanding (SLU) decoders to detect slot-value pairs expressed in the Automatic Speech Recognition (ASR) output. The downstream DST model then combines this information with the past dialogue context to update the belief state (Wang and Lemon, 2013; Perez, 2016; Sun et al., 2016; Jang et al., 2016; Shi et al., 2016; Dernoncourt et al., 2016; Vodolán et al., 2017).…”
Section: Introduction (mentioning)
Confidence: 99%
“…The dialogue state tracking (DST) problem has attracted the research community for years. The traditional DST models focus on single-domain dialogue state tracking (Wang and Lemon, 2013; Liu and Perez, 2017; Jang et al., 2016; Shi et al., 2016; Vodolán et al., 2017; Yu et al., 2015; Henderson et al., 2014; Zilka and Jurcícek, 2015; Mrksic et al., 2017; Xu and Hu, 2018; Zhong et al., 2018; Ren et al., 2018). Some of these models solve the DST problem by incorporating a natural language understanding (NLU) module (Wang and Lemon, 2013) or by jointly modeling NLU and DST (Henderson et al., 2014; Zilka and Jurcícek, 2015), which rely on hand-crafted features or delexicalisation features.…”
Section: Related Work (mentioning)
Confidence: 99%
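The "delexicalisation features" mentioned in this excerpt can be shown with a toy example. The sketch below is a hypothetical illustration: it assumes a tiny hand-written ontology and a `delexicalise` helper that replaces slot names and known slot values in the utterance with generic tags, so a single model can generalise across values. None of these names come from the cited works.

```python
# Toy ontology mapping slots to their possible values (illustrative only).
ONTOLOGY = {
    "food": ["italian", "chinese", "indian"],
    "area": ["north", "south", "centre"],
}


def delexicalise(utterance: str) -> str:
    """Replace slot names and known slot values with placeholder tags."""
    out = []
    for tok in utterance.lower().split():
        replaced = False
        for slot, values in ONTOLOGY.items():
            if tok == slot:
                out.append("<slot>")
                replaced = True
                break
            if tok in values:
                out.append(f"<{slot}_value>")
                replaced = True
                break
        if not replaced:
            out.append(tok)
    return " ".join(out)


print(delexicalise("I want cheap italian food in the north"))
# -> "i want cheap <food_value> <slot> in the <area_value>"
```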
“…Traditional works deal with the DST task using Spoken Language Understanding (SLU), including (Thomson and Young, 2010; Wang and Lemon, 2013; Liu and Perez, 2017; Jang et al., 2016; Shi et al., 2016; Vodolán et al., 2017). Joint modeling of SLU and DST (Henderson et al., 2014c; Zilka and Jurcícek, 2015; Mrksic et al., 2015) has also been presented and shown to outperform the separate SLU models.…”
Section: Related Work (mentioning)
Confidence: 99%
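The joint SLU+DST idea contrasted here can also be sketched in miniature. The following is a toy, numpy-only illustration under stated assumptions, not any cited model: instead of a separate SLU step, a recurrent tracker consumes a bag-of-words vector of each (delexicalised) user turn, carries a hidden state across turns, and reads a per-value belief directly out of that state. All sizes, weights, and names (`track_turn`) are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

VOCAB_SIZE, HIDDEN, N_VALUES = 50, 16, 3        # toy dimensions
W_in = rng.normal(scale=0.1, size=(HIDDEN, VOCAB_SIZE))
W_h = rng.normal(scale=0.1, size=(HIDDEN, HIDDEN))
W_out = rng.normal(scale=0.1, size=(N_VALUES, HIDDEN))


def softmax(x: np.ndarray) -> np.ndarray:
    e = np.exp(x - x.max())
    return e / e.sum()


def track_turn(h_prev: np.ndarray, turn_bow: np.ndarray):
    """One joint tracking step: update hidden state, read out the belief."""
    h = np.tanh(W_in @ turn_bow + W_h @ h_prev)
    belief = softmax(W_out @ h)                  # distribution over one slot's values
    return h, belief


# Two dummy user turns: the hidden state (not an explicit SLU output)
# carries the dialogue context from turn to turn.
h = np.zeros(HIDDEN)
for _ in range(2):
    bow = rng.integers(0, 2, size=VOCAB_SIZE).astype(float)
    h, belief = track_turn(h, bow)
print(belief)
```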