Proceedings of the 20th Annual SIGdial Meeting on Discourse and Dialogue 2019
DOI: 10.18653/v1/w19-5931

Hierarchical Multi-Task Natural Language Understanding for Cross-domain Conversational AI: HERMIT NLU

Abstract: We present a new neural architecture for wide-coverage Natural Language Understanding in Spoken Dialogue Systems. We develop a hierarchical multi-task architecture, which delivers a multi-layer representation of sentence meaning (i.e., Dialogue Acts and Frame-like structures). The architecture is a hierarchy of self-attention mechanisms and BiLSTM encoders followed by CRF tagging layers. We describe a variety of experiments, showing that our approach obtains promising results on a dataset annotated with Dialogue…
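The architecture described in the abstract lends itself to a compact illustration: shared token embeddings feed a stack of layers, each consisting of self-attention, a BiLSTM encoder, and a CRF tagging head, with each layer's hidden states passed on to the next level of the hierarchy (e.g., dialogue acts, then frames, then frame elements). The sketch below is not the authors' implementation; the layer sizes, the use of nn.MultiheadAttention as a stand-in for the paper's self-attention, and the pytorch-crf package for the CRF heads are all assumptions made for illustration.

```python
# Minimal sketch of a hierarchical multi-task tagger in the spirit of
# HERMIT NLU (Vanzo et al., 2019). NOT the authors' code: layer sizes,
# nn.MultiheadAttention (in place of the paper's self-attention), and
# the pytorch-crf dependency are illustrative assumptions.
import torch
import torch.nn as nn
from torchcrf import CRF  # pip install pytorch-crf


class TaggingLayer(nn.Module):
    """One level of the hierarchy: self-attention -> BiLSTM -> CRF."""

    def __init__(self, input_dim, hidden_dim, num_tags, num_heads=4):
        super().__init__()
        # input_dim must be divisible by num_heads for MultiheadAttention.
        self.attn = nn.MultiheadAttention(input_dim, num_heads, batch_first=True)
        self.bilstm = nn.LSTM(input_dim, hidden_dim,
                              bidirectional=True, batch_first=True)
        self.emit = nn.Linear(2 * hidden_dim, num_tags)
        self.crf = CRF(num_tags, batch_first=True)

    def forward(self, x, tags=None, mask=None):
        a, _ = self.attn(x, x, x)      # token-level self-attention
        h, _ = self.bilstm(a)          # contextual encoding, (B, T, 2*hidden)
        emissions = self.emit(h)
        # CRF returns a log-likelihood; negate it to obtain a loss.
        loss = -self.crf(emissions, tags, mask=mask) if tags is not None else None
        return h, loss                 # h feeds the next layer of the hierarchy


class HierarchicalNLU(nn.Module):
    """Three stacked tagging layers, trained with a joint summed loss."""

    def __init__(self, emb_dim, hidden_dim, tagset_sizes):
        super().__init__()
        # First layer reads embeddings; later layers read BiLSTM outputs.
        dims = [emb_dim, 2 * hidden_dim, 2 * hidden_dim]
        self.layers = nn.ModuleList(
            TaggingLayer(d, hidden_dim, n) for d, n in zip(dims, tagset_sizes)
        )

    def forward(self, embeddings, tag_seqs=None, mask=None):
        x, total_loss = embeddings, 0.0
        for i, layer in enumerate(self.layers):
            tags = tag_seqs[i] if tag_seqs is not None else None
            x, loss = layer(x, tags, mask)
            if loss is not None:
                total_loss = total_loss + loss  # joint multi-task objective
        return total_loss


# Hypothetical usage: batch of 2 sentences, 12 tokens, 128-dim embeddings,
# three tag sets of illustrative sizes (decoding via CRF.decode is omitted).
model = HierarchicalNLU(emb_dim=128, hidden_dim=64, tagset_sizes=[8, 20, 40])
emb = torch.randn(2, 12, 128)
tags = [torch.zeros(2, 12, dtype=torch.long) for _ in range(3)]
mask = torch.ones(2, 12, dtype=torch.bool)
loss = model(emb, tags, mask)
```

Summing the per-layer CRF losses mirrors the joint multi-task training the abstract describes, and passing each layer's encoder states to the next is one plausible reading of the "hierarchy" of encoders; the paper itself should be consulted for the exact wiring.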

Cited by 22 publications (10 citation statements); References 28 publications.
“…System Descriptions: We evaluate SLURP against two state-of-the-art NLU models: HerMiT (Vanzo et al., 2019) and SF-ID (E et al., 2019). Both systems achieved state-of-the-art results on the NLU Benchmark (Liu et al., 2019) and on ATIS/Snips respectively.…”
Section: Semantic Evaluation
confidence: 99%
“…Joint training of Intent Recognition and Entity Extraction models has been explored recently (Zhang and Wang, 2016; Liu and Lane, 2016; Goo et al., 2018; Varghese et al., 2020). Several hierarchical multi-task architectures have been proposed for these joint NLU approaches (Zhou et al., 2016; Wen et al., 2018; Okur et al., 2019; Vanzo et al., 2019), a few of them in a multimodal context (Gu et al., 2017; Okur et al., 2020). Vaswani et al. (2017) proposed the Transformer as a novel neural network architecture based entirely on attention mechanisms (Bahdanau et al., 2015).…”
Section: Related Work
confidence: 99%