Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
DOI: 10.18653/v1/2021.emnlp-main.174
Intention Reasoning Network for Multi-Domain End-to-end Task-Oriented Dialogue

Cited by 3 publications (3 citation statements)
References 22 publications
“…Natural language understanding (NLU) is an important component of dialogue systems, including the intent classification task and the slot filling task. There are a lot of NLU methods (Goo et al. 2018; Wang, Shen, and Jin 2018; Liu et al. 2020; Ma et al. 2021b; Rosenbaum et al. 2022; Zheng et al. 2023; Ma et al. 2022). Qin et al. directly took the output of the intent task as the input to the slot task.…”
Section: Related Work (Natural Language Understanding)
confidence: 99%
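The pipeline this statement attributes to Qin et al., feeding the intent prediction into the slot-filling task, can be sketched with toy weights (a minimal numpy illustration; the dimensions, pooling, and one-hot intent feature are hypothetical choices, not the cited models):

```python
import numpy as np

rng = np.random.default_rng(0)

T, H = 5, 8                        # sequence length, hidden size (toy values)
N_INTENTS, N_SLOTS = 3, 6

hidden = rng.normal(size=(T, H))   # token encodings from some upstream encoder

# Intent classification over the mean-pooled sequence representation.
W_int = rng.normal(size=(H, N_INTENTS))
intent = int((hidden.mean(axis=0) @ W_int).argmax())

# Slot filling conditioned on the predicted intent: the intent output
# is appended as an extra input feature for the slot task.
intent_feat = np.tile(np.eye(N_INTENTS)[intent], (T, 1))
W_slot = rng.normal(size=(H + N_INTENTS, N_SLOTS))
slot_tags = (np.concatenate([hidden, intent_feat], axis=1) @ W_slot).argmax(axis=1)
```

Conditioning the tagger on the intent lets slot decisions stay consistent with the predicted goal of the utterance.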
“…Other works treat KB and dialogue history equally as triplet memories (Madotto et al., 2018; Wu et al., 2019; Chen et al., 2019b; He et al., 2020a; Qin et al., 2021a). Memory networks (Sukhbaatar et al., 2015) have been applied to model the dependency between related entity triplets in the KB (Bordes et al., 2017; Wang et al., 2020) and to improve domain scalability (Qin et al., 2020b; Ma et al., 2021). To improve the response quality with triplet KB representations, Raghu et al. (2019) proposed BOSS-NET to disentangle NLG and KB retrieval, and Hong et al. (2020) generated responses through a template-filling decoder.…”
Section: Triplet Representation
confidence: 99%
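The triplet-memory idea quoted above, attending over (subject, relation, object) entries with a query vector in the style of end-to-end memory networks, can be sketched as follows (random embeddings stand in for learned ones; the KB entries and dimensions are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy KB of (subject, relation, object) triplets.
triplets = [("hotel_a", "area", "north"),
            ("hotel_a", "price", "cheap"),
            ("rest_b", "food", "thai")]
D = 8
embed = {tok: rng.normal(size=D) for t in triplets for tok in t}

# Each memory slot is the bag-of-tokens sum of its triplet, as in
# memory networks applied to triplet memories.
memory = np.stack([sum(embed[tok] for tok in t) for t in triplets])

query = rng.normal(size=D)            # dialogue-state query vector
scores = memory @ query
attn = np.exp(scores - scores.max())
attn /= attn.sum()                    # softmax attention over KB entries
readout = attn @ memory               # attended KB summary for the decoder
```

The attended readout is what a decoder would consume to ground its response in the KB.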
“…Most of the previous work develops personalized (Zhang et al., 2018; Zheng et al., 2020; Song et al., 2021; Chen et al., 2023a), emotional (Ghosal et al., 2020; Zheng et al., 2023a; Deng et al., 2023c; Zheng et al., 2023b), and empathetic (Rashkin et al., 2019; Sabour et al., 2022) dialogue systems in isolation, rather than seamlessly blending them all into one cohesive conversational flow (Smith et al., 2020). A common approach is to predict the emotion or persona from a pre-defined set and generate the response in a multi-task manner (Ma et al., 2021; Sabour et al., 2022). Besides that, a lot of work notices these linguistic cues underneath text by directly predicting them independently as a classification task (Barriere et al., 2022; Ghosh et al., 2022).…”
Section: Related Work
confidence: 99%
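The multi-task recipe this statement describes, classifying an emotion or persona label from a pre-defined set while generating the response, amounts to a weighted sum of two cross-entropy terms (a toy numpy sketch; the label sets, sizes, and weight `alpha` are illustrative assumptions, not the cited systems):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(2)
H, N_EMOTIONS, V = 8, 4, 10        # hidden size, emotion labels, toy vocab

state = rng.normal(size=H)         # shared dialogue representation

# Head 1: emotion/persona classification over the pre-defined label set.
emo_probs = softmax(state @ rng.normal(size=(H, N_EMOTIONS)))
# Head 2: next-token distribution for response generation.
gen_probs = softmax(state @ rng.normal(size=(H, V)))

# Multi-task objective: weighted sum of both cross-entropy losses.
gold_emotion, gold_token, alpha = 1, 3, 0.5
loss = -(alpha * np.log(emo_probs[gold_emotion])
         + (1 - alpha) * np.log(gen_probs[gold_token]))
```

Sharing `state` across both heads is what lets the predicted emotion or persona steer the generated response.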